matfaust::fact Namespace Reference

The matfaust factorization module. More...

Functions

function butterfly (M, varargin)
 Factorizes the matrix M according to a butterfly support and optionally a permutation using the algorithms described in [1]. More...
 
function eigtj (M, varargin)
 Performs an approximate eigendecomposition of M and returns the eigenvalues in D along with the corresponding normalized right eigenvectors (as the columns of the Faust object V). More...
 
function hierarchical (M, p, varargin)
 Factorizes the matrix M with Hierarchical Factorization using the parameters set in p. More...
 
function hierarchical_mhtp (M, hierarchical_p, mhtp_p, varargin)
 Runs the MHTP-PALM4MSA hierarchical factorization algorithm on the matrix M. More...
 
function palm4msa (M, p, varargin)
 Factorizes the matrix M with the Palm4MSA algorithm using the parameters set in p. More...
 
function palm4msa_mhtp (M, palm4msa_p, mhtp_p, varargin)
 Runs the MHTP-PALM4MSA algorithm to factorize the matrix M. More...
 
function pinvtj (M, varargin)
 Computes the pseudoinverse of M using svdtj. More...
 
function svdtj (M, varargin)
 Performs an approximate singular value decomposition and returns the left and right singular vectors as Faust transforms. More...
 

Detailed Description

The matfaust factorization module.

This module gives access to the main factorization algorithms of FAuST. These algorithms can factorize a dense matrix into a sparse product (i.e. a Faust object). The available algorithms are listed above and documented in detail below.

Function Documentation

◆ butterfly()

function matfaust::fact::butterfly (M, varargin)

Factorizes the matrix M according to a butterfly support and optionally a permutation using the algorithms described in [1].

The result is a Faust F of the form BP where B has a butterfly structure and P is a permutation matrix determined by the optional parameter 'perm'.

Parameters
M: the matrix to factorize. It can be real (single or double, the class might have a large impact on performance) or complex. M must be square and its dimension must be a power of two.
'type', str: the type of factorization: 'right'ward, 'left'ward or 'bbtree'. More precisely: if 'left' (resp. 'right') is used then at each stage of the factorization the leftmost factor (resp. the rightmost factor) is split in two. If 'bbtree' is used then the matrix is factorized according to a Balanced Binary Tree (which is faster as it allows parallelization).
'perm', value: five kinds of values are possible for this argument.
  1. perm is an array of column indices of the permutation matrix P which is such that the returned Faust is F = B * P where B is the Faust butterfly approximation of M*P.'. If the array of indices is not a valid permutation the behaviour is undefined (however an invalid size or an out-of-bound index raises an exception).
  2. perm is a cell array of arrays of permutation column indices as defined in 1. In that case, all permutations passed to the function are used as explained in 1, each one producing a Faust; the best one (that is, the best approximation of M in the Frobenius norm) is kept and returned by butterfly.
  3. perm is 'default_8': this is a particular case of 2. Eight default permutations are used. For the definition of those permutations please refer to [2].
  4. perm is 'bitrev': in that case the permutation is the bit-reversal permutation (cf. matfaust.tools.bitrev_perm).
  5. By default this argument is empty and no permutation is used (this is equivalent to using the identity permutation matrix in 1).
'diag_opt', bool: if true then the returned Faust is optimized using matfaust.opt_butterfly_faust.
'mul_perm', bool: decides whether the permutation is multiplied into the rightmost butterfly factor (mul_perm=true) or left apart as the rightmost factor of the Faust (mul_perm=false). It cannot be true if diag_opt is true (an error is raised in that case). By default, mul_perm is not set, which implies that mul_perm is true iff diag_opt is false.
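As a complement to the parameter descriptions above, here is a minimal sketch (not part of the original examples, and with no claim about the resulting approximation quality) exercising the 'type', 'mul_perm' and 'diag_opt' options:

>> import matfaust.wht
>> import matfaust.fact.butterfly
>> H = full(wht(16));
>> % left-ward and right-ward factorizations of the same matrix
>> FL = butterfly(H, 'type', 'left');
>> FR = butterfly(H, 'type', 'right');
>> % keep the bit-reversal permutation apart as the rightmost factor of the Faust
>> FB = butterfly(H, 'type', 'bbtree', 'perm', 'bitrev', 'mul_perm', false);
>> % or ask for the matfaust.opt_butterfly_faust optimization of the returned Faust
>> FD = butterfly(H, 'type', 'bbtree', 'perm', 'bitrev', 'diag_opt', true);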
Note
Below is an example of how to create a permutation matrix from a list of permutation indices (as defined by the perm argument) and, conversely, how to convert a permutation matrix to a list of permutation indices.
>> rng(42)
>> I = randperm(8) % random permutation as a list of indices
I =
7 6 5 1 4 3 8 2
>> % convert a permutation given as a list of indices to a sparse permutation matrix P
>> n = numel(I);
>> P = sparse(I, 1:n, 1);
>> full(P)
ans =
0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 1
0 0 0 0 0 1 0 0
0 0 0 0 1 0 0 0
0 0 1 0 0 0 0 0
0 1 0 0 0 0 0 0
1 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0
>> [I_, ~, ~] = find(P);
>> I_ = I_.'
I_ =
7 6 5 1 4 3 8 2
>> all(I_ == I)
ans =
logical
1
Return values
F: the Faust which is an approximation of M according to a butterfly support.

Examples:

>> import matfaust.wht
>> import matfaust.dft
>> import matfaust.fact.butterfly
>> H = full(wht(32));
>> F = butterfly(H, 'type', 'bbtree');
>> err = norm(F-H)/norm(H) < 1e-14
err =
logical
1
>> % it works with dft too!
>> % all you need is to specify the bit-reversal permutation
>> % since the Discrete Fourier Transform is the product of butterfly factors with this particular permutation
>> DFT = full(dft(32));
>> F = butterfly(DFT, 'type', 'bbtree', 'perm', 'bitrev');
>> err = norm(full(F)-DFT)/norm(DFT) < 1e-14
err =
logical
1
>>

Use butterfly with simple permutations:

>> M = rand(4, 4);
>> % without any permutation
>> F1 = butterfly(M, 'type', 'bbtree');
>> % which is equivalent to using the identity permutation
>> p = 1:4;
>> F2 = butterfly(M, 'type', 'bbtree', 'perm', p);
>> % compute the relative diff
>> norm(F2-F1)/norm(F1)
ans =
0
>> % then try another permutation
>> p2 = [2, 1, 4, 3];
>> F3 = butterfly(M, 'type', 'bbtree', 'perm', p2);

Use butterfly with a permutation defined by a list of indices J:

>> J = 32:-1:1;
>> F = butterfly(H, 'type', 'bbtree', 'perm', J)
F =
Faust size 32x32, density 0.3125, nnz_sum 320, 5 factor(s):
- FACTOR 0 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 1 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 2 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 3 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 4 (double) SPARSE, size 32x32, density 0.0625, nnz 64
>> % this is equivalent to passing a list containing a single permutation:
>> % F = butterfly(H, 'type', 'bbtree', 'perm', {J})

Use butterfly with successive permutations J1 and J2 and keep the best approximation:

>> J1 = J;
>> J2 = randperm(32);
>> F = butterfly(H, 'type', 'bbtree', 'perm', {J1, J2})
F =
Faust size 32x32, density 0.3125, nnz_sum 320, 5 factor(s):
- FACTOR 0 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 1 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 2 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 3 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 4 (double) SPARSE, size 32x32, density 0.0625, nnz 64
>>

Or use the 8 default permutations (keeping the best resulting Faust approximation):

>> F = butterfly(H, 'type', 'bbtree', 'perm', 'default_8')
F =
Faust size 32x32, density 0.3125, nnz_sum 320, 5 factor(s):
- FACTOR 0 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 1 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 2 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 3 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 4 (double) SPARSE, size 32x32, density 0.0625, nnz 64
>>

References:
[1] Leon Zheng, Elisa Riccietti, and Remi Gribonval, Hierarchical Identifiability in Multi-layer Sparse Matrix Factorization
[2] T. Dao, A. Gu, M. Eichhorn, A. Rudra, and C. Re, “Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations,” in Proceedings of the 36th International Conference on Machine Learning. June 2019, pp. 1517–1527, PMLR

See also: matfaust.wht, matfaust.dft, matfaust.rand_butterfly

◆ eigtj()

function matfaust::fact::eigtj (M, varargin)

Performs an approximate eigendecomposition of M and returns the eigenvalues in D along with the corresponding normalized right eigenvectors (as the columns of the Faust object V).

The output is such that V*D*V' approximates M. V is a product of Givens rotations obtained by truncating the Jacobi algorithm.

The trade-off between accuracy and complexity of V can be set through the parameters nGivens and tol that define the targeted number of Givens rotations and targeted error.

Usage

    Primary examples of calls include:
    [V,D] = eigtj(M, 'nGivens', n) outputs V as a Faust object made of n elementary Givens rotations grouped into factors containing ideally m = floor(size(M,1)/2) such Givens rotations, and the diagonal entries of D are the approximate eigenvalues in ascending order.
    [V,D] = eigtj(M, 'tol', 0.01) same as above with n determined adaptively by a relative approximation error of 0.01.
    [V,D] = eigtj(M, 'tol', 0.01, 'relerr', false) same as above with an absolute approximation error.
    [V,D] = eigtj(M, 'nGivens', n, 'tol', 0.01) same as above with the number of elementary Givens rotations bounded by n even if the targeted approximation error is not achieved.
    [V,D] = eigtj(M, 'nGivens', n, 'tol', 0.01, 'nGivens_per_fac', t) same as above with (up to) t Givens rotations per factor.
    [V,D] = eigtj(M, 'nGivens', n, 'order', 'descend') same as above where the diagonal entries of D are the approximate eigenvalues in descending order (and with the columns of V permuted accordingly).
    eigtj(M, 'nGivens', n, 'nGivens_per_fac', t, 'tol', 0.01, 'relerr', true) uses a stopping criterion based on the relative error norm(V*D*V'-M, 'fro')/norm(M, 'fro'). This criterion applies concurrently with nGivens (here n).

Parameters
M: the matrix to diagonalize. Must be real and symmetric, or complex Hermitian. Can be in dense or sparse format. The class(M) value can be double or single (only if isreal(M) is true).
'nGivens', integer: [optional if tol is set] targeted number of Givens rotations. The number of rotations per factor of V is defined by nGivens_per_fac.
'tol', number: [optional if nGivens is set] the tolerance error at which the algorithm stops. The default value is zero, so that stopping is based on reaching the targeted nGivens.
'err_period', int: the period, in number of factors of V, at which the error is compared to tol (reducing the period spares some factors but slightly increases the computational cost because the error is computed more often).
'order', char: [optional, default is 'ascend'] order of eigenvalues; possible choices are 'ascend', 'descend' or 'undef' (to avoid a sorting operation and save some time).
'nGivens_per_fac', integer: [optional, default is floor(size(M, 1)/2)] targeted number of Givens rotations per factor of V. Must be an integer between 1 and floor(size(M, 1)/2).
'relerr', bool: [optional, default is true] the type of error used as stopping criterion: true for the relative error norm(V*D*V'-M, 'fro')/norm(M, 'fro'), false for the absolute error norm(V*D*V'-M, 'fro').
'enable_large_Faust', bool: [optional, default is false] if true, it allows computing a transform even when its complexity makes it not worthwhile compared to using M directly. Otherwise, by default, an exception is raised before the algorithm starts.
'verbosity', integer: [optional] verbosity level. The greater the value, the more information is displayed. It can be helpful to understand, for example, why the algorithm stopped before reaching the tol error or the targeted number of Givens rotations (nGivens).
Return values
[V,D]
  • V the Faust object representing the approximate eigenvector transform. The column V(:, i) is the eigenvector corresponding to the eigenvalue D(i,i).
  • D the sparse (real) diagonal matrix of the approximate eigenvalues (by default in ascending order along the diagonal).
Note
  • When 'nGivens' and 'tol' are used simultaneously, the number of Givens rotations in V may be smaller than specified by 'nGivens' if the error criterion is met first, and the achieved error may be larger than specified if 'nGivens' is reached first during the iterations of the truncated Jacobi algorithm (see the sketch after the example below).
  • When nGivens_per_fac > 1, all factors have exactly nGivens_per_fac rotations except the leftmost one, which may have fewer if the total number of Givens rotations is not a multiple of nGivens_per_fac.

Example

>> import matfaust.fact.eigtj
>> % get a Laplacian to diagonalize
>> load('Laplacian_256_community.mat')
>> % do it
>> [Uhat, Dhat] = eigtj(Lap, 'nGivens', size(Lap,1)*100, 'nGivens_per_fac', size(Lap, 1)/2, 'enable_large_Faust', true);
>> % Uhat is the Fourier matrix/eigenvectors approximation as a Faust (200 factors)
>> % Dhat the eigenvalues diagonal matrix approx.
>> % Computing the decomposition of the same matrix but targeting a precise relative error
>> [Uhat2, Dhat2] = eigtj(Lap, 'tol', 0.01);
>> assert(norm(Lap-Uhat2*Dhat2*Uhat2', 'fro')/norm(Lap, 'fro') < .011)
>> % and then asking for an absolute error
>> [Uhat3, Dhat3] = eigtj(Lap, 'tol', 0.1, 'relerr', false);
>> assert(norm(Lap-Uhat3*Dhat3*Uhat3', 'fro') < .11)
>> % now recompute Uhat2, Dhat2 but asking a descending order of eigenvalues
>> [Uhat4, Dhat4] = eigtj(Lap, 'tol', 0.01, 'order', 'descend');
>> assert(all(all(Dhat4(end:-1:1,end:-1:1) == Dhat2(1:end,1:end))))
>> % and now with no sort
>> [Uhat5, Dhat5] = eigtj(Lap, 'tol', 0.01, 'order', 'undef');
>> assert(all(sort(diag(Dhat5)) == diag(Dhat2)))
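Building on the note above about combining 'nGivens' and 'tol', here is a minimal sketch (not part of the original examples; it uses an arbitrary random symmetric matrix) showing that whichever criterion is met first stops the algorithm:

>> import matfaust.fact.eigtj
>> A = rand(64); A = A + A'; % arbitrary symmetric test matrix
>> [V, D] = eigtj(A, 'nGivens', 2048, 'tol', 1e-2, 'enable_large_Faust', true);
>> % at most 2048 rotations grouped by floor(64/2) = 32 per factor,
>> % hence at most 2048/32 = 64 factors; fewer if tol was reached first
>> nf = numfactors(V);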

References:

  • [1] Le Magoarou L., Gribonval R. and Tremblay N., "Approximate fast graph Fourier transforms via multi-layer sparse approximations", IEEE Transactions on Signal and Information Processing over Networks 2018, 4(2), pp 407-420 https://hal.inria.fr/hal-01416110

See also fact.svdtj

◆ hierarchical()

function matfaust::fact::hierarchical (M, p, varargin)

Factorizes the matrix M with Hierarchical Factorization using the parameters set in p.

Parameters
M: the dense matrix to factorize. The class(M) value can be double or single (only if isreal(M) is true). This class has a major impact on performance.
p: a set of factorization parameters. It might be a fully defined instance of parameters (matfaust.factparams.ParamsHierarchical) or a simplified expression which designates a pre-defined parametrization:
  • 'squaremat' to use pre-defined parameters typically used to factorize a Hadamard square matrix whose order is a power of two (see matfaust.demo.hadamard).
  • {'rectmat', j, k, s} to use pre-defined parameters used for instance in the factorization of the MEG matrix, which is a rectangular matrix of size m*n such that m < n (see matfaust.demo.bsl); j is the number of factors, k the sparsity of the main factor's columns, and s the sparsity of rows for all other factors except the residuum (that is the first factor here because the factorization is made toward the left, i.e. is_side_fact_left == true; cf. matfaust.factparams.ParamsHierarchical). The residuum has a sparsity of P*rho^(num_facts-1).
    By default, rho == .8 and P == 1.4. It is possible to set custom values, for example with p = {'rectmat', j, k, s, 'rho', .4, 'P', .7} (a sketch of this custom parameterization follows the return values below).
    The sparsity here is the number of non-zero elements.
'backend', int: (optional) the backend to use (the C++ implementation). Must be 2016 (the default) or 2020 (which should be faster for most of the factorizations).
'gpu', bool: (optional) set to true to execute the algorithm using the GPU implementation. This option is only available when backend==2020.
Note
- If the backend parameter is 2020 then, regardless of the StoppingCriterion-s defined in p, it is possible to stop any internal call to PALM4MSA manually at any iteration with the key combination CTRL-C. The last Faust computed in the PALM4MSA instance will be used to continue the hierarchical factorization. A typical use case is when the verbose mode is enabled and you see that the error does not change anymore, or only slightly; you might then stop the iterations by typing CTRL-C.
- The fully defined parameters (ParamsHierarchical instance) used/generated by the function are available in the returned result (so one can consult what the simplified parameterizations precisely mean and possibly adjust the attributes to factorize again).
- This function has its shorthand matfaust.faust_fact(). For convenience you might use it like this:
import matfaust.*
F = faust_fact(M, p) % equiv. to matfaust.fact.hierarchical(M, p)
Return values
F: the Faust object result of the factorization.
[F,lambda] = hierarchical(M, p) when optionally getting lambda (scale).
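As mentioned in the description of the 'rectmat' simplified parameterization, the rho and P values can be customized. The following minimal sketch (not part of the original examples; it uses an arbitrary random rectangular matrix in place of the MEG matrix and may take some time to run) shows how to pass them:

>> import matfaust.fact.hierarchical
>> M = rand(204, 1024); % arbitrary rectangular matrix with m < n
>> F = hierarchical(M, {'rectmat', 9, 10, 8, 'rho', .4, 'P', .7}, 'backend', 2020);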

Example 1: Fully Defined Parameters for a Random Matrix Factorization

>> import matfaust.factparams.*
>> import matfaust.fact.hierarchical
>> M = rand(500, 32);
>> fact_cons = ConstraintList('splin', 5, 500, 32, 'sp', 96, 32, 32, 'sp', 96, 32, 32);
>> res_cons = ConstraintList('normcol', 1, 32, 32, 'sp', 666, 32, 32, 'sp', 333, 32, 32);
>> % or alternatively you can use projectors-functors
>> % import matfaust.proj.*
>> % fact_cons = {splin([500,32], 5), sp([32,32], 96), sp([32,32], 96)}
>> % res_cons = {normcol([32,32], 1), sp([32,32], 666), sp([32,32], 333)}
>> stop_crit = StoppingCriterion(200);
>> stop_crit2 = StoppingCriterion(200);
>> params = ParamsHierarchical(fact_cons, res_cons, stop_crit, stop_crit2, 'is_update_way_R2L', false, 'init_lambda', 1.0);
>> F = hierarchical(M, params, 'backend', 2016)
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 1/3
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 2/3
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 3/3
F =
Faust size 500x32, density 0.189063, nnz_sum 3025, 4 factor(s):
- FACTOR 0 (double) SPARSE, size 500x32, density 0.15625, nnz 2500
- FACTOR 1 (double) SPARSE, size 32x32, density 0.09375, nnz 96
- FACTOR 2 (double) SPARSE, size 32x32, density 0.09375, nnz 96
- FACTOR 3 (double) SPARSE, size 32x32, density 0.325195, nnz 333
>>

Example 2: Simplified Parameters for Hadamard Factorization

>> import matfaust.*
>> import matfaust.fact.hierarchical
>> % generate a Hadamard Faust of size 32x32
>> FH = wht(32);
>> H = full(FH); % the full matrix version
>> % factorize it
>> FH2 = hierarchical(H, 'hadamard')
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 1/4
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 2/4
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 3/4
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 4/4
FH2 =
Faust size 32x32, density 0.3125, nnz_sum 320, 5 factor(s):
- FACTOR 0 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 1 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 2 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 3 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 4 (double) SPARSE, size 32x32, density 0.0625, nnz 64
>> % test the relative error
>> norm(FH-FH2, 'fro')/norm(FH, 'fro') < 1e-15 % the result is about 1e-16, the factorization is accurate % doctest: +ELLIPSIS
ans =
logical
1
>>

Example 3: Simplified Parameters for a Rectangular Matrix Factorization (the BSL demo MEG matrix)

>> % in a matlab terminal
>> import matfaust.*
>> import matfaust.fact.hierarchical
>> load('matrix_MEG.mat')
>> MEG = matrix;
>> num_facts = 9;
>> k = 10;
>> s = 8;
>> MEG16 = hierarchical(MEG, {'rectmat', num_facts, k, s}) % too long for doctest, % doctest: +SKIP
MEG16 =
Faust size 204x8193, density 0.0631655, nnz_sum 105573, 9 factor(s):
- FACTOR 0 (double) SPARSE, size 204x204, density 0.293613, nnz 12219
- FACTOR 1 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 2 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 3 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 4 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 5 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 6 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 7 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 8 (double) SPARSE, size 204x8193, density 0.0490196, nnz 81930
>> % verify the constraint k == 10, on column 5
>> fac9 = factors(MEG16,9); % doctest: +SKIP
>> numel(nonzeros(fac9(:,5))) % doctest: +SKIP
ans =
10
>> % now verify the s constraint is respected on MEG16 factor 2
>> numel(nonzeros(factors(MEG16, 2)))/size(MEG16,1) % doctest: +SKIP
ans =
8
>>


Example 4: Simplified Parameters for Discrete Fourier Transform Factorization

>> import matfaust.*
>> import matfaust.fact.hierarchical
>> % generate a DFT Faust of size 32x32
>> FDFT = dft(32);
>> DFT = full(FDFT); % the full matrix version
>> % factorize it
>> FDFT2 = hierarchical(DFT, 'dft');
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 1/5
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 2/5
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 3/5
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 4/5
Faust::HierarchicalFact<FPP,DEVICE,FPP2>::compute_facts : factorization 5/5
>> % test the relative error
>> norm(FDFT-FDFT2, 'fro')/norm(FDFT, 'fro') < 1e-5 % the result is about 1e-6, the factorization is accurate % doctest: +ELLIPSIS
ans =
logical
1
>> FDFT
FDFT =
Faust size 32x32, density 0.34375, nnz_sum 352, 6 factor(s):
- FACTOR 0 (complex) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 1 (complex) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 2 (complex) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 3 (complex) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 4 (complex) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 5 (complex) SPARSE, size 32x32, density 0.03125, nnz 32
>> FDFT2
FDFT2 =
Faust size 32x32, density 1.3125, nnz_sum 1344, 6 factor(s):
- FACTOR 0 (complex) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 1 (complex) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 2 (complex) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 3 (complex) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 4 (complex) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 5 (complex) SPARSE, size 32x32, density 1, nnz 1024

Example 5: Simplified Parameters for Hadamard Factorization without residual constraints

This factorization parameterization is the same as the one shown in 2., except that there are no constraints at all on the residual factors. See matfaust.factparams.ParamsHierarchicalNoResCons and matfaust.factparams.ParamsHierarchicalWHTNoResCons for more details.

>> import matfaust.wht
>> import matfaust.fact.hierarchical
>> % generate a Hadamard Faust of size 32x32
>> FH = wht(32);
>> H = full(FH); % the full matrix version
>> % factorize it
>> FH2 = hierarchical(H, 'hadamard_simple', 'backend', 2020);
>> % test the relative error
>> norm(H-full(FH2), 'fro')/norm(H, 'fro')
ans =
0
>> FH2
FH2 =
Faust size 32x32, density 0.3125, nnz_sum 320, 5 factor(s):
- FACTOR 0 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 1 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 2 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 3 (double) SPARSE, size 32x32, density 0.0625, nnz 64
- FACTOR 4 (double) SPARSE, size 32x32, density 0.0625, nnz 64

Example 6: Simplified Parameters for a Rectangular Matrix Factorization (the BSL demo MEG matrix) without residual constraints

The factorization parameterization shown here is the same as in 3., except that there is no constraint at all on the residual factors. See matfaust.factparams.ParamsHierarchicalNoResCons and matfaust.factparams.ParamsHierarchicalRectMatNoResCons for more details. In the example below the MEG matrix is factorized on the one hand according to the parameterization shown in 3. (aka "MEG") and on the other hand with the parameterization of interest here (aka "MEG_SIMPLE", with no residual constraints). The approximate accuracy is quite the same, so we can conclude that in this case (as in 5.) removing the residual constraints not only simplifies the parameterization of hierarchical PALM4MSA but is also as efficient.

>> import matfaust.fact.hierarchical
>> MEG = load('matrix_MEG.mat');
>> MEG = MEG.matrix.';
>> F1 = hierarchical(MEG, {'MEG', 5, 10, 8}, 'backend', 2020)
Faust::hierarchical: 1/4
Faust::hierarchical: 2/4
Faust::hierarchical: 3/4
Faust::hierarchical: 4/4
F1 =
Faust size 204x8193, density 0.0697972, nnz_sum 116657, 5 factor(s):
- FACTOR 0 (double) DENSE, size 204x204, density 0.716816, nnz 29831
- FACTOR 1 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 2 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 3 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 4 (double) SPARSE, size 204x8193, density 0.0490196, nnz 81930
>> F2 = hierarchical(MEG, {'MEG_SIMPLE', 5, 10, 8}, 'backend', 2020)
Faust::hierarchical: 1/4
Faust::hierarchical: 2/4
Faust::hierarchical: 3/4
Faust::hierarchical: 4/4
F2 =
Faust size 204x8193, density 0.0697972, nnz_sum 116657, 5 factor(s):
- FACTOR 0 (double) DENSE, size 204x204, density 0.716816, nnz 29831
- FACTOR 1 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 2 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 3 (double) SPARSE, size 204x204, density 0.0392157, nnz 1632
- FACTOR 4 (double) SPARSE, size 204x8193, density 0.0490196, nnz 81930
>> % compare the errors:
>> norm(MEG-full(F2), 'fro') / norm(MEG, 'fro')
ans =
0.1303
>> norm(MEG-full(F1), 'fro') / norm(MEG, 'fro')
ans =
0.1260
>>

See also matfaust.faust_fact, factparams.ParamsHierarchical, factparams.ParamsHierarchicalWHT, factparams.ParamsHierarchicalRectMat, factparams.ParamsHierarchicalDFT

◆ hierarchical_mhtp()

function matfaust::fact::hierarchical_mhtp (M, hierarchical_p, mhtp_p, varargin)

Runs the MHTP-PALM4MSA hierarchical factorization algorithm on the matrix M.

This algorithm uses MHTP-PALM4MSA (matfaust.fact.palm4msa_mhtp) instead of plain PALM4MSA as matfaust.fact.hierarchical does.

Parameters
M: the dense matrix to factorize.
hierarchical_p: a set of factorization parameters. See matfaust.fact.hierarchical.
mhtp_p: the matfaust.factparams.MHTPParams instance to define the MHTP algorithm parameters.
varargin: see matfaust.fact.hierarchical for the other parameters.
Return values
F: the Faust object result of the factorization.
[F,lambda] = hierarchical_mhtp(M, hierarchical_p, mhtp_p) when optionally getting lambda (scale).

Example

>> import matfaust.fact.hierarchical_mhtp
>> import matfaust.factparams.*
>> import matfaust.proj.*
>> M = rand(500,32);
>> fact_projs = { splin([500,32], 5), sp([32,32], 96), sp([32, 32], 96)};
>> res_projs = { normcol([32,32], 1), sp([32,32], 666), sp([32, 32], 333)};
>> stop_crit1 = StoppingCriterion(200);
>> stop_crit2 = StoppingCriterion(200);
>> % 50 iterations of MHTP will run every 100 iterations of PALM4MSA (each time PALM4MSA is called by the hierarchical algorithm)
>> mhtp_param = MHTPParams('num_its', 150, 'palm4msa_period', 100);
>> param = ParamsHierarchical(fact_projs, res_projs, stop_crit1, stop_crit2);
>> F = hierarchical_mhtp(M, param, mhtp_param)
F =
Faust size 500x32, density 0.189063, nnz_sum 3025, 4 factor(s):
- FACTOR 0 (double) SPARSE, size 500x32, density 0.15625, nnz 2500
- FACTOR 1 (double) SPARSE, size 32x32, density 0.09375, nnz 96
- FACTOR 2 (double) SPARSE, size 32x32, density 0.09375, nnz 96
- FACTOR 3 (double) SPARSE, size 32x32, density 0.325195, nnz 333
>>

◆ palm4msa()

function matfaust::fact::palm4msa (M, p, varargin)

Factorizes the matrix M with the Palm4MSA algorithm using the parameters set in p.

Parameters
M: the dense matrix to factorize. The class(M) value can be double or single (only if isreal(M) is true). This class has a major impact on performance.
p: the matfaust.factparams.ParamsPalm4MSA instance to define the algorithm parameters.
'backend', int: (optional) the backend to use (the C++ implementation). Must be 2016 (the default) or 2020 (which should be faster for most of the factorizations).
'gpu', bool: (optional) set to true to execute the algorithm using the GPU implementation. This option is only available when backend==2020.
Note
If the backend parameter is 2020 then, independently of the StoppingCriterion defined in p, it is possible to stop the algorithm manually at any iteration with the key combination CTRL-C. The last Faust computed in the factorization process will be returned. A typical use case is when the verbose mode is enabled and you see that the error does not change anymore, or only slightly; you might then stop the iterations by typing CTRL-C.
Return values
F: the Faust object result of the factorization.
[F,lambda] = palm4msa(M, p) when optionally getting lambda (scale).

Example

>> import matfaust.factparams.*
>> import matfaust.fact.palm4msa
>> M = rand(500, 32);
>> cons = ConstraintList('splin', 5, 500, 32, 'normcol', 1, 32, 32);
>> % or alternatively, using the projectors
>> % import matfaust.proj.*
>> % cons = {splin([500,32], 5), normcol([32,32], 1)};
>> stop_crit = StoppingCriterion(200);
>> params = ParamsPalm4MSA(cons, stop_crit, 'is_update_way_R2L', false, 'init_lambda', 1.0);
>> F = palm4msa(M, params, 'backend', 2016)
F =
Faust size 500x32, density 0.22025, nnz_sum 3524, 2 factor(s):
- FACTOR 0 (double) SPARSE, size 500x32, density 0.15625, nnz 2500
- FACTOR 1 (double) DENSE, size 32x32, density 1, nnz 1024
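As a complement (not part of the original example), here is a minimal sketch using the projector objects mentioned in the comments above, this time with the 2020 backend; it should behave similarly to the ConstraintList-based call:

>> import matfaust.factparams.*
>> import matfaust.fact.palm4msa
>> import matfaust.proj.*
>> M = rand(500, 32);
>> projs = {splin([500, 32], 5), normcol([32, 32], 1)};
>> stop_crit = StoppingCriterion(200);
>> params = ParamsPalm4MSA(projs, stop_crit);
>> F = palm4msa(M, params, 'backend', 2020);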

See also matfaust.factparams.ParamsPalm4msaWHT to factorize a Hadamard matrix using the SKPERM projector.

◆ palm4msa_mhtp()

function matfaust::fact::palm4msa_mhtp (M, palm4msa_p, mhtp_p, varargin)

Runs the MHTP-PALM4MSA algorithm to factorize the matrix M.

MHTP stands for Multilinear Hard Thresholding Pursuit. It is a generalization of the Bilinear HTP algorithm described in [1].

[1] Quoc-Tung Le, Rémi Gribonval. Structured Support Exploration For Multilayer Sparse Matrix Factorization. ICASSP 2021 - IEEE International Conference on Acoustics, Speech and Signal Processing, Jun 2021, Toronto, Ontario, Canada. pp. 1-5. hal-03132013

Parameters
M: the dense matrix to factorize.
palm4msa_p: the matfaust.factparams.ParamsPalm4MSA instance to define the PALM4MSA algorithm parameters.
mhtp_p: the matfaust.factparams.MHTPParams instance to define the MHTP algorithm parameters.
varargin: see matfaust.fact.palm4msa for the other parameters.
Return values
F: the Faust object result of the factorization.
[F,lambda] = palm4msa_mhtp(M, palm4msa_p, mhtp_p) when optionally getting lambda (scale).

Example

>> % in a matlab terminal
>> import matfaust.fact.palm4msa_mhtp
>> import matfaust.factparams.*
>> import matfaust.proj.*
>> M = rand(500,32);
>> projs = { splin([500,32], 5), normcol([32,32], 1.0)};
>> stop_crit = StoppingCriterion(200);
>> param = ParamsPalm4MSA(projs, stop_crit);
>> mhtp_param = MHTPParams('num_its', 60, 'palm4msa_period', 10);
>> G = palm4msa_mhtp(M, param, mhtp_param)
G =
Faust size 500x32, density 0.18225, nnz_sum 2916, 2 factor(s):
- FACTOR 0 (double) SPARSE, size 500x32, density 0.15625, nnz 2500
- FACTOR 1 (double) SPARSE, size 32x32, density 0.40625, nnz 416
>>

◆ pinvtj()

function matfaust::fact::pinvtj (M, varargin)

Computes the pseudoinverse of M using svdtj.

Parameters
M: the matrix whose pseudoinverse is computed.
'nGivens', int|matrix: see fact.svdtj
'tol', number: see fact.svdtj
'relerr', bool: see fact.svdtj
'nGivens_per_fac', int: see fact.svdtj
'enable_large_Faust', bool: see fact.svdtj
'err_period', int: see fact.svdtj
Return values
[V,Sp,Uh]: such that V*Sp*Uh is the approximation of M^+ with:
  • Sp: (sparse real diagonal matrix) the pseudoinverse of the matrix of the singular values.
  • V, Uh: (Faust objects) orthonormal transforms.

Example

>> import matfaust.fact.pinvtj
>> rng(42) % for reproducibility
>> M = rand(128, 64);
>> [V, Sp, Uh] = pinvtj(M, 'tol', 1.5e-2);
>> norm(V * Sp * Uh * M - eye(64)) / norm(eye(64))
ans =
0.0036
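As a complementary check (not part of the original example), the product of the three returned factors can be compared against MATLAB's built-in pinv; the relative difference should be small, up to the approximation error controlled by tol:

>> Mp = full(V * Sp * Uh); % dense approximation of the pseudoinverse
>> relerr = norm(Mp - pinv(M)) / norm(pinv(M)); % expected to be small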

◆ svdtj()

function matfaust::fact::svdtj (M, varargin)

Performs an approximate singular value decomposition and returns the left and right singular vectors as Faust transforms.

Note
This function is based on fact.eigtj, which relies on the truncated Jacobi algorithm, hence the 'tj' in the name. See the Explanations below the examples for further details on how svdtj is defined using eigtj. Note that svdtj is still experimental: there exist cases for which it won't converge (to an arbitrary precision, see the tol argument). Another important point is that svdtj needs to compute M*M' and M'*M, which is not particularly advisable when M is large and dense.

Usage

    Primary examples of calls include:
    [U,S,V] = svdtj(M, 'nGivens', n) outputs U,V as Faust objects made of n elementary Givens rotations grouped into factors. By default each factor gathers (up to) m = floor(size(M,1)/2) such rotations. By default the vector S contains the approximate singular values in any order.
    [U,S,V] = svdtj(M, 'tol', 0.01) same as above with n determined adaptively by a relative approximation error of 0.01.
    [U,S,V] = svdtj(M, 'tol', 0.01, 'relerr', false) same as above with an absolute approximation error.
    [U,S,V] = svdtj(M, 'nGivens', n, 'tol', 0.01) same as above with the number of elementary Givens rotations bounded by n even if the targeted approximation error is not achieved.
    [U,S,V] = svdtj(M, 'nGivens', n, 'tol', 0.01, 'nGivens_per_fac', t) same as above with (up to) t Givens rotations per factor.

Parameters
M: a real or complex, dense or sparse matrix. The class(M) value can be double or single (only if isreal(M) is true).
'nGivens', int|matrix: defines the number of Givens rotations that will be used at most to compute U and V. If it is an integer, it applies to both U and V. If it is a tuple of two integers, nGivens = [JU, JV], then JU is the limit number of rotations for U and JV the same for V. The nGivens argument is optional if tol is set but becomes mandatory otherwise.
'tol', number: the error target on U*S*V' against M. If error <= tol, the algorithm stops. See relerr below for the error formula. This argument is optional if nGivens is set, otherwise it becomes mandatory.
'relerr', bool: defines the type of error used to evaluate the stopping criterion defined by tol: true for a relative error ( \(\| U S V' - M\|_F \over \| M \|_F\)), otherwise an absolute error ( \(\| U S V' - M\|_F \)).
'nGivens_per_fac', int: the number of Givens rotations to set at most per factor of U and V. If this is an integer it is the same number for U and V. Otherwise, if it is a tuple of integers [tU, tV], tU is the number of Givens rotations per factor for U and tV the same for V. By default, this parameter is maximized for U and V, i.e. tU = size(M, 1) / 2, tV = size(M, 2) / 2.
'enable_large_Faust', bool: see fact.eigtj
'err_period', int: the period, in number of factors of U or V, at which the error is compared to tol (reducing the period spares some factors but slightly increases the computational cost because the error is computed more often).
Note
In order to speed up the error computation of the approximate Faust, the algorithm follows a two-step strategy:
  1. In the first iterations a rougher but less costly error is computed: \(\| M \|^2_F - \| S \|^2_F\).
  2. In the latest iterations the precise error (relative or absolute) indicated above (see the relerr argument) is computed to reach the targeted accuracy. Keep in mind that the err_period argument determines the frequency at which the error is calculated; you might decrease its value in order to obtain an error closer to what you asked for. As a last resort you can disable the two-step strategy by setting the related environment variable as follows (by default the value is '0', which means that the two-step strategy is used).
    setenv('SVDTJ_ALL_TRUE_ERR', '1')
Return values
[U,S,V]: such that U*S*V' is the approximation of M with:
  • S: (sparse real diagonal matrix) the singular values in any order.
  • U, V: (Faust objects) orthonormal transforms.

Example

>> import matfaust.fact.svdtj
>> rng(42) % for reproducibility
>> M = rand(16, 32);
>> % Factoring by specifying the number of Givens rotations
>> [U1, S1, V1] = svdtj(M, 'nGivens', 4096, 'enable_large_Faust', true);
>> % verify the approximate is accurate
>> norm(U1 * S1 * V1' - M) / norm(M)
ans =
3.1328e-15
>> % Specifying a different number of rotations for U and V
>> % Because U is smaller it should need fewer rotations
>> [U2, S2, V2] = svdtj(M, 'nGivens', [2400, 3200], 'enable_large_Faust', true);
>> norm(U2 * S2 * V2' - M) / norm(M)
ans =
3.1141e-15
>> % Factoring according to an approximate accuracy target
>> [U3, S3, V3] = svdtj(M, 'tol', 1e-12, 'enable_large_Faust', false);
Warning: eigtj stopped on U approximate which is too long to be worth it, set enable_large_Faust to true if you want to continue anyway
Warning: eigtj stopped on V approximate which is too long to be worth it, set enable_large_Faust to true if you want to continue anyway
>> norm(U3 * S3 * V3' - M) / norm(M)
ans =
1.5950e-13
>> % try with an absolute tolerance (the previous one was relative to the norm of M)
>> [U4, S4, V4] = svdtj(M, 'tol', 1e-6, 'relerr', false, 'enable_large_Faust', true);
>> % verify the absolute error is lower than 1e-6
>> norm(U4 * S4 * V4' - M)
ans =
3.6203e-14
>> % try a less accurate approximation to get fewer factors
>> [U5, S5, V5] = svdtj(M, 'nGivens', [256, 512], 'tol', 1e-1, 'enable_large_Faust', false);
>> norm(U5 * S5 * V5' - M) / norm(M)
ans =
0.0500
>> %%% Now let's see the lengths of the different U, V Fausts
>> length(V1) % should be 4096 / nGivens_per_fac = 4096 / (size(M, 2)/2) = 4096 / 16 = 256
ans =
256
>> length(U1) % should be 4096 / nGivens_per_fac = 4096 / (size(M, 1)/2) = 4096 / 8 = 512
ans =
100
>> % but it is not: svdtj stopped automatically on U1 because its error stopped improving
>> % (it can be verified with: 'verbosity', 1)
>> [length(U2), length(V2)]
ans =
100 200
>> [length(U3), length(V3)]
ans =
64 256
>> [length(U4), length(V4)]
ans =
100 200
>> % not surprisingly U5 and V5 use the fewest factors (the nGivens limits were the smallest and tol the loosest)
>> [length(U5), length(V5)]
ans =
32 32

Explanations:

If we call svdtj on the matrix M, it makes two internal calls to eigtj.

In MATLAB it would be:

  1. [W1, D1] = eigtj(M*M', varargin)
  2. [W2, D2] = eigtj(M'*M, varargin)

It gives the following equalities (ignoring the fact that eigtj computes approximations):

\[ W_1 D_1 W_1^* = M M^* \]

\[ W_2 D_2 W_2^* = M^* M \]

But because of the SVD \( M = USV^* \) we also have:

\[MM^* = U S V^* V S U^* = U S^2 U^* = W_1 D_1 W_1^*\]

\[M^* M = V S U^* U S V^* = V S^2 V^* = W_2 D_2 W_2^*\]

This allows identifying the left singular vectors of M with W1, and likewise the right singular vectors with W2.

To compute a consistent approximation of S we observe that U and V are orthogonal/unitary, hence \( S = U^* M V \), so we ignore the off-diagonal coefficients of the approximation and take \( S = diag(U^* M V) \approx diag(W_1^* M W_2)\).
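The identities above can be checked numerically. The following minimal sketch (not the library's internal implementation; it uses an arbitrary small random matrix and relies on the default ascending eigenvalue order of both eigtj calls) reconstructs M from the two eigendecompositions:

>> import matfaust.fact.eigtj
>> rng(42)
>> M = rand(16, 16); % arbitrary small test matrix
>> [W1, D1] = eigtj(M*M', 'tol', 1e-6, 'enable_large_Faust', true);
>> [W2, D2] = eigtj(M'*M, 'tol', 1e-6, 'enable_large_Faust', true);
>> S = diag(diag(W1' * M * W2)); % keep only the diagonal of W1' * M * W2
>> relerr = norm(W1 * S * W2' - M) / norm(M); % should be small if both calls converged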

See also fact.eigtj, fact.pinvtj
