torchnmf.trainer
torchnmf.trainer is a package implementing various parameter update algorithms for NMF, and is based on the same optimizer interface as torch.optim.
Taking an update step
Because the currently available trainers reevaluate the function multiple times, a closure function is required for each step. The closure should clear the gradients, compute the output (or, for some trainers, the loss), and return it.
for i in range(iterations):
    def closure():
        trainer.zero_grad()
        return target, model()
    trainer.step(closure)
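Below is a minimal end-to-end sketch of this loop. It assumes the NMF model class from torchnmf.nmf and a small random non-negative target, so the constructor call, shapes, and iteration count are illustrative rather than prescriptive:

import torch
from torchnmf.nmf import NMF              # assumed model class from torchnmf
from torchnmf.trainer import BetaMu

V = torch.rand(64, 100)                   # non-negative target matrix
model = NMF(V.shape, rank=8)              # assumed constructor: (target shape, rank)
trainer = BetaMu(model.parameters(), beta=1)

for i in range(200):
    def closure():
        trainer.zero_grad()
        # BetaMu's closure returns the target together with the model's output
        return V, model()
    trainer.step(closure)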
For torchnmf.trainer.SparsityProj, the closure should compute and return the loss:
for i in range(iterations):
    def closure():
        trainer.zero_grad()
        output = model()
        loss = loss_fn(output, target)
        return loss
    trainer.step(closure)
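A corresponding self-contained sketch for SparsityProj, again assuming the NMF model class from torchnmf.nmf; the mean-squared-error loss is only a placeholder for whatever differentiable loss you want to minimize:

import torch
import torch.nn.functional as F
from torchnmf.nmf import NMF              # assumed model class from torchnmf
from torchnmf.trainer import SparsityProj

V = torch.rand(64, 100)                   # non-negative target matrix
model = NMF(V.shape, rank=8)              # assumed constructor: (target shape, rank)
trainer = SparsityProj(model.parameters(), sparsity=0.7)

for i in range(200):
    def closure():
        trainer.zero_grad()
        output = model()
        loss = F.mse_loss(output, V)      # placeholder loss; any differentiable loss works
        return loss
    trainer.step(closure)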
Algorithms
- class torchnmf.trainer.BetaMu(params, beta=1, l1_reg=0, l2_reg=0, orthogonal=0)[source]
Implements the classic multiplicative updater for NMF models minimizing β-divergence.
Note
To use this optimizer, make sure not only that your model parameters are non-negative, but also that the gradients along the whole computational graph are always non-negative.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
beta (float, optional) – beta divergence to be minimized, measuring the distance between the target and the NMF model. Default: 1.
l1_reg (float, optional) – L1 regularization penalty. Default: 0.
l2_reg (float, optional) – L2 regularization penalty (weight decay). Default: 0.
orthogonal (float, optional) – orthogonal regularization penalty. Default: 0.
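Since params also accepts dicts defining parameter groups, the penalties can be set per group in the usual torch.optim fashion. A hedged sketch, assuming per-group overrides behave as in torch.optim and that the model exposes its factor matrices as parameters named W and H (both attribute names are assumptions about your model):

from torchnmf.trainer import BetaMu

trainer = BetaMu([
    {'params': [model.W]},                   # plain multiplicative updates for W
    {'params': [model.H], 'l1_reg': 0.1},    # L1 penalty applied to H only
], beta=2)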
- class torchnmf.trainer.SparsityProj(params, sparsity, dim=1, max_iter=10)[source]
Implements the sparseness-constrained gradient projection method described in Non-negative Matrix Factorization with Sparseness Constraints.
- Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
sparsity (float) – the target sparseness for params, with 0 < sparsity < 1
dim (int, optional) – dimension over which to compute the sparseness for each parameter. Default: 1
max_iter (int, optional) – maximal number of function evaluations per optimization step. Default: 10
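As with the other trainer, params accepts parameter-group dicts, so different sparseness targets can be assigned per parameter. A sketch under the same assumptions as above (per-group overrides as in torch.optim; W and H are hypothetical attribute names):

from torchnmf.trainer import SparsityProj

trainer = SparsityProj([
    {'params': [model.W], 'sparsity': 0.5},   # looser sparseness target for W
    {'params': [model.H], 'sparsity': 0.8},   # stronger sparseness target for H
], sparsity=0.5, dim=1, max_iter=10)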