pyrfm.random_feature.LearningKernelwithRandomFeature

class pyrfm.random_feature.LearningKernelwithRandomFeature(transformer=None, divergence='chi2', rho=1.0, alpha=None, max_iter=100, tol=1e-08, warm_start=False, max_iter_admm=10000, mu=10, tau_incr=2, tau_decr=2, eps_abs=0.0001, eps_rel=0.0001, verbose=True)[source]

Bases: sklearn.base.BaseEstimator, sklearn.base.TransformerMixin
Learns importance weights of random features by maximizing the kernel alignment with a divergence constraint.
- Parameters
transformer (sklearn transformer object (default=None)) – A random feature map object. If None, RBFSampler is used.
divergence (str (default='chi2')) – Which f-divergence to use (see the sketch after this parameter list).
'chi2': (p/q)^2 - 1
'kl': (p/q) log(p/q)
'tv': |p/q - 1| / 2
'squared': 0.5*(p - q)^2
'reverse_kl': -log(p/q)
max_iter (int (default=100)) – Maximum number of iterations.
rho (double (default=1.)) – An upper bound on the divergence. If rho=0, the importance weights will be 1/sqrt(n_components).
alpha (double (default=None)) – A strength hyperparameter for the divergence. If not None, optimize the regularized objective (+ alpha * divergence). If None, optimize the constrained objective (divergence < rho).
tol (double (default=1e-8)) – Tolerance of stopping criterion.
scaling (bool (default=True)) – Whether to scale the kernel alignment term in the objective or not. If True, the kernel alignment term is scaled to [0, 1] by dividing it by the maximum kernel alignment value.
warm_start (bool (default=False)) – Whether to activate warm start or not.
max_iter_admm (int (default=10000)) – Maximum number of iterations in the ADMM optimization. This is used if divergence='tv' or 'reverse_kl'.
mu (double (default=10)) – A parameter for the ADMM optimization. A larger mu updates the penalty parameter more frequently. This is used if divergence='tv' or 'reverse_kl'.
tau_incr (double (default=2)) – A parameter for the ADMM optimization. The penalty parameter is updated by multiplying it by tau_incr. This is used if divergence='tv' or 'reverse_kl'.
tau_decr (double (default=2)) – A parameter for the ADMM optimization. The penalty parameter is updated by dividing it by tau_decr. This is used if divergence='tv' or 'reverse_kl'.
eps_abs (double (default=1e-4)) – A parameter for the ADMM optimization, used in the stopping criterion. This is used if divergence='tv' or 'reverse_kl'.
eps_rel (double (default=1e-4)) – A parameter for the ADMM optimization, used in the stopping criterion. This is used if divergence='tv' or 'reverse_kl'.
verbose (bool (default=True)) – Verbose mode or not.
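The f-divergence options above can be written directly in code. A minimal illustrative sketch (not pyrfm's internal Divergence classes, which may differ in parameterization and normalization), with p an importance weight and q the corresponding base weight:

    import numpy as np

    # Direct transcription of the formulas listed for the divergence parameter.
    f_div = {
        'chi2':       lambda p, q: (p / q) ** 2 - 1,
        'kl':         lambda p, q: (p / q) * np.log(p / q),
        'tv':         lambda p, q: np.abs(p / q - 1) / 2,
        'squared':    lambda p, q: 0.5 * (p - q) ** 2,
        'reverse_kl': lambda p, q: -np.log(p / q),
    }

    p, q = 0.02, 0.01
    print({name: float(f(p, q)) for name, f in f_div.items()})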
importance_weights_
The learned importance weights.
- Type
array, shape (n_components,)
divergence_
The divergence instance for optimization.
- Type
Divergence instance
References
[1] Learning Kernels with Random Features. Aman Sinha and John Duchi. In NIPS 2016. (https://papers.nips.cc/paper/6180-learning-kernels-with-random-features.pdf)
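The following is a minimal usage sketch, not taken from the pyrfm documentation: the import path follows the module name documented above, and the toy data, RBFSampler settings, and printed shapes are illustrative assumptions.

    import numpy as np
    from sklearn.kernel_approximation import RBFSampler
    from pyrfm.random_feature import LearningKernelwithRandomFeature

    rng = np.random.RandomState(0)
    X = rng.randn(100, 5)
    y = np.where(X[:, 0] > 0, 1, -1)   # labels must be in {-1, +1}

    # Any sklearn-style random feature map can be plugged in;
    # RBFSampler is used by default when transformer=None.
    transformer = RBFSampler(n_components=128, random_state=0)
    lkrf = LearningKernelwithRandomFeature(transformer=transformer,
                                           divergence='chi2', rho=1.0,
                                           verbose=False)
    X_new = lkrf.fit_transform(X, y)       # learns importance_weights_, then maps X
    print(X_new.shape)                     # (100, 128)
    print(lkrf.importance_weights_.shape)  # (128,)

The weighted features X_new can then be fed to any linear estimator in place of an exact kernel method.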
fit(X, y)[source]
Fit the importance weights of each random basis.
- Parameters
X ({array-like, sparse matrix}, shape (n_samples, n_features)) – Training data, where n_samples is the number of samples and n_features is the number of features.
y (array, shape (n_samples, )) – Labels, where n_samples is the number of samples. y must be in {-1, 1}^n_samples.
- Returns
self – Returns the transformer.
- Return type
object
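Since fit expects labels in {-1, +1}, targets encoded as {0, 1} need to be remapped first. A small sketch (y01, lkrf, and X are illustrative names, not part of the API):

    import numpy as np

    y01 = np.array([0, 1, 1, 0, 1])
    y = 2 * y01 - 1    # map {0, 1} -> {-1, +1}
    # lkrf.fit(X, y)   # lkrf: a LearningKernelwithRandomFeature instance, X: matching data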
fit_transform(X, y=None, **fit_params)
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
- Parameters
X (numpy array of shape [n_samples, n_features]) – Training set.
y (numpy array of shape [n_samples]) – Target values.
- Returns
X_new – Transformed array.
- Return type
numpy array of shape [n_samples, n_features_new]
get_params(deep=True)
Get parameters for this estimator.
- Parameters
deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
mapping of string to any
remove_bases()[source]
Remove the useless random bases according to importance_weights_.
- Returns
Whether bases were removed or not.
- Return type
bool
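A sketch of pruning after fitting, continuing from the lkrf instance in the usage sketch above; which bases count as useless is decided internally from importance_weights_:

    # `lkrf` has already been fitted on (X, y).
    removed = lkrf.remove_bases()   # True if any useless bases were dropped
    if removed:
        # transform now yields one column per remaining basis
        print(lkrf.transform(X).shape)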
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Returns
- Return type
self
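For example, nested parameters of the random feature map can be updated through the transformer__ prefix. A sketch assuming the nested transformer is an RBFSampler (which exposes a gamma parameter):

    from sklearn.kernel_approximation import RBFSampler
    from pyrfm.random_feature import LearningKernelwithRandomFeature

    lkrf = LearningKernelwithRandomFeature(transformer=RBFSampler())
    # Update a top-level parameter and a parameter of the nested transformer together.
    lkrf.set_params(rho=0.5, transformer__gamma=0.2)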
transform(X)[source]
Apply the approximate feature map to X.
- Parameters
X ({array-like, sparse matrix}, shape (n_samples, n_features)) – New data, where n_samples is the number of samples and n_features is the number of features.
- Returns
X_new – Transformed data.
- Return type
array-like, shape (n_samples, n_components)
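The transformed features are typically passed to a linear model, which then behaves like a kernel machine under the learned kernel. A sketch continuing the usage example above (lkrf, X, y as defined there):

    from sklearn.svm import LinearSVC

    X_new = lkrf.transform(X)
    clf = LinearSVC().fit(X_new, y)
    print(clf.score(X_new, y))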