pyrfm.random_feature.CountSketch¶
class pyrfm.random_feature.CountSketch(n_components=100, degree=2, random_state=None, dense_output=False)[source]¶
Bases: sklearn.base.BaseEstimator, sklearn.base.TransformerMixin
Approximates the feature map of the linear kernel by Count Sketch, a.k.a. feature hashing.
- Parameters
n_components (int (default=100)) – Number of Monte Carlo samples per original feature. Equals the dimensionality of the computed (mapped) feature space.
random_state (int, RandomState instance or None, optional (default=None)) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
dense_output (bool (default=False)) – If dense_output = False and X is a sparse matrix, the output random feature matrix will be a sparse matrix.
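The dense_output behaviour can be sketched as follows. This construction is an assumption for illustration, not pyrfm's actual internals: because the hashing projection has a single signed nonzero per input feature, a sparse input can stay sparse after projection.

```python
import numpy as np
from scipy.sparse import csc_matrix, random as sparse_random

# Build a hashing projection with one signed nonzero per input feature
# (illustrative construction; pyrfm's internals may differ).
rng = np.random.RandomState(0)
n_features, n_components = 50, 10
hash_indices = rng.randint(n_components, size=n_features)
hash_signs = rng.randint(2, size=n_features) * 2.0 - 1.0
W = csc_matrix((hash_signs, (np.arange(n_features), hash_indices)),
               shape=(n_features, n_components))

X = sparse_random(5, n_features, density=0.1, random_state=rng, format="csr")
Z = X @ W            # sparse @ sparse stays sparse (dense_output=False)
Z_dense = Z.toarray()  # what a dense_output=True analogue would return
```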
hash_indices_¶ Hash table that represents the embedded indices.
- Type
np.array, shape (n_features,)
hash_signs_¶ Sign array.
- Type
np.array, shape (n_features,)
random_weights_¶ Projection matrix created from hash_indices_ and hash_signs_.
- Type
np.array, shape (n_features, n_components)
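One plausible way the projection matrix can be assembled from hash_indices_ and hash_signs_ (an illustration written from the attribute descriptions above, not pyrfm's exact code): input feature j contributes hash_signs_[j] to output dimension hash_indices_[j], giving exactly one signed entry per row.

```python
import numpy as np
from scipy.sparse import csc_matrix

# Hypothetical construction of random_weights_ from the two hash arrays.
n_features, n_components = 6, 4
rng = np.random.RandomState(0)
hash_indices = rng.randint(n_components, size=n_features)
hash_signs = rng.randint(2, size=n_features) * 2.0 - 1.0

random_weights = csc_matrix(
    (hash_signs, (np.arange(n_features), hash_indices)),
    shape=(n_features, n_components),
)
# One signed nonzero per row, so X @ random_weights computes the sketch.
row_nnz = random_weights.getnnz(axis=1)
```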
References
[1] Finding Frequent Items in Data Streams. Moses Charikar, Kevin Chen, and Martin Farach-Colton. In ICALP 2002. (https://www.cs.rutgers.edu/~farach/pubs/FrequentStream.pdf)
[2] Fast and scalable polynomial kernels via explicit feature maps. Ninh Pham and Rasmus Pagh. In KDD 2013. (http://chbrown.github.io/kdd-2013-usb/kdd/p239.pdf)
[3] Feature Hashing for Large Scale Multitask Learning. Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. In ICML 2009. (http://alex.smola.org/papers/2009/Weinbergeretal09.pdf)
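The guarantee behind the approximation can be demonstrated with a minimal dense Count Sketch written from the description above (pyrfm's implementation may differ in details): sketched inner products are unbiased estimates of the exact linear kernel.

```python
import numpy as np

# Minimal dense Count Sketch: feature j is routed, with a random sign,
# to output dimension hash_indices[j].
def count_sketch(X, hash_indices, hash_signs, n_components):
    Z = np.zeros((X.shape[0], n_components))
    for j in range(X.shape[1]):
        Z[:, hash_indices[j]] += hash_signs[j] * X[:, j]
    return Z

rng = np.random.RandomState(42)
n_features, n_components = 500, 200
hash_indices = rng.randint(n_components, size=n_features)
hash_signs = rng.randint(2, size=n_features) * 2 - 1

X = rng.randn(2, n_features)
Z = count_sketch(X, hash_indices, hash_signs, n_components)

exact = X[0] @ X[1]    # exact linear kernel value
approx = Z[0] @ Z[1]   # unbiased estimate of the same value
```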
fit(X, y=None)[source]¶ Generate hash functions according to n_features.
- Parameters
X ({array-like, sparse matrix}, shape (n_samples, n_features)) – Training data, where n_samples is the number of samples and n_features is the number of features.
- Returns
self – Returns the transformer.
- Return type
object
fit_transform(X, y=None, **fit_params)¶ Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
- Parameters
X (numpy array of shape [n_samples, n_features]) – Training set.
y (numpy array of shape [n_samples]) – Target values.
- Returns
X_new – Transformed array.
- Return type
numpy array of shape [n_samples, n_features_new]
get_params(deep=True)¶ Get parameters for this estimator.
- Parameters
deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
mapping of string to any
set_params(**params)¶ Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Returns
- Return type
self
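The `<component>__<parameter>` syntax can be illustrated with standard scikit-learn estimators standing in for CountSketch:

```python
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# A nested object: each step's parameters are reachable by name.
pipe = Pipeline([("scaler", StandardScaler()), ("clf", SGDClassifier())])

# <component>__<parameter> updates parameters inside the named step.
pipe.set_params(scaler__with_mean=False, clf__alpha=0.001)
```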
transform(X)[source]¶ Apply the approximate feature map to X.
- Parameters
X ({array-like, sparse matrix}, shape (n_samples, n_features)) – New data, where n_samples is the number of samples and n_features is the number of features.
- Returns
X_new – Transformed feature matrix.
- Return type
array-like, shape (n_samples, n_components)
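Because the hash functions are fixed at fit time, the same map applies to any new X, so inner products between sketched training and test points approximate the exact linear kernel between the originals. A dense re-implementation for illustration only, not pyrfm's code:

```python
import numpy as np

rng = np.random.RandomState(1)
n_features, n_components = 400, 150
idx = rng.randint(n_components, size=n_features)  # fixed at fit time
sgn = rng.randint(2, size=n_features) * 2 - 1     # fixed at fit time

def transform(X):
    # Reuse the fitted hash functions for every input matrix.
    Z = np.zeros((X.shape[0], n_components))
    for j in range(n_features):
        Z[:, idx[j]] += sgn[j] * X[:, j]
    return Z

X_train = rng.randn(3, n_features)
X_test = rng.randn(2, n_features)
# Approximates the exact kernel matrix X_train @ X_test.T
K_approx = transform(X_train) @ transform(X_test).T
```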