Creates an M1NNSearchlight to assess cross-validated classification performance of M1NN on all possible spheres of a given radius within a dataset.
The idea of taking advantage of the naivety of M1NN for the sake of fast searchlighting stems from Francisco Pereira (paper under review).
Parameters
radius : float
center_ids : list of int
space : str
knn : kNN
generator : Generator
qe : QueryEngine
errorfx : func, optional
indexsum : ('sparse', 'fancy'), optional
reuse_neighbors : bool, optional
enable_ca : None or list of str
disable_ca : None or list of str
queryengine : QueryEngine
null_dist : instance of distribution estimator
auto_train : bool
force_train : bool
postproc : Node instance, optional
descr : str
Notes
If any BaseSearchlight is used as a SensitivityAnalyzer, one has to make sure that the specified scalar Measure returns large (absolute) values for high sensitivities and small (absolute) values for low sensitivities. This is especially relevant when using error functions, where low values usually imply high performance and therefore high sensitivity. The resulting sensitivity maps would then have low (absolute) values indicating high sensitivities, which conflicts with the intended behavior of a SensitivityAnalyzer.
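To illustrate the measure computed per sphere, here is a minimal NumPy sketch of nearest-class-mean ("M1NN"-style) classification restricted to one sphere's features. This is not PyMVPA's implementation, and the helper name `m1nn_sphere_accuracy` is hypothetical; it only shows the kind of per-sphere accuracy a searchlight would report, and why a single mean exemplar per class keeps the computation cheap enough to repeat over every sphere.

```python
import numpy as np

def m1nn_sphere_accuracy(samples, targets, sphere_ids, train_mask):
    # Hypothetical helper (not the PyMVPA API): nearest-class-mean
    # classification using only the features of one searchlight sphere.
    X = samples[:, sphere_ids]
    classes = np.unique(targets)
    train, test = X[train_mask], X[~train_mask]
    # One mean exemplar per class -- the "naive" part of M1NN that makes
    # sphere-wise evaluation fast (only per-feature sums are needed).
    means = np.vstack([train[targets[train_mask] == c].mean(axis=0)
                       for c in classes])
    # Assign each test sample to the class with the nearest mean.
    dists = ((test[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    pred = classes[np.argmin(dists, axis=1)]
    return float(np.mean(pred == targets[~train_mask]))

# Synthetic example: class 1 carries signal in features 0-2 only.
rng = np.random.RandomState(0)
targets = np.repeat([0, 1], 10)
samples = rng.randn(20, 6)
samples[targets == 1, :3] += 3.0
train_mask = np.tile([True, False], 10)      # alternate train/test split

acc_signal = m1nn_sphere_accuracy(samples, targets, [0, 1, 2], train_mask)
acc_noise = m1nn_sphere_accuracy(samples, targets, [3, 4, 5], train_mask)
```

A searchlight would compute such an accuracy (or, via `errorfx`, an error) for the sphere around every center, yielding a map with one value per feature.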