Weka

weka-dev

nz.ac.waikato.cms.weka : weka-dev

The Waikato Environment for Knowledge Analysis (WEKA), a machine learning workbench. This is the developer version, the "bleeding edge" of development: new functionality is added to this version first.

Last Version: 3.9.6

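For illustration, a minimal sketch of driving the workbench from Java: load a dataset, then cross-validate one of the bundled learners. The dataset path is a placeholder, and J48 stands in for any classifier.

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class WekaQuickStart {
        public static void main(String[] args) throws Exception {
            // Load an ARFF file (placeholder path) and mark the last attribute as the class
            Instances data = DataSource.read("/path/to/dataset.arff");
            data.setClassIndex(data.numAttributes() - 1);
            // 10-fold cross-validation of a J48 decision tree
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(new J48(), data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
        }
    }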

weka-stable

nz.ac.waikato.cms.weka : weka-stable

The Waikato Environment for Knowledge Analysis (WEKA), a machine learning workbench. This is the stable version: apart from bug fixes, it does not receive any breaking updates.

Last Version: 3.8.6

LibSVM

nz.ac.waikato.cms.weka : LibSVM

A wrapper class for the libsvm tools (the libsvm classes, typically the jar file, need to be on the classpath to use this classifier). LibSVM runs faster than SMO because it delegates to the libsvm library to build the SVM classifier. LibSVM allows users to experiment with one-class SVM, regression SVM, and nu-SVM, as supported by the libsvm tool, and reports many useful statistics about the classifier (e.g., confusion matrix, precision, recall, ROC score, etc.).

Last Version: 1.0.10

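A minimal usage sketch, assuming the wrapper's usual location in weka.classifiers.functions; the option string follows libsvm's conventions (-S 0 for C-SVC, -K 2 for an RBF kernel), and the dataset path is a placeholder.

    import weka.classifiers.functions.LibSVM;
    import weka.core.Instances;
    import weka.core.Utils;
    import weka.core.converters.ConverterUtils.DataSource;

    public class LibSvmDemo {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("/path/to/train.arff"); // placeholder
            data.setClassIndex(data.numAttributes() - 1);
            LibSVM svm = new LibSVM();
            // C-SVC with an RBF kernel and cost 1.0 (libsvm-style flags)
            svm.setOptions(Utils.splitOptions("-S 0 -K 2 -C 1.0"));
            svm.buildClassifier(data); // requires the libsvm jar on the classpath
        }
    }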

distributedWekaBase

nz.ac.waikato.cms.weka : distributedWekaBase

This package provides a generic configuration class and distributed map/reduce-style tasks for Weka.

Last Version: 1.0.17

classifierBasedAttributeSelection

nz.ac.waikato.cms.weka : classifierBasedAttributeSelection

This package provides two classes: one for evaluating the merit of individual attributes using a classifier (ClassifierAttributeEval), and a second for evaluating the merit of subsets of attributes using a classifier (ClassifierSubsetEval). Both invoke a user-specified classifier to perform the evaluation, either under cross-validation or on the training data.

Last Version: 1.0.5

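A sketch of how these evaluators plug into Weka's standard attribute selection API; ClassifierAttributeEval is paired here with the Ranker search, which ranks individual attributes, and the dataset path is a placeholder.

    import weka.attributeSelection.AttributeSelection;
    import weka.attributeSelection.ClassifierAttributeEval;
    import weka.attributeSelection.Ranker;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class ClassifierAttEvalDemo {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("/path/to/dataset.arff"); // placeholder
            data.setClassIndex(data.numAttributes() - 1);
            AttributeSelection sel = new AttributeSelection();
            sel.setEvaluator(new ClassifierAttributeEval()); // scores each attribute with a classifier
            sel.setSearch(new Ranker());                     // ranks attributes by that score
            sel.SelectAttributes(data);
            System.out.println(sel.toResultsString());
        }
    }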

multiInstanceFilters

nz.ac.waikato.cms.weka : multiInstanceFilters

A collection of filters for manipulating multi-instance data. Includes PropositionalToMultiInstance, MultiInstanceToPropositional, MILESFilter and RELAGGS. For more information see: M.-A. Krogel, S. Wrobel: Facets of Aggregation Approaches to Propositionalization. In: Work-in-Progress Track at the Thirteenth International Conference on Inductive Logic Programming (ILP), 2003. Y. Chen, J. Bi, J.Z. Wang (2006). MILES: Multiple-instance learning via embedded instance selection. IEEE PAMI. 28(12):1931-1947. James Foulds, Eibe Frank: Revisiting multiple-instance learning via embedded instance selection. In: 21st Australasian Joint Conference on Artificial Intelligence, 300-310, 2008.

Last Version: 1.0.10

multiInstanceLearning

nz.ac.waikato.cms.weka : multiInstanceLearning

A collection of multi-instance learning classifiers. Includes the Citation KNN method, several variants of the diverse density method, support vector machines for multi-instance learning, simple wrappers for applying standard propositional learners to multi-instance data, decision tree and rule learners, and some other methods.

Last Version: 1.0.9

rotationForest

nz.ac.waikato.cms.weka : rotationForest

An ensemble learning method inspired by bagging and random sub-spaces. Trains an ensemble of decision trees on random subspaces of the data, where each subspace has been transformed using principal components analysis.

Last Version: 1.0.3

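A minimal sketch, assuming the classifier is exposed as weka.classifiers.meta.RotationForest as in other Weka ensemble packages; the dataset path and ensemble size are illustrative.

    import weka.classifiers.meta.RotationForest;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class RotationForestDemo {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("/path/to/dataset.arff"); // placeholder
            data.setClassIndex(data.numAttributes() - 1);
            RotationForest forest = new RotationForest();
            forest.setNumIterations(10); // number of trees in the ensemble
            forest.buildClassifier(data);
        }
    }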

SMOTE

nz.ac.waikato.cms.weka : SMOTE

Resamples a dataset by applying the Synthetic Minority Oversampling TEchnique (SMOTE). The original dataset must fit entirely in memory. The amount of SMOTE and number of nearest neighbors may be specified. For more information, see Nitesh V. Chawla et al. (2002). Synthetic Minority Over-sampling Technique. Journal of Artificial Intelligence Research. 16:321-357.

Last Version: 1.0.3

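A sketch of applying the filter; the two setters correspond to the "amount of SMOTE" and "number of nearest neighbors" parameters mentioned above, and the values and dataset path are illustrative.

    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.supervised.instance.SMOTE;

    public class SmoteDemo {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("/path/to/imbalanced.arff"); // placeholder
            data.setClassIndex(data.numAttributes() - 1);
            SMOTE smote = new SMOTE();
            smote.setPercentage(200.0);   // create 200% additional minority-class instances
            smote.setNearestNeighbors(5); // k used when synthesizing new examples
            smote.setInputFormat(data);
            Instances balanced = Filter.useFilter(data, smote);
            System.out.println(balanced.numInstances());
        }
    }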

XMeans

nz.ac.waikato.cms.weka : XMeans

Cluster data using the X-means algorithm. X-Means is k-means extended by an Improve-Structure part: in this part of the algorithm, an attempt is made to split each center within its region. The decision between the children of each center and the center itself is made by comparing the BIC values of the two structures. For more information see: Dan Pelleg, Andrew W. Moore: X-means: Extending K-means with Efficient Estimation of the Number of Clusters. In: Seventeenth International Conference on Machine Learning, 727-734, 2000.

Last Version: 1.0.6

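A usage sketch; the min/max cluster settings bound X-means' search range for k, and the values and dataset path are illustrative.

    import weka.clusterers.XMeans;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class XMeansDemo {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("/path/to/dataset.arff"); // class-free data, placeholder path
            XMeans xmeans = new XMeans();
            xmeans.setMinNumClusters(2);  // lower bound of the search for k
            xmeans.setMaxNumClusters(10); // upper bound of the search for k
            xmeans.buildClusterer(data);
            System.out.println("Chosen k: " + xmeans.numberOfClusters());
        }
    }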

decorate

nz.ac.waikato.cms.weka : decorate

DECORATE is a meta-learner for building diverse ensembles of classifiers by using specially constructed artificial training examples. Comprehensive experiments have demonstrated that this technique is consistently more accurate than the base classifier, Bagging and Random Forests. Decorate also obtains higher accuracy than Boosting on small training sets, and achieves comparable performance on larger training sets. For more details see: P. Melville, R. J. Mooney: Constructing Diverse Classifier Ensembles Using Artificial Training Examples. In: Eighteenth International Joint Conference on Artificial Intelligence, 505-510, 2003; P. Melville, R. J. Mooney (2004). Creating Diversity in Ensembles Using Artificial Data. Information Fusion: Special Issue on Diversity in Multiclassifier Systems.

Last Version: 1.0.3

discriminantAnalysis

nz.ac.waikato.cms.weka : discriminantAnalysis

Currently only contains Fisher's Linear Discriminant Analysis.

Last Version: 1.0.3

fastCorrBasedFS

nz.ac.waikato.cms.weka : fastCorrBasedFS

A feature selection method based on a correlation measure and relevance and redundancy analysis. Use in conjunction with an attribute set evaluator (SymmetricalUncertAttributeEval). For more information see: Lei Yu, Huan Liu: Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter Solution. In: Proceedings of the Twentieth International Conference on Machine Learning, 856-863, 2003.

Last Version: 1.0.2

normalize

nz.ac.waikato.cms.weka : normalize

An instance filter that normalizes instances, considering only numeric attributes and ignoring the class index.

Last Version: 1.0.2

optics_dbScan

nz.ac.waikato.cms.weka : optics_dbScan

The OPTICS and DBScan clustering algorithms. Martin Ester, Hans-Peter Kriegel, Joerg Sander, Xiaowei Xu: A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In: Second International Conference on Knowledge Discovery and Data Mining, 226-231, 1996; Mihael Ankerst, Markus M. Breunig, Hans-Peter Kriegel, Joerg Sander: OPTICS: Ordering Points To Identify the Clustering Structure. In: ACM SIGMOD International Conference on Management of Data, 49-60, 1999.

Last Version: 1.0.6

partialLeastSquares

nz.ac.waikato.cms.weka : partialLeastSquares

This package contains a filter for computing partial least squares and transforming the input data into the PLS space. It also contains a classifier for performing PLS regression.

Last Version: 1.0.5

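A sketch of the PLS regression side, assuming the classifier is exposed as weka.classifiers.functions.PLSClassifier; the class attribute must be numeric, and the dataset path is a placeholder.

    import weka.classifiers.functions.PLSClassifier;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class PlsDemo {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("/path/to/regression.arff"); // placeholder, numeric class
            data.setClassIndex(data.numAttributes() - 1);
            PLSClassifier pls = new PLSClassifier(); // wraps the PLS filter internally
            pls.buildClassifier(data);
            System.out.println(pls.classifyInstance(data.instance(0)));
        }
    }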

predictiveApriori

nz.ac.waikato.cms.weka : predictiveApriori

Class implementing the predictive apriori algorithm for mining association rules. It searches with an increasing support threshold for the best 'n' rules with respect to a support-based corrected confidence value. For more information see: Tobias Scheffer: Finding Association Rules That Trade Support Optimally against Confidence. In: 5th European Conference on Principles of Data Mining and Knowledge Discovery, 424-435, 2001.

Last Version: 1.0.4

prefuseGraph

nz.ac.waikato.cms.weka : prefuseGraph

A visualization component for displaying graph structures from those schemes that can output graphs (e.g. Bayes nets). This component is available from the popup menu in the Explorer's classify panel. The component uses the prefuse visualization library.

Last Version: 1.0.4

prefuseTree

nz.ac.waikato.cms.weka : prefuseTree

A visualization component for displaying tree structures from those schemes that can output trees (e.g. decision tree learners, Cobweb clusterer etc.). This component is available from the popup menu in the Explorer's classify and cluster panels. The component uses the prefuse visualization library.

Last Version: 1.0.3

CLOPE

nz.ac.waikato.cms.weka : CLOPE

A fast and effective clustering algorithm for transactional data. For more information see: Yiling Yang, Xudong Guan, Jinyuan You: CLOPE: a fast and effective clustering algorithm for transactional data. In: Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, 682-687, 2002.

Last Version: 1.0.2

DMNBtext

nz.ac.waikato.cms.weka : DMNBtext

Class for building and using a Discriminative Multinomial Naive Bayes classifier. For more information see: Jiang Su, Harry Zhang, Charles X. Ling, Stan Matwin: Discriminative Parameter Learning for Bayesian Networks. In: ICML 2008, 2008.

Last Version: 1.0.2

DTNB

nz.ac.waikato.cms.weka : DTNB

Class for building and using a decision table/naive Bayes hybrid classifier. At each point in the search, the algorithm evaluates the merit of dividing the attributes into two disjoint subsets: one for the decision table, the other for naive Bayes. A forward selection search is used: initially all attributes are modelled by the decision table, and at each step selected attributes are modelled by naive Bayes and the remainder by the decision table. At each step, the algorithm also considers dropping an attribute entirely from the model. For more information, see: Mark Hall, Eibe Frank: Combining Naive Bayes and Decision Tables. In: Proceedings of the 21st Florida Artificial Intelligence Society Conference (FLAIRS), 318-319, 2008.

Last Version: 1.0.3

DilcaDistance

nz.ac.waikato.cms.weka : DilcaDistance

This package implements the parameter-free version of the DILCA distance. This approach learns value-to-value distances between each pair of values for each attribute of the dataset. The distance between two values is computed indirectly, based on their distribution with respect to a carefully chosen set of related attributes (the context).

Last Version: 1.0.1

EMImputation

nz.ac.waikato.cms.weka : EMImputation

Replaces missing numeric values using Expectation Maximization with a multivariate normal model. Described in: Schafer, J.L., Analysis of Incomplete Multivariate Data, New York: Chapman and Hall, 1997.

Last Version: 1.0.2

J48graft

nz.ac.waikato.cms.weka : J48graft

Class for generating a grafted (pruned or unpruned) C4.5 decision tree. For more information, see Geoff Webb: Decision Tree Grafting From the All-Tests-But-One Partition.

Last Version: 1.0.3

LibLINEAR

nz.ac.waikato.cms.weka : LibLINEAR

A wrapper class for the liblinear tools (the liblinear classes, typically the jar file, need to be in the classpath to use this classifier). Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, Chih-Jen Lin (2008). LIBLINEAR - A Library for Large Linear Classification.

Last Version: 1.9.7

NNge

nz.ac.waikato.cms.weka : NNge

Nearest-neighbor-like algorithm using non-nested generalized exemplars (which are hyperrectangles that can be viewed as if-then rules). For more information, see Brent Martin (1995). Instance-Based learning: Nearest Neighbor With Generalization. Hamilton, New Zealand. Sylvain Roy (2002). Nearest Neighbor With Generalization. Christchurch, New Zealand.

Last Version: 1.0.2

RBFNetwork

nz.ac.waikato.cms.weka : RBFNetwork

RBFNetwork implements a normalized Gaussian radial basis function network. It uses the k-means clustering algorithm to provide the basis functions and learns either a logistic regression (discrete class problems) or linear regression (numeric class problems) on top of that. Symmetric multivariate Gaussians are fit to the data from each cluster. If the class is nominal it uses the given number of clusters per class. RBFRegressor implements radial basis function networks for regression, trained in a fully supervised manner using WEKA's Optimization class by minimizing squared error with the BFGS method. It is possible to use conjugate gradient descent rather than BFGS updates, which is faster for cases with many parameters, and to use normalized basis functions instead of unnormalized ones.

Last Version: 1.0.8

SPegasos

nz.ac.waikato.cms.weka : SPegasos

Implements the stochastic variant of the Pegasos (Primal Estimated sub-GrAdient SOlver for SVM) method of Shalev-Shwartz et al. (2007). This implementation globally replaces all missing values and transforms nominal attributes into binary ones. It also normalizes all attributes, so the coefficients in the output are based on the normalized data. Can either minimize the hinge loss (SVM) or log loss (logistic regression). For more information, see S. Shalev-Shwartz, Y. Singer, N. Srebro: Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In: 24th International Conference on Machine Learning, 807-814, 2007.

Last Version: 1.0.2

SVMAttributeEval

nz.ac.waikato.cms.weka : SVMAttributeEval

Evaluates the worth of an attribute by using an SVM classifier. Attributes are ranked by the square of the weight assigned by the SVM. Attribute selection for multiclass problems is handled by ranking attributes for each class separately using a one-vs-all method and then "dealing" from the top of each pile to give a final ranking. For more information see: I. Guyon, J. Weston, S. Barnhill, V. Vapnik (2002). Gene selection for cancer classification using support vector machines. Machine Learning. 46:389-422.

Last Version: 1.0.2

WekaExcel

nz.ac.waikato.cms.weka : WekaExcel

WekaExcel adds support for directly reading from and writing to spreadsheets in Microsoft Excel 97-2007 format. It uses Apache POI (http://poi.apache.org/), specifically POI-HSSF and POI-XSSF (http://poi.apache.org/spreadsheet/), to read and write Excel spreadsheets.

Last Version: 1.0.8

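A sketch, assuming the package registers a loader named weka.core.converters.ExcelLoader (the class name is an assumption from the package's purpose, and the file name is a placeholder).

    import java.io.File;
    import weka.core.Instances;
    import weka.core.converters.ExcelLoader;

    public class ExcelLoadDemo {
        public static void main(String[] args) throws Exception {
            ExcelLoader loader = new ExcelLoader();           // assumed class name from this package
            loader.setSource(new File("/path/to/data.xlsx")); // placeholder spreadsheet
            Instances data = loader.getDataSet();
            System.out.println(data.numInstances());
        }
    }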

WekaODF

nz.ac.waikato.cms.weka : WekaODF

WekaODF adds support for directly reading from and writing to spreadsheets in ODF (Open Document Format for Office Applications, ISO/IEC 26300:2006) format. ODF is used by the OpenOffice.org suite, for instance. WekaODF uses jOpenDocument (http://www.jOpenDocument.org, GPL) to read and write ODF spreadsheets.

Last Version: 1.0.4

alternatingDecisionTrees

nz.ac.waikato.cms.weka : alternatingDecisionTrees

Binary-class and multi-class alternating decision trees. For more information see: Freund, Y., Mason, L.: The alternating decision tree learning algorithm. In: Proceedings of the Sixteenth International Conference on Machine Learning, Bled, Slovenia, 124-133, 1999. Geoffrey Holmes, Bernhard Pfahringer, Richard Kirkby, Eibe Frank, Mark Hall: Multiclass alternating decision trees. In: ECML, 161-172, 2001.

Last Version: 1.0.5

alternatingModelTrees

nz.ac.waikato.cms.weka : alternatingModelTrees

Grows an alternating model tree by minimising squared error. For more information see "Eibe Frank, Michael Mayo, Stefan Kramer: Alternating Model Trees. In: Proceedings of the ACM Symposium on Applied Computing, Data Mining Track, 2015".

Last Version: 1.0.0

associationRulesVisualizer

nz.ac.waikato.cms.weka : associationRulesVisualizer

A visualization component for displaying association rules that uses a modified version of the Association Rules Viewer from DESS IAGL of Lille. Requires Java 3D to be installed.

Last Version: 1.0.2

attributeSelectionSearchMethods

nz.ac.waikato.cms.weka : attributeSelectionSearchMethods

This package provides four search methods for attribute selection: ExhaustiveSearch, GeneticSearch, RandomSearch and RankSearch. See: David E. Goldberg (1989). Genetic algorithms in search, optimization and machine learning. Addison-Wesley. Mark Hall, Geoffrey Holmes (2003). Benchmarking attribute selection techniques for discrete class data mining. IEEE Transactions on Knowledge and Data Engineering. 15(6):1437-1447.

Last Version: 1.0.7

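A sketch combining one of the provided search methods with a standard subset evaluator from core Weka (CfsSubsetEval); the dataset path is a placeholder.

    import weka.attributeSelection.AttributeSelection;
    import weka.attributeSelection.CfsSubsetEval;
    import weka.attributeSelection.GeneticSearch;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class GeneticSearchDemo {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("/path/to/dataset.arff"); // placeholder
            data.setClassIndex(data.numAttributes() - 1);
            AttributeSelection sel = new AttributeSelection();
            sel.setEvaluator(new CfsSubsetEval()); // scores attribute subsets
            sel.setSearch(new GeneticSearch());    // searches the subset space with a genetic algorithm
            sel.SelectAttributes(data);
            System.out.println(sel.toResultsString());
        }
    }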

averagedOneDependenceEstimators

nz.ac.waikato.cms.weka : averagedOneDependenceEstimators

AODE achieves highly accurate classification by averaging over all of a small space of alternative naive-Bayes-like models that have weaker (and hence less detrimental) independence assumptions than naive Bayes. The resulting algorithm is computationally efficient while delivering highly accurate classification on many learning tasks. For more information, see G. Webb, J. Boughton, Z. Wang (2005). Not So Naive Bayes: Aggregating One-Dependence Estimators. Machine Learning. 58(1):5-24.

Last Version: 1.2.1

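A usage sketch, assuming the classifier is exposed as weka.classifiers.bayes.AODE; AODE expects nominal attributes, so numeric data should be discretized first, and the dataset path is a placeholder.

    import weka.classifiers.bayes.AODE;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class AodeDemo {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("/path/to/nominal.arff"); // placeholder; nominal attributes only
            data.setClassIndex(data.numAttributes() - 1);
            AODE aode = new AODE();
            aode.buildClassifier(data);
        }
    }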

bayesianLogisticRegression

nz.ac.waikato.cms.weka : bayesianLogisticRegression

Implements Bayesian Logistic Regression for both Gaussian and Laplace Priors. For more information, see Alexander Genkin, David D. Lewis, David Madigan (2004). Large-scale bayesian logistic regression for text categorization.

Last Version: 1.0.5

bestFirstTree

nz.ac.waikato.cms.weka : bestFirstTree

Class for building a best-first decision tree classifier. This class uses binary splits for both nominal and numeric attributes. For missing values, the method of 'fractional' instances is used. For more information, see: Haijian Shi (2007). Best-first decision tree learning. Hamilton, NZ. Jerome Friedman, Trevor Hastie, Robert Tibshirani (2000). Additive logistic regression: A statistical view of boosting. Annals of Statistics. 28(2):337-407.

Last Version: 1.0.4

cascadeKMeans

nz.ac.waikato.cms.weka : cascadeKMeans

k-means clustering with automatic selection of k. Restarts k-means and selects the best k using the Calinski and Harabasz criterion, without cross-validation.

Last Version: 1.0.4

chiSquaredAttributeEval

nz.ac.waikato.cms.weka : chiSquaredAttributeEval

Evaluates the worth of an attribute by computing the value of the chi-squared statistic with respect to the class.

Last Version: 1.0.4

citationKNN

nz.ac.waikato.cms.weka : citationKNN

Modified version of the Citation kNN multi-instance classifier. For more information see: Jun Wang, Jean-Daniel Zucker: Solving the Multiple-Instance Problem: A Lazy Learning Approach. In: 17th International Conference on Machine Learning, 1119-1125, 2000.

Last Version: 1.0.2

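A sketch, assuming the classifier is exposed as weka.classifiers.mi.CitationKNN; multi-instance data uses Weka's bag format (bag id, relational attribute, class), and the dataset path is a placeholder.

    import weka.classifiers.mi.CitationKNN;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class CitationKnnDemo {
        public static void main(String[] args) throws Exception {
            Instances bags = DataSource.read("/path/to/mi-dataset.arff"); // placeholder, multi-instance format
            bags.setClassIndex(bags.numAttributes() - 1);
            CitationKNN cknn = new CitationKNN();
            cknn.buildClassifier(bags);
        }
    }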

classAssociationRules

nz.ac.waikato.cms.weka : classAssociationRules

Class association rules algorithms (including an implementation of the CBA algorithm). For more information see: W. Li, J. Han, J. Pei: CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules. In: ICDM'01, 369-376, 2001. B. Liu, W. Hsu, Y. Ma: Integrating Classification and Association Rule Mining. In: KDD'98, 80-86, 1998.

Last Version: 1.0.3

classificationViaClustering

nz.ac.waikato.cms.weka : classificationViaClustering

A simple meta-classifier that uses a clusterer for classification. For clustering algorithms that use a fixed number of clusters, like SimpleKMeans, the user has to make sure that the number of clusters to generate is the same as the number of class labels in the dataset in order to obtain a useful model. Note: at prediction time, a missing value is returned if no cluster is found for the instance. The code is based on the 'clusters to classes' functionality of the weka.clusterers.ClusterEvaluation class by Mark Hall.

Last Version: 1.0.7

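A sketch illustrating the note above about matching the number of clusters to the number of class labels; the class name weka.classifiers.meta.ClassificationViaClustering is assumed, and the dataset path is a placeholder.

    import weka.classifiers.meta.ClassificationViaClustering;
    import weka.clusterers.SimpleKMeans;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class CvcDemo {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("/path/to/dataset.arff"); // placeholder
            data.setClassIndex(data.numAttributes() - 1);
            SimpleKMeans kmeans = new SimpleKMeans();
            kmeans.setNumClusters(data.numClasses()); // one cluster per class label
            ClassificationViaClustering cvc = new ClassificationViaClustering();
            cvc.setClusterer(kmeans);
            cvc.buildClassifier(data);
        }
    }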

clojureClassifier

nz.ac.waikato.cms.weka : clojureClassifier

Wrapper classifier for classifiers written in the Clojure language.

Last Version: 1.0.1

complementNaiveBayes

nz.ac.waikato.cms.weka : complementNaiveBayes

Class for building and using a Complement class Naive Bayes classifier. For more information see: Jason D. Rennie, Lawrence Shih, Jaime Teevan, David R. Karger: Tackling the Poor Assumptions of Naive Bayes Text Classifiers. In: ICML, 616-623, 2003. P.S.: TF, IDF and length normalization transforms, as described in the paper, can be performed through weka.filters.unsupervised.attribute.StringToWordVector.

Last Version: 1.0.3

conjunctiveRule

nz.ac.waikato.cms.weka : conjunctiveRule

This class implements a single conjunctive rule learner that can predict numeric and nominal class labels. A rule consists of antecedents "AND"ed together and the consequent (class value) for the classification/regression. In this case, the consequent is the distribution of the available classes (or the mean for a numeric value) in the dataset. If a test instance is not covered by this rule, it is predicted using the default class distribution/value of the data not covered by the rule in the training data. This learner selects an antecedent by computing the information gain of each antecedent and prunes the generated rule using Reduced Error Pruning (REP) or simple pre-pruning based on the number of antecedents. For classification, the information of one antecedent is the weighted average of the entropies of both the data covered and not covered by the rule. For regression, the information is the weighted average of the mean-squared errors of both the data covered and not covered by the rule. In pruning, the weighted average of the accuracy rates on the pruning data is used for classification, while the weighted average of the mean-squared errors on the pruning data is used for regression.

Last Version: 1.0.4

consistencySubsetEval

nz.ac.waikato.cms.weka : consistencySubsetEval

Evaluates the worth of a subset of attributes by the level of consistency in the class values when the training instances are projected onto the subset of attributes. The consistency of any subset can never be lower than that of the full set of attributes, hence the usual practice is to use this subset evaluator in conjunction with a Random or Exhaustive search which looks for the smallest subset with consistency equal to that of the full set of attributes. See: H. Liu, R. Setiono: A probabilistic approach to feature selection - A filter solution. In: 13th International Conference on Machine Learning, 319-327, 1996.

Last Version: 1.0.4

costSensitiveAttributeSelection

nz.ac.waikato.cms.weka : costSensitiveAttributeSelection

This package provides two meta attribute selection evaluators - one for performing cost-sensitive attribute evaluation (CostSensitiveAttributeEval) and a second for performing cost-sensitive subset evaluation (CostSensitiveSubsetEval). Both methods take a cost matrix and a base evaluator. If the base evaluator can handle instance weights, then the training data is weighted according to the cost matrix, otherwise the training data is sampled according to the cost matrix.

Last Version: 1.0.3

dagging

nz.ac.waikato.cms.weka : dagging

This meta classifier creates a number of disjoint, stratified folds out of the data and feeds each chunk of data to a copy of the supplied base classifier. Predictions are made via majority vote, since all the generated base classifiers are put into the Vote meta classifier. Useful for base classifiers that are quadratic or worse in time behavior with respect to the number of instances in the training data. For more information, see: Ting, K. M., Witten, I. H.: Stacking Bagged and Dagged Models. In: Fourteenth International Conference on Machine Learning, San Francisco, CA, 367-375, 1997.

Last Version: 1.0.3

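A sketch, assuming the meta classifier is exposed as weka.classifiers.meta.Dagging; SMO is chosen as the base learner since dagging targets base classifiers that are quadratic or worse in the number of instances, and the dataset path and fold count are illustrative.

    import weka.classifiers.functions.SMO;
    import weka.classifiers.meta.Dagging;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class DaggingDemo {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("/path/to/dataset.arff"); // placeholder
            data.setClassIndex(data.numAttributes() - 1);
            Dagging dagging = new Dagging();
            dagging.setNumFolds(10);          // number of disjoint, stratified folds
            dagging.setClassifier(new SMO()); // a copy of the base classifier is trained per fold
            dagging.buildClassifier(data);
        }
    }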