As no one decision tree building method (or, for that
matter, machine learning method) is the best for all datasets, we feel
that a machine learning researcher/practitioner should experiment with
as many methods as possible when attempting to solve a problem.
---S.K. Murthy, S. Kasif, S. Salzberg, README for OC1
MLC++ supports many inducers (induction algorithms), but there is an important dichotomy. The first type, called a (regular) inducer, must be implemented within MLC++ itself. The second type, called a base inducer, can either be implemented in MLC++ or be an external inducer. A base inducer cannot categorize individual instances, only a set of instances: it is given the training set and the test set and returns the accuracy. All external inducers interfaced through MLC++ (e.g., C4.5, PEBLS, aha-IB, OC1, and T2) are base inducers.

Some MLC++ algorithms are also base inducers, or may behave as such under certain conditions. For example, if the FSS (feature subset selection) inducer option SHOW_REAL_ACC is not ``never,'' then FSS behaves like a base inducer because it must have access to the test set to display the real accuracy as it progresses (this accuracy is not used in the induction process; it is only used for display purposes). Beyond these technical details, some wrappers (e.g., bagging) support operations only on regular inducers, and confusion matrices in the ``Inducer'' utility are an option provided only for regular inducers. We now describe the available inducers and their options.