Data Mining – RIPPER Algorithm. Research and answer the questions. Submit your responses in a separate document. Be sure to label the questions correctly. Choose 4 of the 5 problems.

1. The RIPPER algorithm (by Cohen [1]) is an extension of an earlier algorithm called IREP (by Fürnkranz and Widmer). Both algorithms apply the **reduced-error pruning** method to determine whether a rule needs to be pruned. The reduced-error pruning method uses a validation set to estimate the generalization error of a classifier. Consider the following pair of rules:

*R*1: *A* → *C*

*R*2: *A* ∧ *B* → *C*

*R*2 is obtained by adding a new conjunct, *B*, to the left-hand side of *R*1. For this question, you will determine whether *R*2 is preferred over *R*1 from the perspectives of rule growing and rule pruning. To determine whether a rule should be pruned, IREP computes the following measure:

*vIREP* = (*p* + (*N* − *n*)) / (*P* + *N*),

where *P* is the total number of positive examples in the validation set, *N* is the total number of negative examples in the validation set, *p* is the number of positive examples in the validation set covered by the rule, and *n* is the number of negative examples in the validation set covered by the rule. *vIREP* is similar to the classification accuracy on the validation set. IREP favors rules that have higher values of *vIREP*. On the other hand, RIPPER applies the following measure to determine whether a rule should be pruned:

*vRIPPER* = (*p* − *n*) / (*p* + *n*).
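As a minimal sketch of the two pruning measures (assuming the standard definitions *vIREP* = (*p* + (*N* − *n*)) / (*P* + *N*) and *vRIPPER* = (*p* − *n*) / (*p* + *n*); the function names here are illustrative, not from either paper):

```python
def v_irep(P, N, p, n):
    """IREP's measure: accuracy on the validation set if the rule
    predicts positive for the examples it covers and negative for
    the rest (it correctly handles p positives and N - n negatives)."""
    return (p + (N - n)) / (P + N)


def v_ripper(p, n):
    """RIPPER's measure: the margin of positives over negatives
    among only the examples the rule covers."""
    return (p - n) / (p + n)
```

For example, with *P* = *N* = 1000, a rule covering *p* = 300 positives and *n* = 100 negatives scores *vIREP* = 0.6 and *vRIPPER* = 0.5. Note that *vRIPPER* ignores the uncovered examples entirely, which is why the two measures can disagree.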

Do a, b, and c below:

(a) Suppose *R*1 covers 350 positive examples and 150 negative examples, while *R*2 covers 300 positive examples and 50 negative examples. Compute FOIL's information gain for the rule *R*2 with respect to *R*1.
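FOIL's information gain is not defined in this handout; the standard definition (from Quinlan's FOIL) for extending a rule covering *p*0 positives and *n*0 negatives into one covering *p*1 positives and *n*1 negatives can be sketched as:

```python
import math

def foil_gain(p0, n0, p1, n1):
    """FOIL's information gain, standard form:
    p1 * (log2(p1 / (p1 + n1)) - log2(p0 / (p0 + n0))).
    p0, n0: positives/negatives covered by the original rule;
    p1, n1: positives/negatives covered by the extended rule."""
    return p1 * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))
```

The gain is positive whenever the extended rule's precision improves, weighted by how many positives it still covers.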

(b) Consider a validation set that contains 500 positive examples and 500 negative examples. For *R*1, suppose the number of positive examples covered by the rule is 200, and the number of negative examples covered by the rule is 50. For *R*2, suppose the number of positive examples covered by the rule is 100, and the number of negative examples covered by the rule is 5. Compute *vIREP* for both rules. Which rule does IREP prefer?

(c) Compute *vRIPPER* for the previous problem. Which rule does RIPPER prefer?

2. C4.5rules is an implementation of an indirect method for generating rules from a decision tree. RIPPER is an implementation of a direct method that generates rules directly from the data. (Do both a and b below.)

(a) Discuss the strengths and weaknesses of both methods.

(b) Consider a data set with a large difference in class sizes (i.e., some classes are much bigger than others). Which method (C4.5rules or RIPPER) is better at finding high-accuracy rules for the small classes?

3. Consider a training set that contains 100 positive examples and 400 negative examples. For each of the following candidate rules (**Optional Extra Credit Question**),

*R*1: *A* → + (covers 4 positive and 1 negative examples),

*R*2: *B* → + (covers 30 positive and 10 negative examples),

*R*3: *C* → + (covers 100 positive and 90 negative examples),

determine which is the best and worst candidate rule according to:

(a) Rule accuracy. (optional extra credit, +2)


(b) FOIL’s information gain. (optional extra credit, +2)

(c) The likelihood ratio statistic. (optional extra credit, +2)

(d) The Laplace measure (optional extra credit, +2)

(e) The m-estimate measure (with *k* = 2 and *p*+ = 0.2). (optional extra credit, +2)
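The measures in (a)–(e) are not defined in this handout; the following sketch uses their standard textbook definitions (variable names are illustrative). Here *p* and *n* are the positive/negative examples covered by a candidate rule, *P* and *N* the totals in the training set, *k* the number of classes, and *p*+ the positive-class prior used by the m-estimate:

```python
import math

def accuracy(p, n):
    """Rule accuracy: fraction of covered examples that are positive."""
    return p / (p + n)

def foil_gain(p0, n0, p1, n1):
    """FOIL's information gain; for (b), compare each candidate
    (p1, n1) against the initial rule covering all P positives and
    N negatives (p0 = P, n0 = N)."""
    return p1 * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

def likelihood_ratio(p, n, P, N):
    """Likelihood ratio statistic: 2 * sum_i f_i * ln(f_i / e_i),
    where e_i is the count expected under the class priors for the
    p + n covered examples."""
    covered = p + n
    expected = ((p, covered * P / (P + N)), (n, covered * N / (P + N)))
    return 2 * sum(f * math.log(f / e) for f, e in expected if f > 0)

def laplace(p, n, k=2):
    """Laplace measure with k classes."""
    return (p + 1) / (p + n + k)

def m_estimate(p, n, k, p_plus):
    """m-estimate with equivalent sample size k and prior p_plus."""
    return (p + k * p_plus) / (p + n + k)
```

With *k* = 2 and *p*+ = 0.5, the m-estimate reduces to the Laplace measure, which is one way to sanity-check an implementation.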

4. Figure 1 below illustrates the Bayesian belief network for the data set shown in Table 1. (Assume that all the attributes are binary.) Solve a and b below.

**Figure 1:** Bayesian belief network (nodes: Mileage, Engine, Air Conditioner, Car Value).

**Table 1: **Data set for question.

| Mileage | Engine | Air Conditioner | Number of Records with Car Value = Hi | Number of Records with Car Value = Lo |
|---------|--------|-----------------|---------------------------------------|---------------------------------------|
| Hi      | Good   | Working         | 3                                     | 4                                     |
| Hi      | Good   | Broken          | 1                                     | 2                                     |
| Hi      | Bad    | Working         | 1                                     | 5                                     |
| Hi      | Bad    | Broken          | 0                                     | 4                                     |
| Lo      | Good   | Working         | 9                                     | 0                                     |
| Lo      | Good   | Broken          | 5                                     | 1                                     |
| Lo      | Bad    | Working         | 1                                     | 2                                     |
| Lo      | Bad    | Broken          | 0                                     | 2                                     |

(a) Draw the probability table for each node in the network.

(b) Use the Bayesian network to compute P(Engine = Bad, Air Conditioner = Broken).

5. Given the Bayesian network shown below, compute the following probabilities (a, b, and c below):

Nodes: Battery (B), Fuel (F), Gauge (G), Start (S).

P(B = bad) = 0.1

P(F = empty) = 0.2

P(G = empty | B = good, F = not empty) = 0.1

P(G = empty | B = good, F = empty) = 0.8

P(G = empty | B = bad, F = not empty) = 0.2

P(G = empty | B = bad, F = empty) = 0.9

P(S = no | B = good, F = not empty) = 0.1

P(S = no | B = good, F = empty) = 0.8

P(S = no | B = bad, F = not empty) = 0.9

P(S = no | B = bad, F = empty) = 1.0

**Figure:** Bayesian belief network.

(a) *P*(B = good, F = empty, G = empty, S = yes).

(b) *P*(B = bad, F = empty, G = not empty, S = no).

(c) Given that the battery is bad, compute the probability that the car will start.
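The conditional tables listed above imply that Battery and Fuel are root nodes and that Gauge and Start each depend on both, so the joint probability factors as P(B, F, G, S) = P(B) P(F) P(G | B, F) P(S | B, F). A sketch of the computation under that factorization (function names are illustrative):

```python
# Given parameters from the figure above.
P_B_BAD = 0.1
P_F_EMPTY = 0.2
# P(G = empty | B, F) and P(S = no | B, F), keyed by (b_good, f_empty).
P_G_EMPTY = {(True, False): 0.1, (True, True): 0.8,
             (False, False): 0.2, (False, True): 0.9}
P_S_NO = {(True, False): 0.1, (True, True): 0.8,
          (False, False): 0.9, (False, True): 1.0}

def joint(b_good, f_empty, g_empty, s_yes):
    """P(B, F, G, S) = P(B) * P(F) * P(G | B, F) * P(S | B, F)."""
    p = (1 - P_B_BAD) if b_good else P_B_BAD
    p *= P_F_EMPTY if f_empty else (1 - P_F_EMPTY)
    pg = P_G_EMPTY[(b_good, f_empty)]
    p *= pg if g_empty else (1 - pg)
    ps = P_S_NO[(b_good, f_empty)]
    p *= (1 - ps) if s_yes else ps
    return p

def p_start_given_bad_battery():
    """P(S = yes | B = bad), for part (c): marginalize over Fuel.
    B and F are independent roots, so P(F | B = bad) = P(F), and
    Gauge is irrelevant once B and F are fixed."""
    return sum((1 - P_S_NO[(False, f)]) *
               (P_F_EMPTY if f else 1 - P_F_EMPTY)
               for f in (False, True))
```

Parts (a) and (b) are single calls to `joint` with the stated assignments; part (c) uses the marginalization in `p_start_given_bad_battery`.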