Bayesian Classifier for a Gaussian Distribution, Decision Surface Equation, with Application

Bayesian decision theory is a fundamental statistical approach to the problem of classification in pattern recognition. It assumes that the decision problem is posed in probabilistic terms and that all of the relevant probability values are known. To minimize the error probability in a classification problem, one must choose the state of nature that maximizes the posterior probability. Bayes' formula allows us to calculate such probabilities given the prior probabilities.

(i.e., the classes have the same distribution with the same parameter values). This was done as a special case by Muller, P. & Insua, D.R. (1995). This study is an attempt to generalize the Bayesian Surface Decision Equation (BSDE) to produce a linear separable equation as a linear classifier for classes containing random vectors that are identically Gaussian distributed but with different parameter values (m_i, Σ_i). Moreover, in this study the researcher searched for an equivalence between the Bayesian Surface Decision Equation and a linear perceptron for this general case, with a numeric application (encoded data vectors) that is illustrated later.

1-Introduction
Classification is the statistical task of learning a target function that maps each vector (X), or attribute set, to one of the predefined class labels (k_i). The target function is also known informally as a classification model, which is useful for descriptive modeling, where it serves as an explanatory tool to distinguish between objects of different classes, or for predictive modeling, where it is used to predict the class label of records never seen before. A classification model can be treated as a black box that automatically assigns a class label when presented with an unknown attribute set or vector of random variables; classification is thus a systematic approach for building such models from input data records.
Classification techniques employ a learning algorithm to identify the model that best fits the relationship between the attribute set and the class label of the input data. The fitted classifier should both fit the input data well and correctly predict the class labels of records it has never seen before. Therefore a key objective of classification is to build models that accurately predict the class labels of previously unknown records.
It is clear that the Bayesian Decision Rule (BDR) plays a great role in statistical data analysis in many areas of life, especially in stochastic processes. It is also a good tool for classification: to identify and compare several groups or statistical populations (classes), using the classifier's knowledge about the sources the observations come from, by identifying the behaviors of the random variables and assigning them back to their origin sources (the behaviors of the classes, i.e., their statistical distributions, must be known in advance). This is needed when one must classify observations drawn from several populations back to their origins. The same task can be performed by what is called a perceptron, or more generally a linear perceptron. There must therefore be a similarity, indeed a mathematical equivalence, between the BSDE and the perceptron.

1-1 The aim of the study
The aim of this study is to establish the following two important points: 1st/ verifying the equivalence between a single-layer perceptron and the difference between the two BDR discriminants for two classes. This difference is called the Bayesian Surface Decision Equation (BSDE), or Bayesian decision boundary, and is evaluated from two independent but not identical Gaussian classes (multivariate normal densities); i.e., each class contains vectors of random variables with a Gaussian distribution, but with a different mean vector and positive semi-definite variance-covariance matrix, as shown below, where p is the maximum number of random variates in class (k_i), i = 1, 2, m_i is the mean vector, and Σ_i is the variance-covariance matrix. 2nd/ applying this equivalence in a numeric application with encoded data vectors.

1-2 Hypothesis of the Study
The two points illustrated above in the aim of the study are suggested as the main hypotheses to be examined.

2-2 Bayesian Decision Rule (BDR) from Gaussian distribution.
As a special case of the Gaussian n-class problem, let a given random vector (X) have a mean vector that depends on the class (k_i) it belongs to, with a non-diagonal positive semi-definite covariance matrix, meaning that the samples drawn from the classes are correlated. For class k_i:

E_i(X) = m_i ,  Cov_i(X) = Σ_i ,

such that |Σ_i| is the determinant of the non-singular matrix Σ_i, so Σ_i^{-1} exists.
Now we can express the conditional probability density function of the random vector (X) as follows:

p(X | k_i) = [1 / ((2π)^{p/2} |Σ_i|^{1/2})] exp[ -(1/2)(X - m_i)^T Σ_i^{-1} (X - m_i) ]        (2)

The covariance matrix Σ_i is symmetric and positive semi-definite, always having a positive determinant; the diagonal element σ_kk is the variance of the k-th element of the pattern vectors, and the off-diagonal element σ_jk is the covariance of (x_j, x_k), which equals zero when they are stochastically independent. The decision function for class (k_i) may be chosen as:
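As a quick numerical check, the density in equation (2) can be evaluated directly. The sketch below (in Python, not part of the original study) assumes a positive-definite covariance matrix so that the inverse and the determinant both exist:

```python
import numpy as np

def gaussian_density(x, m, cov):
    """Conditional density p(x | k_i) of equation (2) for mean vector m
    and a positive-definite covariance matrix cov."""
    p = len(m)
    diff = x - m
    norm = 1.0 / ((2 * np.pi) ** (p / 2) * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)

# At x = m the exponent vanishes, so the density equals the
# normalizing constant 1 / ((2*pi)^(p/2) * |cov|^(1/2)).
print(gaussian_density(np.zeros(2), np.zeros(2), np.eye(2)))  # 1/(2*pi) ≈ 0.1591549
```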

d_i(X) = p(X | k_i) p(k_i)
Since the normal density has an exponential form, working with the logarithm of this decision function is more convenient; in other words, we may use the following form:

d_i(X) = log[ p(X | k_i) p(k_i) ] = log p(X | k_i) + log p(k_i)        (3)

Substituting equation (2) in equation (3) yields:

d_i(X) = log p(k_i) - (p/2) log 2π - (1/2) log|Σ_i| - (1/2)(X - m_i)^T Σ_i^{-1} (X - m_i)        (4)

and likewise for class k_j we get the decision equation:

d_j(X) = log p(k_j) - (p/2) log 2π - (1/2) log|Σ_j| - (1/2)(X - m_j)^T Σ_j^{-1} (X - m_j)        (5)

Subtracting equation (5) from equation (4) yields the Bayesian surface (boundary) decision equation:

d_ij(X) = log[ p(k_i)/p(k_j) ] - (1/2) log( |Σ_i| / |Σ_j| ) - (1/2)[ (X - m_i)^T Σ_i^{-1} (X - m_i) - (X - m_j)^T Σ_j^{-1} (X - m_j) ] = 0        (6)

If the variance-covariance matrix is common to the classes (Σ_i = Σ_j = Σ), the log-determinant term cancels and the quadratic terms in X cancel, so equation (6) reduces to:

d_ij(X) = log[ p(k_i)/p(k_j) ] + (m_i - m_j)^T Σ^{-1} X - (1/2)(m_i + m_j)^T Σ^{-1} (m_i - m_j) = 0        (7)

Now put i = 1 and j = 2; the decision boundary (separable equation) between the two classes in equation (7) becomes:

d_12(X) = log[ p(k_1)/p(k_2) ] + (m_1 - m_2)^T Σ^{-1} X - (1/2)(m_1 + m_2)^T Σ^{-1} (m_1 - m_2) = 0        (8)

Also, since the two classes have the same chance of appearance,

p(k_1) = p(k_2)  ⇒  log[ p(k_1)/p(k_2) ] = 0        (9)

so this quantity is dropped from the decision boundary, or Bayesian surface decision between the two classes, and equation (8) is reduced to:

d_12(X) = (m_1 - m_2)^T Σ^{-1} X - (1/2)(m_1 + m_2)^T Σ^{-1} (m_1 - m_2) = 0        (10)

Now recall equation (1) for the perceptron's mathematical formula, and suppose that the decision boundary above in equation (10) is the output of the perceptron (the network):

net = W^T X + θ = 0        (11)

Comparing equation (11) with the surface decision boundary in equation (10), we conclude that:

W = Σ^{-1} (m_1 - m_2)        (12)

and

θ = -(1/2)(m_1 + m_2)^T Σ^{-1} (m_1 - m_2)        (13)

Equations (12) and (13) state that the Bayes classifier (BSDE) for two Gaussian classes is indeed a perceptron of the form in equation (1).
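The reduction above can be illustrated with a small numerical sketch. The function names and the toy means and covariance below are illustrative, not the study's data:

```python
import numpy as np

def bsde_perceptron(m1, m2, cov):
    """Weights W and threshold theta that the common-covariance
    BSDE reduces to (equations (12) and (13))."""
    cov_inv = np.linalg.inv(cov)
    w = cov_inv @ (m1 - m2)                          # equation (12)
    theta = -0.5 * (m1 + m2) @ cov_inv @ (m1 - m2)   # equation (13)
    return w, theta

def classify(x, w, theta):
    """Decide class k1 (return 1) if net >= 0, else class k2 (return 2)."""
    net = w @ x + theta
    return 1 if net >= 0 else 2

# Toy example: equal priors, shared identity covariance.
m1, m2 = np.array([2.0, 2.0]), np.array([0.0, 0.0])
cov = np.eye(2)
w, theta = bsde_perceptron(m1, m2, cov)
print(w, theta)                                        # [2. 2.] -4.0
print(classify(m1, w, theta), classify(m2, w, theta))  # 1 2
```

Each class mean falls on its own side of the boundary, as the derivation requires.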

2-3 Minimum Error Rate Classification
Generally, in Bayesian decision theory for two-class classification, let {k_1, k_2} be the two states of nature, {d_1, d_2} the two possible actions, and γ(d_i | k_j), i, j = 1, 2, the loss incurred for taking action (d_i) when the state of nature is (k_j). Then for a random vector (X), the so-called feature vector, the conditional risks can be written as:

R(d_1 | X) = γ(d_1 | k_1) P(k_1 | X) + γ(d_1 | k_2) P(k_2 | X)        (14)
R(d_2 | X) = γ(d_2 | k_1) P(k_1 | X) + γ(d_2 | k_2) P(k_2 | X)        (15)

The minimum-risk decision rule becomes: decide (k_1) if

R(d_1 | X) < R(d_2 | X)        (16)

which corresponds to deciding (k_1) if

p(X | k_1) / p(X | k_2) > [ (γ(d_1 | k_2) - γ(d_2 | k_2)) / (γ(d_2 | k_1) - γ(d_1 | k_1)) ] · [ P(k_2) / P(k_1) ]        (17)

The left-hand side of inequality (17) is a likelihood ratio (L_12), and the right-hand side is a threshold (θ) that is independent of the observations in the vector (X). Then:

decide (k_1) if L_12(X) > θ        (18)

Now, for a given vector (X) that is to be classified, one calculates the threshold value (θ) from the right-hand side of inequality (17), calculates the likelihoods from equation (2) to find (L_12) from relation (18), and finally compares the two with each other. After this comparison, it is possible to take the minimum-risk decision given by relation (18).
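The threshold on the right-hand side of inequality (17) can be computed directly from a loss table. The sketch below uses a hypothetical loss table; with zero-one loss and equal priors the threshold reduces to 1, i.e., maximum-likelihood classification:

```python
def risk_threshold(loss, prior1, prior2):
    """Right-hand side of inequality (17): decide k1 when the
    likelihood ratio L12 exceeds this value.
    loss[i][j] = loss of action d_(i+1) when the true class is k_(j+1)."""
    return ((loss[0][1] - loss[1][1]) / (loss[1][0] - loss[0][0])) * (prior2 / prior1)

# Zero-one loss with equal priors: the threshold is 1, so the rule
# picks whichever class has the larger likelihood.
zero_one = [[0.0, 1.0], [1.0, 0.0]]
print(risk_threshold(zero_one, 0.5, 0.5))  # 1.0
```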

3-Application:
The application in this study classifies two levels of income in Sulaimani (center) between two areas [(Azmar, class k_1) and (Shorsh, class k_2)] according to four variables: X1: average income; X2: average number of persons per family; X3: average expenditure as a percentage of income; X4: average number of workers per family.

3-1 Data Sampling
The cluster sampling technique was used for collecting the data: each area was divided into four sectors, and each sector was divided into four lines. Five families (a cluster) were selected randomly from each line, and the average of each variable described above was taken over these five families. So each sample contains 16 observations, each of which describes the four variables above and comes from the average of five families.

Iraqi Journal of Statistical Science (18), 2010.
In order to give the data a logical form, it was encoded to binary (0, 1) with respect to the arithmetic mean of each variable: if the value of the variable exceeds its mean, it is encoded as 1, and otherwise as 0.
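This mean-threshold encoding can be sketched as follows; the small data matrix is made up for illustration:

```python
import numpy as np

def encode_binary(samples):
    """Encode each variable as 1 if its value exceeds that variable's
    arithmetic mean over the sample, and 0 otherwise (column-wise)."""
    means = samples.mean(axis=0)
    return (samples > means).astype(int)

# Hypothetical raw averages for two variables over four clusters.
data = np.array([[3.0, 10.0],
                 [1.0, 30.0],
                 [2.0, 20.0],
                 [6.0, 40.0]])
print(encode_binary(data))  # column means are (3, 25)
```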

See the two following tables:
Table (1): First Encoded Sample (matrix) for Azmar (class k_1), and Table (2): Second Encoded Sample (matrix) for Shorsh (class k_2).
In order to apply the (BSDE) as a linear perceptron for the two classes (k_1) and (k_2) above, the following quantities must first be computed: the sample mean vectors

m_i = (1/16) Σ_{t=1}^{16} X_t^{(i)} ,  i = 1, 2,

and the common variance-covariance matrix Σ. Using these, the vector (W) and the threshold (θ) were calculated from equations (12) and (13). Now suppose we have a vector X^T = [ … ]; after observing the true values of the random vector and transforming them to binary (encoded) data, if (net ≥ 0) then the vector (X) comes from class (k_1), and otherwise from class (k_2).
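Since the study's encoded tables are not reproduced here, the following sketch runs the same pipeline on synthetic binary samples; the class proportions (0.7 versus 0.3) and the small regularization term are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two encoded 16x4 samples; the study's
# actual Azmar/Shorsh tables are not reproduced here.
k1 = (rng.random((16, 4)) < 0.7).astype(float)
k2 = (rng.random((16, 4)) < 0.3).astype(float)

m1 = k1.mean(axis=0)                  # m_i = (1/16) * sum of class-i vectors
m2 = k2.mean(axis=0)
pooled = np.vstack([k1, k2])
cov = np.cov(pooled, rowvar=False) + 1e-6 * np.eye(4)  # regularized common var-cov

cov_inv = np.linalg.inv(cov)
w = cov_inv @ (m1 - m2)                          # equation (12)
theta = -0.5 * (m1 + m2) @ cov_inv @ (m1 - m2)   # equation (13)

x = np.ones(4)                        # an encoded test vector of all 1s
net = w @ x + theta
print("class k1" if net >= 0 else "class k2")
```

By construction, each class mean lands on its own side of the boundary: net(m_1) = (1/2)(m_1 - m_2)^T Σ^{-1}(m_1 - m_2) > 0 and net(m_2) is its negative.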

3-2 Evaluation of classification
The major goal of the (BDR), and of decision theory for classification generally, is to minimize the cost of classification. To evaluate the result of classifying a chosen vector (X) into one of the two classes (k_1 or k_2), one can calculate the likelihood ratio to test the power of the classification. In the Bayesian classifier for a Gaussian environment, compared with the linear perceptron, let L_12 (see inequality (17)) be the likelihood ratio that the vector (X) comes from class (k_1) rather than from (k_2):

L_12(X) = p(X | k_1) / p(X | k_2)
And remember that (θ) is the threshold in inequality (17), calculated previously as θ = -0.28045. Now the three following decisions can be made:
1/ Assign the chosen vector (X) to class (k_1) if L_12(X) > θ.
2/ Assign the chosen vector (X) to class (k_2) if L_12(X) < θ.
3/ But if L_12(X) = θ, then an arbitrary decision may be made.
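The three-way rule above can be written down directly; the tolerance parameter and the tie label below are illustrative choices:

```python
def decide(l12, theta, tol=1e-12):
    """Three-way decision: k1 if L12 > theta, k2 if L12 < theta,
    and an arbitrary choice (flagged here) when they are equal."""
    if abs(l12 - theta) <= tol:
        return "arbitrary (tie)"
    return "k1" if l12 > theta else "k2"

# Using the study's computed threshold theta = -0.28045:
print(decide(0.5, -0.28045))       # k1
print(decide(-1.0, -0.28045))      # k2
print(decide(-0.28045, -0.28045))  # arbitrary (tie)
```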
The procedure above is a test of the fitness of the classification model, which may be decided by the investigator as an additional check on the comparison made using the minimum-risk rule, and it gives the same result.

4-Results and conclusions
Through this study the following important results and conclusions were reached:
1/ A generalized Bayesian surface decision equation (equation (12)) was established as a modified classifier, not only for identically and independently distributed vectors with common parameter values, as suggested by Muller, P. & Insua, D.R. (1995) (equation (7)), but also for identically distributed classes with different parameter values, especially in the Gaussian case.

2/ The generalization established in equation (12) is also equivalent to the linear perceptron illustrated in equation (13), and it can be used as a linear separable classification function to determine the source class of a given vector (X) taken from two separable classes.

2-1 Single Perceptron
It is usually called a linear perceptron, or simply a perceptron, and is defined as a linear classifier (linear separable equation, LSE) having one input layer and one output layer; the two layers and their nodes (neurons) are connected through a random set of weights (w_i) and a linear function followed by a hard limiter, as illustrated in the figure below:

net = Σ_{k=1}^{p} w_k x_k + θ = W^T X + θ ,  y = 1 if net ≥ 0, else y = 0        (1)
Figure (1): Perceptron. It is important to explain that the threshold (θ) is similar to the intercept in any linear system; net is the output produced by the system, while (y_i) is the desired output that we search for. After transforming sets of data referring to two populations into a logical form (AND, OR, XOR) and introducing them to the perceptron system as encoded data patterns (binary data), one can use it to separate the input data back into the origin categories (classes) they come from, by finding a relationship called a linear separable function. This process is indeed classification, and the perceptron system is a classifier.
Table (1): First Encoded Sample (matrix) for Azmar (class k_1). A test vector (X), described by the four variables in data tables (1 and 2) but with no knowledge of which class it belongs to, class (a) or class (b), is classified by substituting the two computed values (W and θ) into equation (13).
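The hard-limiter perceptron of equation (1) and Figure (1) can be sketched as follows, realizing logical AND on the binary (0, 1) patterns mentioned above; the particular weights and threshold are illustrative:

```python
def perceptron_output(x, w, theta):
    """Equation (1): net = sum_k w_k * x_k + theta, followed by a
    hard limiter that outputs 1 when net >= 0 and 0 otherwise."""
    net = sum(wk * xk for wk, xk in zip(w, x)) + theta
    return 1 if net >= 0 else 0

# With weights (1, 1) and threshold -1.5 the perceptron realizes
# logical AND on binary inputs.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron_output(x, (1, 1), -1.5))
```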