IRAQI JOURNAL OF STATISTICAL SCIENCES
https://stats.mosuljournals.com/
Feed generated: Thu, 06 Aug 2020 23:16:52 +0100

Use the PSO algorithm to estimate the Cox process parameters
https://stats.mosuljournals.com/article_165442_13096.html
Abstract: This paper proposes using the PSO algorithm to estimate the rate of occurrence over the time intervals of a Cox process. The results of the proposed estimation method were compared with the maximum likelihood estimator of the rate of occurrence. The research includes a real application: the successive operating periods, in days, between two consecutive stoppages of the raw-material mill at the General Company for Northern Cement / New Badush Cement Plant, over the period from 1/4/2018 to 31/1/2019. The average machine operating time was estimated by the methods proposed in the research.
Sun, 31 May 2020 19:30:00 +0100

New approach for data encryption and hiding by the EMD method
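As a rough illustration of the idea in the PSO article above (not the paper's implementation), the sketch below uses a minimal particle swarm, with assumed inertia and acceleration coefficients and made-up operating periods, to maximise the likelihood of a constant event rate, and compares the result with the closed-form maximum likelihood estimate:

```python
import math
import random

def neg_log_lik(rate, gaps):
    # Negative log-likelihood of exponential inter-event times.
    if rate <= 0:
        return float("inf")
    return -(len(gaps) * math.log(rate) - rate * sum(gaps))

def pso_rate_estimate(gaps, n_particles=20, n_iter=100, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(0.01, 5.0) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                                   # per-particle best positions
    pbest_val = [neg_log_lik(p, gaps) for p in pos]
    gbest_val, gbest = min(zip(pbest_val, pbest))    # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Assumed PSO coefficients: inertia 0.7, cognitive/social 1.5.
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            val = neg_log_lik(pos[i], gaps)
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest

gaps = [1.2, 0.8, 2.5, 1.1, 0.9, 3.0, 1.7]   # illustrative operating periods (days)
mle = len(gaps) / sum(gaps)                   # closed-form ML rate estimate
pso_est = pso_rate_estimate(gaps)
```

On this one-dimensional, unimodal objective the swarm should land essentially on the MLE, which is the comparison the abstract describes.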
https://stats.mosuljournals.com/article_165443_13096.html
The aim of the research is to encrypt input text using a suggested encryption method called Merge Substitution Transposition (MST), based on a proposed table of characters that integrates the substitution and transposition methods, and then to hide the encrypted text inside an image using the Exploiting Modification Direction (EMD) method, a modern and efficient concealment method, before sending the image to the receiver. The system then acts as the receiver of the sent image: it extracts the embedded data, decrypts the received ciphertext, and recovers the original text. The scheme was applied to RGB images and the proposed algorithm was implemented in Matlab. The algorithm achieved good results on the colour images, measured by the PSNR and MSE efficiency measures, and retrieved all of the original text.
Sun, 31 May 2020 19:30:00 +0100

Using Logistic Regression with a Time-Stratified Method for Forecasting Air Pollution Datasets
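The EMD embedding step used in the article above can be sketched as follows. This shows only the standard EMD scheme, with extraction function f(g) = (Σ i·g_i) mod (2n+1) over a group of n pixel values, where one digit in base 2n+1 is embedded by changing at most one pixel by ±1; the paper's MST cipher and image pipeline are not reproduced:

```python
def emd_extract(pixels):
    # Extraction function: f = (sum of i * g_i) mod (2n + 1), i = 1..n.
    n = len(pixels)
    return sum((i + 1) * g for i, g in enumerate(pixels)) % (2 * n + 1)

def emd_embed(pixels, digit):
    # Embed one digit in base (2n + 1) by changing at most one pixel by +-1.
    n = len(pixels)
    m = 2 * n + 1
    out = list(pixels)
    diff = (digit - emd_extract(out)) % m
    if diff == 0:
        return out                 # group already carries the digit
    if diff <= n:
        out[diff - 1] += 1         # +1 on the pixel with weight `diff`
    else:
        out[m - diff - 1] -= 1     # -1 on the pixel with weight `m - diff`
    return out
```

For example, with a pixel pair (n = 2, base 5), embedding the digit 3 alters one pixel value by one grey level, which is why EMD distortion, and hence the PSNR/MSE figures the abstract reports, stays low.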
https://stats.mosuljournals.com/article_165444_13096.html
Abstract: Studying and forecasting particulate matter (PM10) is necessary to control and reduce damage to the environment and human health. Many pollutants, as sources of air pollution, may affect the PM10 variable. The datasets studied were taken from the Kuala Lumpur meteorological station, Malaysia. Logistic regression (LR) is built using the generalized linear model, a special case of linear statistical methods, and may therefore give inaccurate results when used with nonlinear datasets. A time-stratified (TS) method, in several styles, is proposed to make the datasets more homogeneous: observations from similar seasons in different years are ordered together to formulate a new variable smoother than the original. The results of the LR model in this study show that the time-stratified datasets outperform the full dataset. In conclusion, LR forecasting can be relied upon after time-stratifying the datasets, achieving more accuracy with nonlinear multivariate datasets in which PM10 is the dependent variable.
Sun, 31 May 2020 19:30:00 +0100

Comparisons between Logistic Regression and Support Vector Machine for Air Pollution Datasets ...
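The time-stratification idea described above, pooling the same season across different years so each stratum is more homogeneous, might be sketched like this (the season mapping and record layout are assumptions for illustration, not the paper's exact stratification styles):

```python
from collections import defaultdict

def stratify_by_season(records):
    """records: list of (year, month, value); returns {season: [values]}.

    Observations from the same season in different years are pooled,
    so each stratum can be modelled separately.
    """
    seasons = {12: "DJF", 1: "DJF", 2: "DJF",   # meteorological seasons
               3: "MAM", 4: "MAM", 5: "MAM",
               6: "JJA", 7: "JJA", 8: "JJA",
               9: "SON", 10: "SON", 11: "SON"}
    strata = defaultdict(list)
    for year, month, value in records:
        strata[seasons[month]].append(value)
    return dict(strata)
```

A regression model fitted within each stratum then sees a smoother, more homogeneous series than the full chronological dataset.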
https://stats.mosuljournals.com/article_165445_13096.html
Abstract: Studying and forecasting particulate matter (PM10) is necessary to control and reduce damage to the environment and human health. Many pollutants, as sources of air pollution (CO, SO2, O3, NOx, NO, wind speed, and ambient temperature), may affect the PM10 variable. PM10 and the pollutant variables were taken from the meteorological station in Kuala Lumpur, Malaysia; all of these variables are classified as nonlinear data. The logistic regression (LR) model can be used for modelling and forecasting these multivariable datasets, but LR is a linear statistical method and may therefore give inaccurate results when used with nonlinear datasets. To improve the forecasting results, the support vector machine (SVM) method is suggested in this study. The results show that SVM outperforms LR. In conclusion, SVM forecasting can be used for more accuracy with nonlinear multivariate datasets when PM10 is the dependent variable. Keywords: Logistic Regression (LR), Support Vector Machine (SVM), Particulate Matter (PM10), Forecasting, Air Pollution
Sun, 31 May 2020 19:30:00 +0100

Inverse Generalized Gamma Distribution with its properties
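The LR-versus-SVM contrast in the article above comes down to the loss each model minimises: log loss for logistic regression, hinge loss for a linear SVM. A minimal pure-Python sketch, fitting both by (sub)gradient descent on a tiny synthetic dataset (learning rate, epochs, and data are illustrative assumptions, not the paper's PM10 pipeline):

```python
import math

def train(X, y, loss_grad, lr=0.1, epochs=300):
    # Generic per-sample gradient descent on a linear score w.x + b.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi)) + b
            g = loss_grad(s, yi)                     # d(loss)/d(score)
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def log_loss_grad(score, y):
    # Gradient of log(1 + exp(-y * score)), labels y in {-1, +1}: logistic regression.
    return -y / (1.0 + math.exp(y * score))

def hinge_grad(score, y):
    # Subgradient of the hinge loss max(0, 1 - y * score): linear SVM.
    return -y if y * score < 1 else 0.0

X = [[0.0, 1.0], [1.0, 2.0], [3.0, 0.5], [4.0, 1.5]]   # toy 2-D data
y = [-1, -1, 1, 1]
w_lr, b_lr = train(X, y, log_loss_grad)
w_svm, b_svm = train(X, y, hinge_grad)
```

On separable toy data both models classify correctly; the practical differences the paper reports arise on nonlinear data, where kernel SVMs have extra flexibility that a linear LR lacks.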
https://stats.mosuljournals.com/article_165446_13096.html
Abstract: In this paper we introduce a new lifetime distribution, based on the reciprocal of a Generalized Gamma (GG) random variable. The new distribution is called the Inverse Generalized Gamma (IGG) distribution, and several known inverse distributions are special cases of it. An important benefit of this distribution is its ability to fit skewed data that cannot be fitted accurately by many other, ungeneralized lifetime distributions; it has applications to pollution data, engineering, biological fields, and reliability. Some theoretical properties of the distribution are studied, such as its moments, mode, median, and others. It is concluded that the distribution is skewed with a heavy tail, and that the skewness increases as the shape parameters increase, while the scale parameter has no effect on the skewness and kurtosis.
Sun, 31 May 2020 19:30:00 +0100

Bayesian estimation for Life-Time distribution parameter under Compound Loss Function with ...
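To make the construction in the IGG abstract concrete: assuming the Stacy parameterization of the GG density (an assumption; the paper's parameterization may differ), the IGG density follows from the change of variable Y = 1/X:

```latex
% If X ~ GG(\alpha, \beta, \theta) with density
% f_X(x) = \frac{\beta\, x^{\alpha\beta-1} e^{-(x/\theta)^{\beta}}}{\theta^{\alpha\beta}\,\Gamma(\alpha)}, \quad x > 0,
% then Y = 1/X has density f_Y(y) = f_X(1/y)\, y^{-2}, giving the IGG form
f_Y(y) \;=\; \frac{\beta}{\Gamma(\alpha)\,\theta^{\alpha\beta}}\;
             y^{-\alpha\beta-1}\, e^{-\left(1/(\theta y)\right)^{\beta}},
             \qquad y > 0 .
```

Setting β = 1 recovers the inverse gamma, and α = 1 an inverse Weibull, which is the sense in which known inverse distributions are special cases.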
https://stats.mosuljournals.com/article_165447_13096.html
Abstract: This research aims to find the Bayes estimator under two loss functions, one symmetric and one asymmetric (the squared-log error loss function and the entropy loss function), as well as under a loss function that combines them, called the compound loss function, which is asymmetric in nature. The Bayes estimators for the scale parameter of the Life-Time distribution, which includes a collection of known distributions, are compared under the proposed compound loss function and the loss functions it contains, and the optimal sample size is also estimated. The comparison uses the mean squared error (MSE) criterion: random data are generated by simulation to estimate the parameters of the Weibull distribution, a special case of the Life-Time distribution, for different sample sizes (n = 10, 50, 100) with (N = 1000) replications, taking initial values for the parameters, in order to reach the balanced estimator that combines the two loss functions.
Sun, 31 May 2020 19:30:00 +0100

Multicollinearity in Logistic Regression Model -Subject Review-
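One way to see a compound-loss Bayes estimator numerically (a sketch with an assumed posterior and mixing weight, not the paper's derivation): draw posterior samples for the scale θ and minimise the Monte Carlo estimate of the posterior expected compound loss over a grid of candidate estimates.

```python
import math
import random

random.seed(0)

# Stand-in posterior samples for the scale theta: a conjugate-style gamma
# posterior for an exponential rate, then inverted (assumed setup).
theta_samples = [1.0 / random.gammavariate(5.0, 0.125) for _ in range(2000)]

def compound_loss(est, theta, p=0.5):
    sq_log = (math.log(est) - math.log(theta)) ** 2        # squared-log error
    entropy = est / theta - math.log(est / theta) - 1.0    # entropy loss
    return p * sq_log + (1.0 - p) * entropy                # compound mixture

def bayes_estimate(samples, p=0.5):
    # The Bayes estimator minimises posterior expected loss; here the
    # expectation is a sample average and the minimiser is found on a grid.
    grid = [0.5 + 0.02 * k for k in range(150)]
    risks = [sum(compound_loss(e, t, p) for t in samples) / len(samples)
             for e in grid]
    return grid[risks.index(min(risks))]

theta_hat = bayes_estimate(theta_samples)
```

With p = 0 this recovers the entropy-loss estimator 1/E[1/θ | data], and with p = 1 the squared-log estimator exp(E[ln θ | data]); the compound estimate sits between the two, which is the balancing behaviour the abstract refers to.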
https://stats.mosuljournals.com/article_165448_13096.html
Abstract: The logistic regression model is one of the modern statistical methods developed to predict a qualitative (nominal or ordinal) response from a set of variables. It is considered an alternative to the simple and multiple linear regression equations, and it follows the same modelling concepts: the overall effect of the group of independent variables on the dependent variable can be tested, and standard goodness-of-fit criteria apply. In some cases there is correlation between the explanatory variables, which inflates the variance of the estimates; this is called the problem of multicollinearity. This research reviews several biased methods for estimating the parameters of the logistic regression model that reduce the multicollinearity between the variables. The methods were compared using the mean squared error (MSE) criterion and applied to Monte Carlo simulation data to evaluate and compare their performance, as well as to real data. Both the simulation results and the real application show that the logistic ridge estimator is the best of the methods considered.
Sun, 31 May 2020 19:30:00 +0100

Building Discriminant Function - Review
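A sketch of the ridge idea behind the logistic ridge estimator discussed above (an illustrative gradient-descent fit, not the estimator studied in the paper): an L2 penalty term λw is added to the log-loss gradient, which shrinks the coefficients and stabilises them when predictors are nearly collinear.

```python
import math

def fit_logistic(X, y, ridge=0.0, lr=0.1, epochs=500):
    # Gradient descent on mean log-loss plus an L2 (ridge) penalty on w.
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(epochs):
        grad = [ridge * wj for wj in w]              # penalty gradient: lambda * w
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-s))           # predicted probability
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj             # log-loss gradient
        w = [wj - lr * gj / n for wj, gj in zip(w, grad)]
    return w

# Two nearly collinear predictors: exactly the setting where the unpenalised
# ML estimates become unstable and the ridge estimator helps.
X = [[1.0, 1.01], [2.0, 1.98], [-1.0, -1.02], [-2.0, -2.01]]
y = [1, 1, 0, 0]
w_ml = fit_logistic(X, y, ridge=0.0)
w_ridge = fit_logistic(X, y, ridge=1.0)
```

The ridge fit trades a small bias for a large variance reduction, which is why biased estimators can win on the MSE criterion the review uses.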
https://stats.mosuljournals.com/article_165449_13096.html
Abstract: Discriminant analysis has been widely used to classify data into subgroups based on certain criteria. The classification process depends on choosing the variables that show statistical significance, then using the selected variables to build the discriminant function. To investigate the statistical significance of the variables in our data, we used the Roy-Bose procedure for constructing confidence intervals together with the t-test, one of the popular variable-selection methods in discriminant analysis. In addition, some other variable-selection techniques were employed, namely the forward-selection, backward-selection, and stepwise-selection methods, which are usually used to select variables in linear regression analysis. Furthermore, a principal component analysis was carried out for the purpose of choosing the variables with high statistical significance. The selected variables were then used to build the discriminant function.
Sun, 31 May 2020 19:30:00 +0100
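Once the variables are selected, the two-group linear discriminant function itself has the classical Fisher form w = S⁻¹(x̄₁ − x̄₂) with a midpoint cutoff, where S is the pooled within-group covariance. A self-contained 2-D sketch on illustrative data (not the paper's dataset):

```python
def mean_vec(rows):
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

def pooled_cov(a, b):
    # Pooled within-group covariance for two groups of 2-D points.
    def scatter(rows, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s
    sa = scatter(a, mean_vec(a))
    sb = scatter(b, mean_vec(b))
    n = len(a) + len(b) - 2
    return [[(sa[i][j] + sb[i][j]) / n for j in range(2)] for i in range(2)]

def discriminant(a, b):
    # Fisher rule: w = S^{-1}(mean_a - mean_b), cutoff at the midpoint.
    ma, mb = mean_vec(a), mean_vec(b)
    s = pooled_cov(a, b)
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * d[0] + inv[0][1] * d[1],
         inv[1][0] * d[0] + inv[1][1] * d[1]]
    cut = 0.5 * (w[0] * (ma[0] + mb[0]) + w[1] * (ma[1] + mb[1]))
    return w, cut          # assign to group a when w . x > cut

group_a = [[2.0, 3.0], [3.0, 3.5], [2.5, 2.8]]   # illustrative group means differ
group_b = [[5.0, 6.0], [6.0, 6.5], [5.5, 5.8]]
w, cut = discriminant(group_a, group_b)
```

Variable selection (t-tests, stepwise methods, PCA) decides which coordinates enter x before this function is built.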