The backpropagation (BP) algorithm is used for training feed-forward multilayer neural networks (FFMNNs). However, it often takes a long time to converge, since it may fall into a local minimum, and this slow convergence makes training time-consuming. A suitable choice of the learning rate helps the BP algorithm escape slow convergence and reduces training time. In this paper, we derive a new adaptive learning rate for the BP algorithm, based on Aitken's process. The most distinctive feature of our approach is that computing the learning rate requires only first-order derivatives, which makes it suitable for large training sets and large networks. Its efficiency is demonstrated on standard test problems, including the heart dataset, XOR, and function approximation problems.
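The abstract does not give the derivation itself, but the Aitken Δ² process it builds on is a standard sequence-acceleration technique. As a hedged sketch (not the paper's actual learning-rate formula), the following shows the generic Δ² extrapolation applied to three successive iterates of a fixed-point iteration; the choice of iteration (x ↦ cos x) is purely illustrative:

```python
import math

def aitken_delta2(x0, x1, x2):
    """Aitken's Delta-squared extrapolation of three successive iterates."""
    denom = x2 - 2.0 * x1 + x0
    if abs(denom) < 1e-15:  # sequence has (nearly) converged; avoid division by ~0
        return x2
    return x2 - (x2 - x1) ** 2 / denom

# Illustrative demo: accelerate x_{n+1} = cos(x_n), whose fixed point
# (the Dottie number) is approximately 0.7390851.
x0 = 1.0
x1 = math.cos(x0)
x2 = math.cos(x1)
accelerated = aitken_delta2(x0, x1, x2)
print(x2, accelerated)
```

The extrapolated value lands noticeably closer to the fixed point than the plain third iterate, which is the effect an Aitken-based learning rate would exploit: estimating the limit of the weight-update sequence from first-order information alone.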