Let (X,Y) be a random couple with distribution P, where X is an observation and Y is a binary label to be predicted. In practice, the distribution P remains unknown, but the learning algorithm has access to a training data set, i.e., a sample from P. It often happens that the cost of obtaining the training data is dominated by labeling the observations, while the pool of unlabeled observations itself is almost unlimited. This suggests measuring the performance of a learning algorithm in terms of its label complexity, the number of labels required to obtain a classifier of desired accuracy. Active Learning theory explores the possible advantages of this modified framework. We present a new active learning algorithm based on nonparametric estimators of the regression function and explain its main improvements over previous work. Our investigation provides upper and minimax lower bounds for the performance of the proposed method over a broad class of underlying distributions.