We present a model of learning in which agents learn from their errors. When an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, under mild assumptions, an agent who keeps a memory of his errors asymptotically reaches an acceptable solution. Moreover, large errors can be exploited to speed up learning.
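The abstract does not specify the action space or how the size of the rejected neighborhood depends on the error, so the following Python sketch is only an illustration of the mechanism, not the paper's model. It assumes actions in [0, 1], an "acceptable" action within a tolerance of a hypothetical target, and a rejected neighborhood whose width grows with the size of the error, which is one way large errors can prune the search space faster. All names (learn_by_rejecting_errors, target, tolerance, radius) are hypothetical.

```python
import random


def learn_by_rejecting_errors(target, tolerance=0.05, radius=0.1,
                              max_trials=1000, seed=0):
    """Illustrative sketch: search [0, 1] for an acceptable action.

    An action is an 'error' when it misses `target` by more than
    `tolerance`.  Each error removes a neighborhood of the tried action
    from the candidate set (the agent's memory of errors); larger
    errors remove larger neighborhoods.
    """
    rng = random.Random(seed)
    candidates = [(0.0, 1.0)]          # memory: intervals not yet rejected

    for trial in range(1, max_trials + 1):
        # Draw an action uniformly from the surviving intervals.
        lengths = [hi - lo for lo, hi in candidates]
        lo, hi = rng.choices(candidates, weights=lengths)[0]
        action = rng.uniform(lo, hi)

        error = abs(action - target)
        if error <= tolerance:
            return action, trial        # acceptable solution reached

        # Reject a neighborhood whose half-width grows with the error.
        cut_lo, cut_hi = action - radius * error, action + radius * error
        candidates = [piece
                      for x, y in candidates
                      for piece in ((x, min(y, cut_lo)), (max(x, cut_hi), y))
                      if piece[1] - piece[0] > 1e-9]
        if not candidates:              # numerical corner case: give up
            break

    return None, max_trials


# Example run with a hypothetical acceptable region around 0.7.
action, trials = learn_by_rejecting_errors(target=0.7)
print(action, trials)
```

In this sketch the rejected half-width (radius * error) is strictly smaller than the error itself, so the acceptable region is never pruned away and the candidate set shrinks toward it, mirroring the asymptotic convergence claimed in the abstract.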
Partial support by the Ministerio de Ciencia y Tecnología, project BEC 2002-00642, and by the Comissionat per Universitats i Recerca de la Generalitat de Catalunya, Grant SGR2001-00162, is gratefully acknowledged.