Unveiling Insights from "Gradient Descent Converges Linearly for Logistic Regression on Separable Data"

Location
Building 1, Level 0, Room 0118

Abstract

In this presentation, I will present the paper "Gradient Descent Converges Linearly for Logistic Regression on Separable Data", a work closely related to my ongoing research. I will explore its relevance to my current research topic and discuss how it inspires our future work.


Abstract of the paper: 
We show that running gradient descent with variable learning rate guarantees loss f(x) ≤ 1.1·f(x*) + ε for the logistic regression objective, where the error ε decays exponentially with the number of iterations and polynomially with the magnitude of the entries of an arbitrary fixed solution x*. This is in contrast to the common intuition that the absence of strong convexity precludes linear convergence of first-order methods, and highlights the importance of variable learning rates for gradient descent. We also apply our ideas to sparse logistic regression, where they lead to an exponential improvement of the sparsity-error tradeoff.
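To give a concrete feel for the variable-learning-rate idea, the sketch below runs gradient descent on the logistic loss with a step size that grows as the loss shrinks, here η_t = c / f(w_t). This schedule and the constant c are illustrative assumptions for the demo, not necessarily the exact schedule analyzed in the paper; the data and all function names are hypothetical.

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def logistic_loss(X, y, w):
    """Average logistic loss f(w) = mean(log(1 + exp(-y * Xw))), labels y in {-1, +1}."""
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins))

def logistic_grad(X, y, w):
    """Gradient of the average logistic loss."""
    margins = y * (X @ w)
    coeffs = -y * expit(-margins)        # d/dm log(1 + exp(-m)) = -sigmoid(-m)
    return X.T @ coeffs / X.shape[0]

def gd_variable_lr(X, y, iters=500, c=0.25):
    """Gradient descent with a loss-dependent step size eta_t = c / f(w_t).
    Illustrative schedule only; on separable data the step grows as the loss decays."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        f = logistic_loss(X, y, w)
        g = logistic_grad(X, y, w)
        w -= (c / f) * g                 # variable learning rate
    return w

# Toy linearly separable data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5))
w = gd_variable_lr(X, y)
print("final loss:", logistic_loss(X, y, w))
```

On separable data the loss can be driven toward zero, so a fixed step size tied to the initial smoothness becomes overly conservative; letting the step scale inversely with the current loss is one simple way to illustrate why a variable learning rate helps.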

Brief Biography

Bang An is a Ph.D. student in AMCS, CEMSE, KAUST, under the supervision of Prof. Jinchao Xu.

Contact Person