1 Introduction
Online Gradient Descent (OGD) has drawn much attention in the machine learning community Zhu and Xu (2015); Hazan and Seshadhri (2007); Hall and Willett (2015); Shalev-Shwartz (2012); Garber (2018); Bedi et al. (2018). It is widely used in applications such as online recommendation Song et al. (2008) and search ranking Moon et al. (2010). Generally, OGD is formulated as a game between a learner and an adversary. At the $t$-th round of the game, the learner submits $\mathbf{x}_t$ from the feasible set $\mathcal{X}$, and the adversary selects a function $f_t$. Then, the function $f_t$ is returned to the learner, which incurs the loss $f_t(\mathbf{x}_t)$. Recently, there has been a surge of interest in analyzing OGD by using the dynamic regret Zinkevich (2003); Mokhtari et al. (2016); Yang et al. (2016); Lei et al. (2017). The dynamic regret is usually defined as
$\mathrm{Regret}_d(T) := \sum_{t=1}^{T} f_t(\mathbf{x}_t) - \sum_{t=1}^{T} f_t(\mathbf{x}_t^*), \qquad (1)$
where $\mathbf{x}_t^* \in \operatorname{argmin}_{\mathbf{x} \in \mathcal{X}} f_t(\mathbf{x})$. Unfortunately, it is well-known that a sublinear dynamic regret bound cannot be achieved in the worst case Zinkevich (2003), because the functions may change arbitrarily in the dynamic environment. However, it is possible to upper bound the dynamic regret in terms of certain regularities of the comparator sequence. Such regularities are usually defined as the path length Mokhtari et al. (2016); Yang et al. (2016):

$P_T^* := \sum_{t=2}^{T} \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|,$
or the squared path length Zhang et al. (2017):

$S_T^* := \sum_{t=2}^{T} \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|^2.$
They capture the cumulative Euclidean norm, or squared Euclidean norm, of the difference between successive comparators. When all the functions are strongly convex and smooth, the dynamic regret is bounded by $\mathcal{O}(P_T^*)$ Mokhtari et al. (2016). When the local variations are small, $S_T^*$ is much smaller than $P_T^*$. Thus, the state-of-the-art dynamic regret of OGD is improved to $\mathcal{O}(\min\{P_T^*, S_T^*\})$ Zhang et al. (2017).
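For concreteness, the following sketch (using NumPy, with a hypothetical comparator sequence) computes the path length and the squared path length of a sequence of per-round minimizers:

```python
import numpy as np

# Hypothetical sequence of per-round minimizers x_1*, ..., x_T* (T=5, d=2).
minimizers = np.array([[0.0, 0.0],
                       [0.1, 0.0],
                       [0.1, 0.2],
                       [0.3, 0.2],
                       [0.3, 0.3]])

# Differences between successive comparators: x_t* - x_{t-1}*.
diffs = np.diff(minimizers, axis=0)
norms = np.linalg.norm(diffs, axis=1)

path_length = norms.sum()            # P_T* = sum_t ||x_t* - x_{t-1}*||
sq_path_length = (norms ** 2).sum()  # S_T* = sum_t ||x_t* - x_{t-1}*||^2

# When each local variation is below 1, S_T* is at most P_T*.
print(path_length, sq_path_length)
```

With small per-round drifts, as here, the squared path length is the smaller of the two regularities, which is exactly the regime where the $\mathcal{O}(\min\{P_T^*, S_T^*\})$ bound improves on $\mathcal{O}(P_T^*)$.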
Algo. | Obj. type | Dynamic regret | Avg. queries
Mokhtari et al. (2016) | strongly convex | $\mathcal{O}(P_T^*)$ | $1$
Zhang et al. (2017) | strongly convex | $\mathcal{O}(\min\{P_T^*, S_T^*\})$ | $\mathcal{O}(\kappa)$
Ours | strongly convex | $\mathcal{O}(\min\{P_T^*, S_T^*\})$ | $1$
However, to achieve the state-of-the-art dynamic regret, i.e., $\mathcal{O}(\min\{P_T^*, S_T^*\})$, the variant of OGD in Zhang et al. (2017) has to query $\mathcal{O}(\kappa)$ gradients at every iteration. Here, $\kappa := L/\lambda$ represents the condition number of the $L$-smooth and $\lambda$-strongly convex objective function $f_t$. For a large $\kappa$, this extremely large query complexity makes the method impractical in the online setting. In this paper, we investigate the basic online gradient descent and provide a new theoretical analysis framework. Using this framework, we show that the $\mathcal{O}(\min\{P_T^*, S_T^*\})$ dynamic regret can be achieved with only $1$ gradient query per iteration, instead of the $\mathcal{O}(\kappa)$ queries required in Zhang et al. (2017). The main theoretical results are briefly outlined in Table 1.
The improvement of the query complexity is vitally important for ill-conditioned^{1} problems Tarantola (2004); Hansen et al. (2006); Marroquin et al. (1987), whose objective functions usually have a large condition number $\kappa$. Let us take the image deblurring problem as an example Hansen et al. (2006). Suppose we have a blurred image $\mathbf{b}$, which is modeled by an unknown real image $\mathbf{x}$ and a blurring matrix $\mathbf{A}$; that is, $\mathbf{b} = \mathbf{A}\mathbf{x}$. Here, $\mathbf{A}$ is usually a nonsingular matrix with a large condition number. We want to recover the real image $\mathbf{x}$ from the blurred image $\mathbf{b}$, that is, $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$. Compared with the method in Zhang et al. (2017), our new analysis framework shows that OGD is good enough, and the required number of gradient queries can be reduced by multiple orders of magnitude.

^{1} 'Ill-conditioned' may be denoted by 'ill-posed' or 'badly posed' in some literature.
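To illustrate why deblurring is ill-conditioned, the sketch below (with an illustrative kernel width and signal size) builds a one-dimensional Gaussian blurring matrix and checks its condition number with NumPy:

```python
import numpy as np

def gaussian_blur_matrix(n, sigma=2.0):
    """Build an n x n matrix A whose rows apply a (truncated) Gaussian blur,
    so a blurred signal is b = A @ x."""
    idx = np.arange(n)
    # Entry A[i, j] weights pixel j when blurring pixel i.
    A = np.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2 * sigma ** 2))
    return A / A.sum(axis=1, keepdims=True)  # normalize each row to sum to 1

A = gaussian_blur_matrix(64)
print(np.linalg.cond(A))  # very large even for this small, mild blur
```

Even this small example yields a condition number many orders of magnitude above $1$, so any per-iteration cost that scales with $\kappa$, as in Zhang et al. (2017), quickly becomes prohibitive.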
2 Related work
2.1 Regrets of OGD in the static environment.
Online gradient descent in the static environment has been extensively investigated over the last decade. Sublinear static regrets for smooth or strongly convex functions have been obtained in many works Shalev-Shwartz (2012); Hazan (2016); Duchi et al. (2011); Zinkevich (2003). Specifically, when $f_t$ is strongly convex, the regret of online gradient descent is $\mathcal{O}(\log T)$ Hazan (2016). When $f_t$ is convex but not strongly convex, the regret of online gradient descent is $\mathcal{O}(\sqrt{T})$ Hazan (2016).
2.2 Regrets of OGD in the dynamic environment.
When all the functions are strongly convex and smooth, the dynamic regret of OGD is $\mathcal{O}(P_T^*)$ Mokhtari et al. (2016); Yang et al. (2016). If OGD queries $\mathcal{O}(\kappa)$ gradients at every iteration, the dynamic regret can be improved to $\mathcal{O}(\min\{P_T^*, S_T^*\})$ Zhang et al. (2017). However, our analysis framework shows that a single gradient query per iteration is enough to obtain the $\mathcal{O}(\min\{P_T^*, S_T^*\})$ dynamic regret. Additionally, there are other regularities, including the functional variation Zhu and Xu (2015); Besbes et al. (2015) and the gradient variation Chiang et al. (2012). These regularities measure different aspects of the variation in the dynamic environment. Since they are not directly comparable, some researchers consider bounding the dynamic regret by using a mixed regularity Jadbabaie et al. (2015). Extending our theoretical framework to different regularities is an interesting avenue for future work.
3 Preliminaries
3.1 Notations and assumptions
We use the following notation.

- Bold lowercase letters, e.g., $\mathbf{x}$, represent vectors. Normal letters, e.g., $x$, represent scalars.
- $\eta_t$ represents the learning rate of Algorithm 1 at the $t$-th iteration.
- The condition number is defined by $\kappa := L/\lambda$ for any $L$-smooth and $\lambda$-strongly convex function.
- $\|\cdot\|$ represents the Euclidean norm of a vector.
- $\Pi_{\mathcal{X}}(\cdot)$ represents the projection onto a set $\mathcal{X}$, i.e., $\Pi_{\mathcal{X}}(\mathbf{y}) := \operatorname{argmin}_{\mathbf{x} \in \mathcal{X}} \|\mathbf{x} - \mathbf{y}\|$.
- $\mathcal{X}_t^* := \operatorname{argmin}_{\mathbf{x} \in \mathcal{X}} f_t(\mathbf{x})$ represents the minimizer set at the $t$-th iteration.
- The Bregman divergence is defined by $B_f(\mathbf{x}, \mathbf{y}) := f(\mathbf{x}) - f(\mathbf{y}) - \langle \nabla f(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle$ for any differentiable function $f$.
In this paper, all functions are assumed to be convex and $L$-smooth (defined as follows).
Definition 1 ($L$-smoothness).
A function $f$ is $L$-smooth if, for any $\mathbf{x}$ and $\mathbf{y}$, we have $f(\mathbf{x}) \le f(\mathbf{y}) + \langle \nabla f(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle + \frac{L}{2} \|\mathbf{x} - \mathbf{y}\|^2$.
If the function $f$ is $L$-smooth, then according to the definition of the Bregman divergence, $B_f(\mathbf{x}, \mathbf{y}) \le \frac{L}{2} \|\mathbf{x} - \mathbf{y}\|^2$ holds for any $\mathbf{x}$ and $\mathbf{y}$. The other assumptions used in the paper are presented as follows.
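As a quick numerical sanity check (with an illustrative random quadratic, assuming the standard Bregman definition $B_f(\mathbf{x}, \mathbf{y}) = f(\mathbf{x}) - f(\mathbf{y}) - \langle \nabla f(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle$), the bound $B_f(\mathbf{x}, \mathbf{y}) \le \frac{L}{2}\|\mathbf{x} - \mathbf{y}\|^2$ can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x) = 0.5 * x^T A x with A symmetric positive definite;
# f is L-smooth with L = largest eigenvalue of A.
M = rng.standard_normal((3, 3))
A = M @ M.T + np.eye(3)
L = np.linalg.eigvalsh(A).max()

f = lambda v: 0.5 * v @ A @ v
grad = lambda v: A @ v

for _ in range(100):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    # Bregman divergence B_f(x, y) = f(x) - f(y) - <grad f(y), x - y>.
    bregman = f(x) - f(y) - grad(y) @ (x - y)
    assert bregman <= 0.5 * L * np.sum((x - y) ** 2) + 1e-9
```

For this quadratic, $B_f(\mathbf{x}, \mathbf{y}) = \frac{1}{2}(\mathbf{x}-\mathbf{y})^{\top} A (\mathbf{x}-\mathbf{y})$, so the inequality holds with $L$ equal to the largest eigenvalue of $A$.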
Assumption 1 ($\lambda$-strong convexity).
For any $t$, the function $f_t$ is $\lambda$-strongly convex. That is, for any $\mathbf{x}$ and $\mathbf{y}$, $f_t(\mathbf{x}) \ge f_t(\mathbf{y}) + \langle \nabla f_t(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle + \frac{\lambda}{2} \|\mathbf{x} - \mathbf{y}\|^2$.
Assumption 2 (Boundedness of gradients).
We assume $\|\nabla f_t(\mathbf{x})\| \le G$ for any $\mathbf{x} \in \mathcal{X}$ and any $t$.
Assumption 3 (Boundedness of the domain $\mathcal{X}$).
We assume $\|\mathbf{x} - \mathbf{y}\| \le R$ for any $\mathbf{x}, \mathbf{y} \in \mathcal{X}$.
The above assumptions, i.e., Assumptions 1–3, are basic assumptions that are widely used in previous research Shalev-Shwartz (2012); Hazan (2016); Duchi et al. (2011); Zinkevich (2003). Additionally, we make the following assumption, which allows the environment to change within a range. It is a mild assumption for many tasks such as time-series prediction Kuznetsov and Mohri (2016); Anava et al. (2013), traffic forecasting Buch et al. (2011), time-varying medical image analysis Wang et al. (2008); Lee and Shen (2009), and online recommendation Chang et al. (2017).
Assumption 4 (Boundedness of variations in the dynamic environment).
For any $t$ and $t'$, when $\mathbf{x}_t^* \in \mathcal{X}_t^*$ and $\mathbf{x}_{t'}^* \in \mathcal{X}_{t'}^*$, there exists a constant $V$ such that $\|\mathbf{x}_t^* - \mathbf{x}_{t'}^*\| \le V$.
3.2 Algorithm
Recall the OGD algorithm. At the $t$-th iteration, it submits $\mathbf{x}_t$ and receives the loss function $f_t$. Querying the gradient of $f_t$, it updates $\mathbf{x}_{t+1}$ by using the projected gradient descent method. The details are presented in Algorithm 1. Compared with the state-of-the-art method OMGD, i.e., Algorithm 2, OGD requires only one gradient query per iteration, while OMGD requires $\mathcal{O}(\kappa)$ gradient queries. When $\kappa$ is large, the query complexity of OMGD is thus much higher than that of OGD. Our new theoretical analysis framework shows that OGD is good enough to recover the state-of-the-art dynamic regret yielded by OMGD, while making only $1$ gradient query per iteration instead of the $\mathcal{O}(\kappa)$ queries required by OMGD.
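As a concrete illustration, here is a minimal sketch of the OGD update with one gradient query per round (the drifting quadratic losses, constant learning rate, and unit-ball constraint below are illustrative assumptions, not part of Algorithm 1):

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def ogd(grads, x0, etas, project=project_ball):
    """Online gradient descent: one gradient query per round.

    grads : list of callables; grads[t](x) returns the gradient of f_t at x.
    etas  : per-round learning rates eta_t.
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x]
    for grad_t, eta_t in zip(grads, etas):
        g = grad_t(x)               # the single gradient query at round t
        x = project(x - eta_t * g)  # projected gradient step
        iterates.append(x)
    return iterates

# Illustrative drifting quadratic losses f_t(x) = 0.5 * ||x - c_t||^2,
# whose minimizer c_t moves slowly across rounds.
centers = [np.array([0.1 * t, 0.0]) for t in range(1, 6)]
grads = [lambda x, c=c: x - c for c in centers]
iterates = ogd(grads, x0=[0.0, 0.0], etas=[0.5] * 5)
```

Note that each round touches $\nabla f_t$ exactly once, in contrast to OMGD, which runs multiple inner gradient steps on the same $f_t$ before moving to round $t+1$.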
4 A new theoretical analysis framework
In this section, we first provide a modular analysis framework that does not depend on assumptions on the functions. Then, equipped with the strong convexity assumption, it yields specific results.
4.1 High-level idea
Our goal is to investigate whether the basic OGD, i.e., Algorithm 1, can obtain the state-of-the-art dynamic regret, i.e., $\mathcal{O}(\min\{P_T^*, S_T^*\})$. Using a divide-and-conquer strategy, we divide the dynamic regret of OGD into two parts.

- The first part is caused by the online setting in the dynamic environment. It does not depend on the strong convexity assumption on the function $f_t$.

- The second part is due to the projected gradient descent step in Algorithm 1. It depends on the assumption on the function $f_t$, such as convexity or strong convexity.
Our first contribution is to provide an upper bound on the first part without the strong convexity assumption on $f_t$. Then, benefiting from the rich theoretical tools of static optimization, we bound the second part by using the strong convexity of $f_t$.
4.2 Meta framework
Generally, the dynamic regret of OGD is bounded as follows.
In Theorem 1, one term represents the regret due to the online setting, and the other term represents the regret due to the projected gradient descent updating step in Algorithm 1.
Remark 1.
Note that this upper bound depends on the strong convexity assumption on the function $f_t$.
Theorem 2.
Remark 2.
Note that this upper bound does not depend on the strong convexity assumption on the function $f_t$; it still holds for convex functions.
5 Improved query complexity for strongly convex functions
When all $f_t$'s are smooth and strongly convex, the dynamic regret of OGD is upper bounded by the following theorem.
Theorem 3.
Corollary 1.
Proof.
Recall Assumption 3, and we have . When , we have . Similarly, we have . Thus, we finally obtain
This completes the proof.
∎
Recall the previous method, i.e., Algorithm 2 (OMGD). Its dynamic regret has been established, and we restate it as follows.
Lemma 2 (appeared in a theorem and corollary in Zhang et al. (2017)).
Compared with Lemma 2, our new result achieves the same regret bound. However, OGD, i.e., Algorithm 1, requires only one gradient query per iteration, which does not depend on $\kappa$, and thus outperforms Algorithm 2 by significantly reducing the query complexity. The following remarks highlight the advantages of our analysis framework.
Remark 3.
Remark 4.
Our analysis framework shows that $1$ gradient query per iteration is enough to achieve the state-of-the-art dynamic regret, whereas Zhang et al. (2017) requires $\mathcal{O}(\kappa)$ gradient queries per iteration.
6 Conclusion
We provide a new theoretical analysis framework to analyze the regret and query complexity of OGD in the dynamic environment. Compared with previous work, our framework achieves the state-of-the-art dynamic regret while reducing the required number of gradient queries per iteration to $1$.
Proof of theorems.
Proof of Theorem 1:
Proof of Theorem 2:
Proof.
According to the cosine theorem, we have
(5) 
According to Lemma 1, if $f_t$ is convex and smooth, the corresponding inequality holds. Specifically, it holds when $f_t$ is strongly convex, and it holds when $f_t$ is merely convex. We thus have
Let , , and we thus have
that is, . Thus, we have
Summing up, we obtain
(6) 
holds due to letting .
Proof of Theorem 3:
Proof of lemmas.
Lemma 3.
Denote . If , we have
Proof.
Consider the following convex optimization problem
(12) 
Denote the optimum set is , that is, for any , holds.
According to the first-order optimality condition Boyd and Vandenberghe (2004), we have, for any and ,