BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//5.981//EN
X-WR-TIMEZONE:Asia/Jerusalem
BEGIN:VEVENT
UID:69@web.iem.technion.ac.il
DTSTART;TZID=Asia/Jerusalem:20210502T143000
DTEND;TZID=Asia/Jerusalem:20210502T153000
DTSTAMP:20210418T064518Z
URL:https://web.iem.technion.ac.il/site/he/iemevents/bilevel-optimization-
problems-methodology-and-first-order-methods-for-solving-convex-non-smooth
-and-non-strongly-convex-problems/
SUMMARY:Bilevel optimization problems - methodology and first-order met
 hods for solving convex non-smooth and non-strongly convex problems.
DESCRIPTION:By: M.Sc. Lior Doron\nAdvisor: Dr. Shimrit Shtern\nWhere: Z
 oom\nFrom: Technion\nAbstract:\nSimple bilevel problems are optimizatio
 n problems in which we want to find an optimal solution to an inner pro
 blem that minimizes an outer objective function. Such problems appear i
 n many machine learning and signal processing applications as a way to el
 iminate undesirable solutions. However\, since these problems do not sat
 isfy regularity conditions\, they are often hard to solve exactly and ar
 e usually solved via regularization (e.g.\, LASSO and ridge regression). I
 n the past few years\, several algorithms were proposed to solve these b
 ilevel problems directly and provide a rate for obtaining feasibility\, a
 ssuming that the outer function is strongly convex. In our work\, we sugg
 est a new approach designed for bilevel problems with simple outer functi
 ons\, such as the l1 norm\, which are not required to be either smooth o
 r strongly convex. In our new Iterative Approximation and Level-set Expa
 nsion (ITALEX) approach\, we alternate between expanding the level-set o
 f the outer function and approximately optimizing the inner function ove
 r this level-set. ITALEX guarantees that at each iteration of the algori
 thm the outer objective function is super-optimal\, a property not kno
 wn for other bilevel algorithms. We show that optimizing the inner func
 tion through first-order schemes such as the proximal gradient and gene
 ralized conditional gradient methods results in a feasibility convergen
 ce rate of O(1/k)\, a rate previously shown only for smooth\, strongly co
 nvex outer functions. Under more restrictive settings\, which still ho
 ld for important cases (such as Basis Pursuit)\, we obtain a convergenc
 e rate of O(1/k^2).\n\nZoom Link\n\nhttps://technion.zoom.us/j/38005416
 16
CATEGORIES:Graduate Student Seminar,Seminars
END:VEVENT
BEGIN:VTIMEZONE
TZID:Asia/Jerusalem
X-LIC-LOCATION:Asia/Jerusalem
BEGIN:DAYLIGHT
DTSTART:20210326T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0300
TZNAME:IDT
END:DAYLIGHT
END:VTIMEZONE
END:VCALENDAR