===================================================================
Speaker: David Villacis
Title: Solving Bilevel Optimization Problems in Image Processing via Inexact Trust Region Methods
Abstract: For image denoising problems, we consider the numerical solution of a total variation model with parameter learning. The parameter learning is based on training samples and leads to a PDE-constrained optimization problem. We propose an inexact trust region algorithm and compare its performance with Newtonian approaches of the semismooth and quasi-Newton types.
===================================================================
Speaker: Mauricio Jose Junca Pelaez
Title: On de Finetti's problem under a time of ruin constraint
Abstract: We consider the classical de Finetti problem when the reserves are assumed to follow a spectrally negative Lévy process, subject to a constraint on the time of ruin. We introduce the dual problem and show that the complementary slackness condition is satisfied, so there is no duality gap. Therefore, the optimal value function can be obtained as the pointwise infimum of auxiliary value functions indexed by Lagrange multipliers. We also present a series of numerical examples. Joint work with Camilo Hernández.
===================================================================
Speaker: Philip Thompson
Title: Incremental constraint projection methods for monotone stochastic variational inequalities
Abstract: We consider stochastic variational inequalities with monotone operators defined as the expected value of a random operator. We also assume the feasible set is the intersection of a large family of convex sets. We propose a method that combines stochastic approximation with incremental constraint projections, meaning that at each iteration, a step similar to some variant of a deterministic projection method is taken after the random operator is sampled and a component of the intersection defining the feasible set is chosen at random. Such a sequential scheme is well suited for applications involving large data sets, online optimization, and distributed learning. In the first part, we assume the variational inequality is weak sharp. We provide a convergence and complexity analysis and estimate the minimum number of iterations required for any solution of a stochastic program with linear objective to solve the variational inequality. In the second part, we introduce an explicit iterative Tykhonov regularization of the method and prove its convergence requiring only monotonicity. We consider Cartesian variational inequalities so as to encompass the distributed solution of monotone stochastic Nash games or multi-agent optimization problems under limited coordination between agents.
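
As a concrete illustration of the iteration described in the last abstract, the following is a minimal Python/NumPy sketch of one possible incremental constraint projection loop, assuming an affine monotone operator observed with additive Gaussian noise and a feasible set given by half-spaces, whose Euclidean projections have closed form. The operator, noise model, step-size rule, and the helper project_halfspace are illustrative assumptions, not the speaker's method.

    import numpy as np

    def project_halfspace(x, a, b):
        # Closed-form Euclidean projection of x onto the half-space {y : a @ y <= b}.
        violation = a @ x - b
        if violation <= 0.0:
            return x
        return x - (violation / (a @ a)) * a

    rng = np.random.default_rng(0)
    n, m = 5, 20

    # Affine operator F(x) = A @ x + c with A positive semidefinite, so F is
    # monotone. The method only ever sees noisy samples F(x) + xi.
    G = rng.standard_normal((n, n))
    A = G @ G.T
    c = rng.standard_normal(n)

    # Feasible set: intersection of m half-spaces {x : a_i @ x <= b_i},
    # chosen so that the origin is strictly feasible.
    a_rows = rng.standard_normal((m, n))
    b_vals = rng.random(m) + 0.5

    L = np.linalg.norm(A, 2)   # spectral norm, used to scale the steps
    x = rng.standard_normal(n)
    for k in range(1, 20001):
        xi = 0.1 * rng.standard_normal(n)   # sample of the random operator noise
        i = rng.integers(m)                 # random component of the intersection
        alpha = 1.0 / (L * k**0.75)         # diminishing step size (illustrative)
        x = project_halfspace(x - alpha * (A @ x + c + xi), a_rows[i], b_vals[i])

    print("approximate solution:", x)
    print("max constraint violation:", np.max(a_rows @ x - b_vals))

Each iteration touches only one sampled operator evaluation and one constraint of the intersection, which is what makes such sequential schemes attractive for the large data sets, online optimization, and distributed settings mentioned in the abstract.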