We will talk about two topics at the intersection of optimization and probability, which involve, on the one hand, introducing a probabilistic aspect into optimization and, on the other, exploiting the optimization aspects of probability.

In the first topic we consider affine variational inequalities -- deterministic problems that generalize convex optimization, Nash games, and many other problems. We present a method for obtaining fast approximate solutions to these problems by deliberately randomizing the given problem. Specifically, we suitably project these problems from a higher-dimensional space to a lower-dimensional space and use the lower-dimensional problem to obtain an approximate solution to the higher-dimensional one. The approach exploits the theory of concentration inequalities (specifically, the Johnson-Lindenstrauss lemma) and compressed sensing, within the constraints imposed by the geometry of the problem, to provide high-quality guarantees on the approximation.

In the second part we consider team problems -- stochastic multiagent optimization problems in which each agent has access to different partial information about a random source. These problems capture a vast variety of settings spanning decentralized control, information theory, and team decision theory. Our contribution here is to cast these as deterministic optimization problems on spaces of distributions and to use optimization-based ideas to bound, and in some cases solve, these problems.

Joint work with Todd Coleman (UCSD), Bharat Prabhakar, and Sharu Theresa Jose.
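
To fix notation for the first topic (the abstract itself does not spell this out), an affine variational inequality AVI(K, M, q) over a closed convex set K in R^n asks for a point x* in K satisfying

\[ (Mx^* + q)^\top (x - x^*) \;\ge\; 0 \qquad \text{for all } x \in K. \]

When M is symmetric positive semidefinite this is exactly the first-order optimality condition of the convex quadratic program \(\min_{x \in K} \tfrac{1}{2}x^\top M x + q^\top x\), and stacking the players' first-order conditions in a game with quadratic payoffs yields the same form; this is the sense in which AVIs generalize convex optimization and Nash games.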
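A minimal sketch of the projection idea, under the simplest instantiation one could imagine: restrict the AVI to a random low-dimensional subspace (Johnson-Lindenstrauss style), solve the small problem, and lift the solution back. The talk's actual method and its guarantees may differ; all names, the ball constraint set, and the parameter choices below are illustrative assumptions, not taken from the abstract.

    import numpy as np

    def solve_vi_ball(F, r, dim, gamma=0.05, iters=2000):
        # Projected fixed-point iteration for VI(F, {||z|| <= r}).
        # Converges for strongly monotone, Lipschitz F with small gamma.
        z = np.zeros(dim)
        for _ in range(iters):
            z_new = z - gamma * F(z)
            nrm = np.linalg.norm(z_new)
            if nrm > r:
                z_new *= r / nrm      # project onto the Euclidean ball
            z = z_new
        return z

    rng = np.random.default_rng(0)
    n, k, r = 500, 40, 1.0

    # A strongly monotone affine operator F(x) = M x + q on R^n.
    A = rng.standard_normal((n, n)) / np.sqrt(n)
    M = A @ A.T + np.eye(n)           # positive definite => strongly monotone
    q = rng.standard_normal(n)

    # Random subspace: orthonormalized Gaussian P in R^{n x k}, so the
    # constraint ||P z|| <= r on the lifted point is simply ||z|| <= r.
    P, _ = np.linalg.qr(rng.standard_normal((n, k)))

    # Projected (low-dimensional) affine VI: F_k(z) = P^T (M P z + q).
    z_star = solve_vi_ball(lambda z: P.T @ (M @ (P @ z) + q), r, k)
    x_hat = P @ z_star                # lift back to R^n

    # Full n-dimensional solution, for comparison with the lifted one.
    x_star = solve_vi_ball(lambda x: M @ x + q, r, n)
    print("relative error:",
          np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star))

How good the lifted solution is depends on the interplay between the random subspace and the problem's geometry; exploiting structure (e.g., via compressed-sensing arguments) to control this error is precisely where the guarantees described in the talk come in.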
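For the second topic, a schematic of the lift to distribution space, with notation assumed here rather than taken from the abstract. A static team problem asks N agents, where agent i observes y_i and acts through a policy u_i = \gamma_i(y_i), to jointly minimize an expected cost:

\[ \min_{\gamma_1,\dots,\gamma_N} \ \mathbb{E}\bigl[\, c\bigl(\xi, \gamma_1(y_1), \dots, \gamma_N(y_N)\bigr) \,\bigr]. \]

This can be rewritten as a deterministic optimization over joint distributions Q, with the informational constraints encoded as a factorization:

\[ \min_{Q} \ \int c(\xi, u)\, \mathrm{d}Q(\xi, y, u) \quad \text{s.t.} \quad \mathrm{d}Q(\xi, y, u) \;=\; \mathrm{d}P(\xi, y) \, \prod_{i=1}^{N} \mathrm{d}Q_i(u_i \mid y_i), \]

where P is the given distribution of the source and the observations. The objective is linear in Q; the difficulty sits entirely in the constraint set, whose product structure makes it nonconvex, and it is on such deterministic reformulations that the optimization-based bounds operate.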