We consider a general formulation of the Principal-Agent problem from Contract Theory on a finite horizon. We show how to reduce this non-zero-sum Stackelberg stochastic differential game to a stochastic control problem which may be analyzed by the standard tools of control theory. In particular, the Agent's value function appears naturally as a controlled state variable for the Principal's problem. Our argument relies on the Backward Stochastic Differential Equations approach to non-Markovian stochastic control, and, more specifically, on its most recent extension to the second-order case.
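For orientation, the reduction can be sketched schematically as follows; the notation here is illustrative and not taken verbatim from the body of the paper. Writing $X$ for the output process whose dynamics the Agent controls and $H$ for the Agent's Hamiltonian, one restricts attention to contracts of the form $\xi = Y_T^{Y_0,Z,\Gamma}$, where
\[
Y_t^{Y_0,Z,\Gamma} \;=\; Y_0 \;-\; \int_0^t H\big(s, X, Y_s, Z_s, \Gamma_s\big)\,\mathrm{d}s \;+\; \int_0^t Z_s\,\mathrm{d}X_s \;+\; \tfrac12 \int_0^t \Gamma_s\,\mathrm{d}\langle X\rangle_s ,
\]
so that $Y$ plays the role of the Agent's continuation value. Under this parameterization, the Principal faces a standard (possibly non-Markovian) stochastic control problem with state $(X, Y)$, controls $(Z, \Gamma)$, and the participation constraint $Y_0 \ge R$, where $R$ denotes the Agent's reservation utility.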