One of the key limitations of Molecular Dynamics (MD) simulations is the computational intractability of sampling protein conformational landscapes for systems with large sizes or over long timescales. To overcome this bottleneck, we present the REinforcement learning based Adaptive samPling (REAP) algorithm, which aims to sample a landscape faster than conventional simulation methods by identifying the reaction coordinates that are most relevant for sampling the system. To achieve this, the algorithm uses concepts from the field of reinforcement learning (a subset of machine learning): it rewards sampling along important degrees of freedom and disregards others that do not facilitate exploration or exploitation. We demonstrate the effectiveness of REAP by comparing its sampling performance with that of long continuous MD simulations and least-counts adaptive sampling on two model landscapes (L-shaped and circular). We also demonstrate that the algorithm extends to more realistic systems such as alanine dipeptide and Src kinase. In all four systems, the REAP algorithm outperforms the conventional single-long-trajectory simulation approach, consistently discovering more conformational states as a function of simulation time.
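The core reward-weighting idea described above can be illustrated with a minimal Python sketch. Here a candidate state's reward is taken to be a weighted, standardized deviation of its reaction coordinates from the mean of all states discovered so far, and the weights are tuned by a simple bounded search; the function names, the exact reward form, and the grid search are illustrative assumptions rather than the paper's precise implementation.

```python
import numpy as np

def reap_reward(states, weights):
    """Reward of each candidate state: weighted, standardized deviation of
    its reaction-coordinate values from the mean of all states seen so far.
    states: (n_states, n_coords) array; weights: (n_coords,) array."""
    mu = states.mean(axis=0)
    sigma = states.std(axis=0) + 1e-12  # guard against zero variance
    return (weights * np.abs(states - mu) / sigma).sum(axis=1)

def update_weights(states, weights, delta=0.05, n_grid=21):
    """Search small perturbations of each weight (within +/- delta,
    renormalized to sum to 1) for the set maximizing total reward."""
    best_w = weights
    best_r = reap_reward(states, weights).sum()
    for i in range(len(weights)):
        for step in np.linspace(-delta, delta, n_grid):
            w = weights.copy()
            w[i] = max(w[i] + step, 0.0)
            w = w / w.sum()
            r = reap_reward(states, w).sum()
            if r > best_r:
                best_w, best_r = w, r
    return best_w
```

In an adaptive-sampling loop, new simulation rounds would then be restarted from the highest-reward states, so that coordinates along which the landscape is actually expanding accumulate larger weights over successive rounds.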