In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent's representations during training or by serving as part of an explicit planning mechanism. However, their application in practice has been limited to simplistic environments, due to the difficulty of training such models in larger, potentially partially observed and 3D environments. In this work we introduce a novel action-conditioned generative model of such challenging environments. The model features a non-parametric spatial memory system in which we store learned, disentangled representations of the environment. Low-dimensional spatial updates are computed using a state-space model that exploits prior knowledge of the moving agent's dynamics, and high-dimensional visual observations are modelled with a variational auto-encoder. The result is a scalable architecture capable of performing coherent predictions over hundreds of time steps across a range of partially observed 2D and 3D environments.
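To make the division of labour concrete, the sketch below illustrates one minimal way the three components described above could interact during a rollout: a low-dimensional state-space transition driven by actions, a non-parametric memory keyed by that state, and an encoder/decoder pair standing in for the variational auto-encoder. All names, the nearest-neighbour read, and the linear "dynamics" are illustrative assumptions for exposition, not the paper's actual equations or implementation.

```python
import numpy as np


class SpatialMemory:
    """Non-parametric memory: stores latent codes keyed by agent position.

    Hypothetical sketch; the read here is a simple nearest-neighbour lookup.
    """

    def __init__(self):
        self.positions = []  # low-dimensional spatial states
        self.latents = []    # latent codes stored at those states

    def write(self, position, latent):
        self.positions.append(np.asarray(position, dtype=float))
        self.latents.append(np.asarray(latent, dtype=float))

    def read(self, position):
        """Return the latent stored nearest to the queried position, if any."""
        if not self.positions:
            return None
        dists = [np.linalg.norm(position - p) for p in self.positions]
        return self.latents[int(np.argmin(dists))]


def transition(state, action):
    """Placeholder state-space update: assumes the action acts as a
    displacement in the low-dimensional state space. The actual model
    encodes richer prior knowledge of the agent's movement dynamics."""
    return state + action


def rollout(actions, encode, decode, initial_state, latent_dim=8):
    """Predict an observation per action: update the spatial state, query
    the memory at that state, decode a latent into an observation, and
    write the re-encoded observation back to memory."""
    memory = SpatialMemory()
    state = np.asarray(initial_state, dtype=float)
    predictions = []
    for action in actions:
        state = transition(state, np.asarray(action, dtype=float))
        latent = memory.read(state)
        if latent is None:
            latent = np.zeros(latent_dim)  # unseen location: fall back to prior
        obs = decode(latent)
        memory.write(state, encode(obs))
        predictions.append(obs)
    return predictions


# Toy stand-ins for a trained VAE encoder/decoder (random linear maps).
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 16))
W_dec = rng.normal(size=(16, 8))

preds = rollout(
    actions=[np.array([1.0, 0.0])] * 5,
    encode=lambda obs: W_enc @ obs,
    decode=lambda z: W_dec @ z,
    initial_state=np.zeros(2),
)
print(len(preds), preds[0].shape)  # 5 predicted "observations" of dim 16
```

The key design point the sketch is meant to surface is the split in dimensionality: memory addressing and dynamics live in a tiny spatial state, while the expensive high-dimensional modelling is confined to the encoder/decoder, which is what makes long rollouts tractable.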