A major benefit of intelligent and autonomous vehicles is their ability to navigate hazardous environments that pose significant danger to humans. In such environments, damage to vehicle sensors is often unavoidable. To address this threat to vehicle function, we propose a system that improves robustness by leveraging information from alternative sensors to restore navigation capabilities when a primary sensor fails. The system employs image translation methods that allow the vehicle to synthesize the primary camera's view from images captured by an auxiliary camera. In this work, we present a conditional Generative Adversarial Network (cGAN) based method for view translation, coupled with a Residual Neural Network (ResNet) for imitation learning. We evaluate our approach in the CARLA simulator and demonstrate its ability to restore navigation capabilities to the vehicle by generating a front-view image from a left-camera view.
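To make the described pipeline concrete, the sketch below shows one plausible shape for the two components named above: a pix2pix-style cGAN generator/discriminator pair that translates a left-camera image into a front view, and a ResNet imitation policy that produces a control command from the synthesized view. All class names, layer sizes, and the single steering output are illustrative assumptions under a standard PyTorch setup, not the authors' implementation.

```python
# Minimal sketch of the described pipeline, assuming a pix2pix-style cGAN
# for left-to-front view translation feeding a ResNet imitation policy.
# Module names, layer sizes, and the scalar steering head are illustrative
# assumptions, not the authors' code.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ViewTranslator(nn.Module):
    """Toy encoder-decoder generator: left-camera image -> synthesized front view."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, left_img):
        return self.net(left_img)

class ConditionalDiscriminator(nn.Module):
    """PatchGAN-style critic conditioned on the source (left) view."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),
        )

    def forward(self, left_img, front_img):
        # Conditioning via channel-wise concatenation of source and target views.
        return self.net(torch.cat([left_img, front_img], dim=1))

class ImitationPolicy(nn.Module):
    """ResNet backbone regressing a control command (e.g., steering) from the front view."""
    def __init__(self):
        super().__init__()
        self.backbone = resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, front_img):
        return self.backbone(front_img)

if __name__ == "__main__":
    left = torch.randn(1, 3, 128, 128)   # stand-in for a left-camera frame
    G, D, policy = ViewTranslator(), ConditionalDiscriminator(), ImitationPolicy()
    fake_front = G(left)                  # synthesize the failed front view
    realism = D(left, fake_front)         # cGAN critic score (used during training)
    steering = policy(fake_front)         # drive from the reconstructed view
    print(fake_front.shape, realism.shape, steering.shape)
```

At inference time only the generator and the policy are needed: when the front camera fails, the left-camera frame is translated and passed to the policy in place of the missing view, while the discriminator serves purely as a training signal for the adversarial objective.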