Awareness of the surroundings is strongly influenced by acoustic cues. This is relevant both to the implementation of safety strategies on board electric and hybrid vehicles and to the development of acoustic camouflage for military vehicles. These two areas of research have clearly opposite goals: developers of electric vehicles aim to add the minimum amount of exterior noise that will make the EV acoustically noticeable to a blind or distracted pedestrian, while developers of military vehicles seek hardware configurations with the minimum likelihood of acoustic detection. The common theme is understanding what makes a vehicle noticeable based on the noise it generates and the environment in which it is immersed. Traditional approaches based on differences in overall level and/or one-third-octave-band spectra are too simplistic to represent complex scenarios such as urban scenes with multiple sources in the soundscape and significant reverberation and diffraction effects. This paper will show that the signal processing techniques required to map acoustic perception must provide more resolution than overall level or one-third-octave-band spectra, and that the temporal pattern of a sound should also be considered.
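To make the baseline metrics concrete, the sketch below computes the two traditional descriptors the text refers to, overall level and one-third-octave-band levels, for a synthetic two-tone signal. This is a minimal illustration, not the paper's method: the FFT-bin summation, the 100 Hz to 5 kHz band range, and the reference pressure of 1.0 are illustrative assumptions, and note that both descriptors discard exactly the temporal pattern the paper argues is needed.

```python
import numpy as np

def overall_level_db(x):
    """Overall level: 10*log10 of the mean-square signal (re 1.0)."""
    return 10 * np.log10(np.mean(x ** 2))

def third_octave_levels(x, fs, fc_lo=100.0, fc_hi=5000.0):
    """Approximate one-third-octave band levels by summing FFT power
    between each band's edge frequencies (fc * 2**(+/-1/6))."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    # Normalise so that summing all bins reproduces the mean-square value
    power = (np.abs(np.fft.rfft(x)) ** 2) / n ** 2
    power[1:] *= 2  # fold in the negative-frequency half
    centres, levels = [], []
    fc = fc_lo
    while fc <= fc_hi:
        lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)
        band_power = power[(freqs >= lo) & (freqs < hi)].sum()
        centres.append(fc)
        levels.append(10 * np.log10(band_power + 1e-30))
        fc *= 2 ** (1 / 3)  # step to the next third-octave centre
    return np.array(centres), np.array(levels)

if __name__ == "__main__":
    fs = 44100
    t = np.arange(fs) / fs  # one second of signal
    # A 1 kHz tone plus a weaker 3.15 kHz tone
    x = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.sin(2 * np.pi * 3150 * t)
    print(f"overall level: {overall_level_db(x):.1f} dB")
    fc, L = third_octave_levels(x, fs)
    print(f"loudest third-octave band centred near {fc[np.argmax(L)]:.0f} Hz")
```

A pulsed tone and a continuous tone with the same energy would produce identical values for both descriptors here, which is one way to see why these metrics alone cannot capture detectability in a real soundscape.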