For safe navigation through an environment, autonomous ground vehicles rely on sensors such as cameras, LiDAR, and radar to detect and classify obstacles and impassable terrain. These sensors provide data representing the 3D space surrounding the vehicle. This data is often obscured by dust, precipitation, intervening objects, or terrain, producing gaps in the sensor field of view. These gaps, or occlusions, can indicate the presence of obstacles, negative obstacles such as ditches or holes, or rough terrain. Because the sensors receive no returns from occluded regions, the data provides no explicit information about what those regions contain. To give the navigation system a more complete model of the environment, information about the occlusions must be inferred from the sensor data. In this paper we present a probabilistic method for mapping point cloud occlusions in real time and show how knowledge of these occlusions can be integrated into an autonomous vehicle's obstacle detection and avoidance system.
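To make the notion of an occlusion map concrete, the following is a minimal illustrative sketch, not the method developed in this paper: rays are cast from the sensor origin through a 2D grid, cells in front of each ray's nearest return are labeled free, the return itself occupied, and cells beyond it occluded. The function name `occlusion_grid`, the single-scan 2D setting, and the grid parameters are all simplifying assumptions for illustration; a real system would work in 3D and fuse scans over time.

```python
import numpy as np

FREE, OCCUPIED, OCCLUDED, UNKNOWN = 0, 1, 2, 3

def occlusion_grid(points, origin, size=20, cell=1.0, max_range=20.0, n_rays=360):
    """Label a 2D grid as free, occupied, or occluded from one planar scan.

    points: (N, 2) array of sensor returns; origin: (2,) sensor position,
    both in the same frame, with the grid covering [0, size*cell) on each axis.
    Illustrative sketch only (single 2D scan, no probabilistic fusion).
    """
    grid = np.full((size, size), UNKNOWN, dtype=np.uint8)

    # 1) Rasterize returns as occupied cells first, so rays never overwrite them.
    for p in points:
        i, j = int(p[0] // cell), int(p[1] // cell)
        if 0 <= i < size and 0 <= j < size:
            grid[i, j] = OCCUPIED

    # 2) Nearest return range per angular bin (simulating one planar LiDAR sweep).
    d = points - origin
    ranges = np.hypot(d[:, 0], d[:, 1])
    bins = (((np.arctan2(d[:, 1], d[:, 0]) + np.pi) / (2 * np.pi)) * n_rays).astype(int) % n_rays
    nearest = np.full(n_rays, np.inf)
    np.minimum.at(nearest, bins, ranges)

    # 3) Walk each ray outward: before the nearest hit the cell was observed
    #    free; beyond it the sensor saw nothing, so the cell is occluded.
    for b in range(n_rays):
        theta = (b + 0.5) * 2 * np.pi / n_rays - np.pi
        step = np.array([np.cos(theta), np.sin(theta)])
        for r in np.arange(0.0, max_range, cell / 2):
            c = origin + r * step
            i, j = int(c[0] // cell), int(c[1] // cell)
            if not (0 <= i < size and 0 <= j < size):
                break
            if grid[i, j] == UNKNOWN:
                grid[i, j] = FREE if r < nearest[b] else OCCLUDED
    return grid
```

With a single return 5 m ahead of the sensor, cells between the sensor and the return come out free, the return cell occupied, and cells behind it occluded, which is exactly the information gap the navigation system must reason about.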