SpaceNet 4: Off-Nadir Imagery Analysis for Building Footprint Detection
Can mapping be automated from off-nadir imagery?
In many time-sensitive applications, such as humanitarian response operations, overhead imagery is often collected “off-nadir” – that is, not from directly overhead – particularly immediately following an event or in other urgent collection contexts. Despite significant advances in using machine learning and computer vision to automate detection of objects such as automobiles and aircraft in overhead imagery, no one had tested whether these approaches would work on off-nadir images. CosmiQ led the SpaceNet 4 Challenge, which asked participants to develop machine learning algorithms to identify buildings in images from the new SpaceNet Atlanta Off-Nadir Dataset. The dataset comprises 27 distinct images over Atlanta, GA, taken during a single overhead pass of the DigitalGlobe WorldView-2 satellite. These images range from 7° (nearly directly overhead) to 54° off-nadir (very off-angle and consistent with urgent collection data) and include both north- and south-facing views. Alongside the imagery, we released building labels for the 665 km² area covered by the imagery. Shadows, distortion, and resolution vary dramatically across these images, presenting a complete picture of the challenges posed by off-nadir imagery.
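Analyses of a dataset like this typically group the collects into coarse look-angle bins so that performance can be compared across viewing geometries. The sketch below illustrates one way to do that; the bin boundaries (25° and 40°) and the function name are illustrative assumptions, not taken from the text above, which only states the 7°–54° range.

```python
# Hypothetical sketch: grouping collects by off-nadir look angle.
# The 25-degree and 40-degree cutoffs are assumed for illustration; the text
# only specifies that the 27 images span 7 to 54 degrees off-nadir.

def angle_bin(nadir_angle_deg: float) -> str:
    """Assign an off-nadir look angle to a coarse difficulty bin."""
    if nadir_angle_deg <= 25:
        return "nadir"
    elif nadir_angle_deg <= 40:
        return "off-nadir"
    return "very off-nadir"

# Example angles spanning the dataset's stated range.
for angle in (7, 30, 54):
    print(angle, angle_bin(angle))
```

Binning like this makes it easy to report a separate building-detection score per viewing-geometry regime rather than a single aggregate number that hides the degradation at high angles.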
Nearly 250 competitors registered for the two-month challenge, and the winning solutions improved on baseline performance by about 40%. After the challenge concluded, we performed a deep dive into those solutions to determine how their algorithms optimized building footprint extraction from off-nadir images, where they failed, and where future research should focus to address this difficult task. Results from these analyses can be found in our blog posts and published papers.
Learn more at www.spacenet.ai.