CosmiqWorks

SpaceNet 7

June 17, 2020 by christynz

Multi-Temporal Urban Development Challenge

Quantifying population statistics is fundamental to 67 of the 232 United Nations Sustainable Development Goal indicators, yet the World Bank estimates that more than 100 countries currently lack effective civil registration systems. The SpaceNet 7 Multi-Temporal Urban Development Challenge aims to help address this deficit and to develop novel computer vision methods for non-video time series data. In this challenge, participants will identify and track buildings in satellite imagery time series collected over rapidly urbanizing areas. The competition centers on a new open source dataset of Planet satellite imagery mosaics, which will include 24 images (one per month) covering ~100 unique geographies. The dataset will comprise 40,000 km² of imagery and exhaustive polygon labels of building footprints in the imagery, totaling over 3M individual annotations. Challenge participants will be asked to track building construction over time, thereby directly assessing urbanization.
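
The core tracking task can be sketched with a toy example. The snippet below greedily matches building footprints between two consecutive months by intersection-over-union (IoU) of their bounding boxes; unmatched detections are treated as newly constructed buildings. The box format, threshold, and greedy scheme are illustrative assumptions, not SpaceNet challenge code.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def track(prev, curr, thresh=0.25):
    """Carry building IDs from `prev` ({id: box}) onto boxes in `curr`.

    Returns {id: box} for the current month; boxes with no sufficiently
    overlapping predecessor get fresh IDs, i.e. new construction."""
    assigned, used = {}, set()
    next_id = max(prev, default=-1) + 1
    for box in curr:
        best_id, best_iou = None, thresh
        for bid, pbox in prev.items():
            score = iou(box, pbox)
            if bid not in used and score >= best_iou:
                best_id, best_iou = bid, score
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        assigned[best_id] = box
    return assigned
```

A building that shifts slightly between mosaics keeps its ID, while a footprint appearing in empty ground is flagged as new, which is exactly the urbanization signal the challenge scores.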

This Challenge has broad implications for disaster preparedness, the environment, infrastructure development, and epidemic prevention. Beyond the humanitarian applications, this competition poses a unique challenge from a computer vision standpoint because of the small pixel area of each object, the high object density within images, and the dramatic image-to-image difference compared to frame-to-frame variation in video object tracking. We believe this challenge will aid efforts to develop useful tools for overhead change detection.

SpaceNet 7 will be featured as a competition at the 2020 NeurIPS conference in December, where winning results will also be announced.

RELATED POSTS

  • The SpaceNet 7 Multi-Temporal Urban Development Challenge Algorithmic Baseline

  • The SpaceNet Change and Object Tracking (SCOT) Metric

  • The SpaceNet 7 Multi-Temporal Urban Development Challenge: Dataset Release

  • Announcing SpaceNet 7: The Multi-Temporal Urban Development Challenge

Filed Under: Archived Projects Tagged With: datasets

SpaceNet 6

November 11, 2019 by rocky

Multi-Sensor All Weather Mapping

Synthetic Aperture Radar (SAR) is a unique form of radar that can penetrate clouds, collect in all weather conditions, and capture data day and night. Overhead collects from SAR satellites could be particularly valuable for disaster response, where weather and cloud cover can obstruct traditional electro-optical sensors. However, despite these advantages, there is limited open data available to researchers to explore the effectiveness of SAR for such applications, particularly at ultra-high resolutions.

The task of SpaceNet 6 is to automatically extract building footprints with computer vision and artificial intelligence (AI) algorithms using a combination of SAR and electro-optical imagery datasets. This openly licensed dataset features a unique combination of half-meter SAR imagery from Capella Space and half-meter electro-optical (EO) imagery from Maxar’s WorldView 2 satellite. The area of interest for this challenge will be centered over the largest port in Europe: Rotterdam, the Netherlands. This area features thousands of buildings, vehicles, and boats of various sizes, making it an effective test bed for SAR and for the fusion of these two types of data.

In this challenge, the training dataset contains both SAR and EO imagery; however, the testing and scoring datasets contain only SAR data. Consequently, the EO data can be used for pre-processing the SAR data in some fashion, such as colorization, domain adaptation, or image translation, but cannot be used to directly map buildings. The dataset is structured to mimic real-world scenarios where historical EO data may be available, but concurrent EO collection with SAR is often not possible due to inconsistent orbits of the sensors, or cloud cover that would render the EO data unusable.
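
The "EO for pre-processing only" constraint can be illustrated with a deliberately trivial colorization sketch: fit a per-channel linear mapping from SAR backscatter to EO intensity on the training split (where both modalities exist), then apply it to SAR-only test imagery. Real challenge entries used learned translation models; the closed-form fit below is only to show the data flow, and the pixel values are made up.

```python
def fit_linear(x, y):
    """Least-squares fit y ≈ a*x + b over flat lists of pixel values
    (SAR backscatter x paired with one EO channel y, training split only)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    a = cov / var
    return a, my - a * mx

def colorize(sar_pixels, coeffs):
    """Map SAR-only test pixels to a pseudo-EO channel with fitted (a, b)."""
    a, b = coeffs
    return [a * p + b for p in sar_pixels]
```

At inference time only `colorize` runs, so no EO data touches the test set, which mirrors how the challenge permits EO during training but scores on SAR alone.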

RELATED POSTS

  • SpaceNet 6: Expanded Dataset Release

  • SpaceNet 6: Winning Model Release

  • SpaceNet 6: Data Fusion and Colorization

  • SpaceNet 6: Exploring Foundational Mapping at Scale

  • SpaceNet 6: Announcing the Winners

  • SpaceNet 6 Challenge Launch

  • The SpaceNet 6 Baseline

  • SAR 201: An Introduction to Synthetic Aperture Radar, Part 2

  • SAR 101: An Introduction to Synthetic Aperture Radar

  • SpaceNet 6: Dataset Release

  • Announcing SpaceNet 6: Multi-Sensor All Weather Mapping

Filed Under: Archived Projects Tagged With: datasets, models, software

RarePlanes

November 5, 2019 by rocky

Investigating the Value of Synthetic Data to Detect and Classify Aircraft

RarePlanes is a unique open-source machine learning dataset from CosmiQ Works and AI.Reverie that incorporates both real and synthetically generated satellite imagery.

The RarePlanes dataset specifically focuses on the value of AI.Reverie synthetic data to aid computer vision algorithms in their ability to automatically detect aircraft and their attributes in satellite imagery. Although other synthetic/real combination datasets exist, RarePlanes is the largest openly available very-high-resolution dataset built to test the value of synthetic data from an overhead perspective. Previous research has shown that synthetic data can reduce the amount of real training data needed and potentially improve performance for many tasks in the computer vision domain. The real portion of the dataset consists of 253 Maxar WorldView-3 satellite scenes spanning 112 locations and 2,142 km² with 14,700 hand-annotated aircraft. The accompanying synthetic dataset is generated via AI.Reverie’s novel simulation platform and features 50,000 synthetic satellite images with ~630,000 aircraft annotations. Both the real and synthetically generated aircraft feature 10 fine-grained attributes, including: aircraft length, wingspan, wing shape, wing position, wingspan class, propulsion, number of engines, number of vertical stabilizers, presence of canards, and aircraft role. Finally, we conduct extensive experiments to evaluate the real and synthetic datasets and compare performance. By doing so, we show the value of synthetic data for the task of detecting and classifying aircraft from an overhead perspective.

RarePlanes also included an experimental portion using an expanded version of the public dataset. The experiments focused on two areas:

  1. The performance tradeoffs of computer vision algorithms for detection and classification of aircraft type / model using blends of synthetic and real training data.
  2. The performance tradeoffs of computer vision algorithms for identification of rare aircraft that are infrequently observed in satellite imagery using blends of synthetic and real training data.
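
The blend-ratio experiments above can be sketched as a simple sampler: build a training set that mixes real and synthetic samples at a chosen synthetic fraction, then sweep that fraction to chart the performance tradeoff. The sample counts and sampling scheme below are illustrative assumptions, not the actual RarePlanes experiment code.

```python
import random

def blend(real, synthetic, synth_frac, total, seed=0):
    """Draw `total` training samples, `synth_frac` of them synthetic.

    `real` and `synthetic` are lists of samples (e.g. image/label pairs);
    a fixed seed keeps the draw reproducible across blend ratios."""
    rng = random.Random(seed)
    n_synth = round(total * synth_frac)
    picked = rng.sample(synthetic, n_synth) + rng.sample(real, total - n_synth)
    rng.shuffle(picked)
    return picked
```

Training the same detector on `blend(..., synth_frac=f, ...)` for several values of `f` is one way to measure how much synthetic data can substitute for scarce real annotations, which is the question both experiments pose.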

The RarePlanes blog series includes four posts on the initial experiments and a concluding post featuring the dataset release:

  1. RarePlanes — An Introduction
  2. RarePlanes — Training our Baselines and Initial Results
  3. RarePlanes — Exploring the Value of Synthetic Data: Part 1
  4. RarePlanes — Exploring the Value of Synthetic Data: Part 2
  5. RarePlanes — Dataset, Paper, and Code Release
RELATED POSTS

  • You Only Look Once — Multi-Faceted Object Detection w/ RarePlanes

  • RarePlanes — Dataset, Paper, and Code Release

  • RarePlanes — Exploring the Value of Synthetic Data: Part 2

  • RarePlanes — Exploring the Value of Synthetic Data: Part 1

  • RarePlanes – An Introduction

Filed Under: Archived Projects Tagged With: datasets, models

SpaceNet 5

August 1, 2019 by rocky

Road Network Detection, Routing Information, and Travel Time Extraction

SpaceNet accelerates research and innovation in geospatial machine learning by developing and providing publicly available commercial satellite imagery and labeled training data, as well as open sourcing computer vision algorithms and tools.

The SpaceNet 5 challenge focused on road network detection and routing information and travel time extraction. Optimized routing is crucial to a number of challenges, from humanitarian to military. Satellite imagery may aid greatly in determining efficient routes, particularly in cases of natural disasters or other dynamic events where the high revisit rate of satellites may be able to provide updates far faster than terrestrial methods.
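
Once roads have been extracted as a graph whose edges carry length and speed estimates, optimized routing reduces to a shortest-path query on travel time. The sketch below runs Dijkstra's algorithm over a toy adjacency list; the graph, units, and edge format are illustrative assumptions, not SpaceNet 5 outputs.

```python
import heapq

def travel_time(graph, src, dst):
    """Minimum travel time (hours) from src to dst via Dijkstra's algorithm.

    `graph` maps node -> iterable of (neighbor, length_km, speed_kmh)."""
    pq, seen = [(0.0, src)], set()
    while pq:
        t, node = heapq.heappop(pq)
        if node == dst:
            return t
        if node in seen:
            continue
        seen.add(node)
        for nbr, length, speed in graph.get(node, ()):
            if nbr not in seen:
                heapq.heappush(pq, (t + length / speed, nbr))
    return float("inf")  # dst unreachable
```

Routing on travel time rather than distance is the point of extracting speed attributes: a longer highway leg can beat a shorter but slower surface-street leg.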

Learn more at www.spacenet.ai

RELATED POSTS

  • The SpaceNet 5 Baseline — Part 3: Extracting Road Speed Vectors from Satellite Imagery
  • The SpaceNet 5 Baseline — Part 2: Training a Road Speed Segmentation Model
  • The SpaceNet 5 Baseline — Part 1: Imagery and Label Preparation
  • Computer Vision With OpenStreetMap and SpaceNet — A Comparison
  • SpaceNet 5 Dataset Release
  • Announcing SpaceNet 5: Road Networks and Optimized Routing

Filed Under: Archived Projects Tagged With: datasets, models

SpaceNet 4: Off-Nadir Imagery Analysis for Building Footprint Detection

April 27, 2019 by rocky

Can mapping be automated from off-nadir imagery?

In many time-sensitive applications, such as humanitarian response operations, overhead imagery is often taken “off-nadir” – that is, not from directly overhead – particularly immediately following an event or in other urgent collection contexts. Despite significant advances in using machine learning and computer vision to automate the detection of objects such as automobiles and aircraft in overhead imagery, no one had tested whether these approaches would work on off-nadir images. CosmiQ led the SpaceNet 4 Challenge, which asked participants to develop machine learning algorithms to identify buildings in images from the new SpaceNet Atlanta Off-Nadir Dataset. The dataset comprises 27 distinct images over Atlanta, GA taken during a single overhead pass of the DigitalGlobe WorldView-2 satellite. These images range from 7° off-nadir (nearly directly overhead) to 54° off-nadir (very oblique, consistent with urgent collection data) and include both north- and south-facing views. Alongside the imagery, we released building labels for the same 665 km² area covered by the imagery. Shadows, distortion, and resolution vary dramatically across these images, presenting a complete picture of the challenges posed by off-nadir imagery.
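
One reason resolution degrades with look angle can be shown with a simplified flat-earth model: the effective ground sample distance (GSD) grows roughly as 1/cos(θ) across track and 1/cos²(θ) along the look direction. This is a back-of-the-envelope approximation, not the actual WorldView-2 sensor model.

```python
import math

def effective_gsd(nadir_gsd_m, off_nadir_deg):
    """Approximate (cross_track, along_look) GSD in meters at look angle θ.

    Flat-earth model: slant range grows as 1/cos(θ), and the along-look
    footprint picks up an extra 1/cos(θ) from ground-plane foreshortening."""
    theta = math.radians(off_nadir_deg)
    return (nadir_gsd_m / math.cos(theta),
            nadir_gsd_m / math.cos(theta) ** 2)
```

Under this model a nominal half-meter pixel is barely degraded at 7° but stretches well past a meter along the look direction at 54°, consistent with the sharp accuracy drop competitors saw on the most oblique views.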

Nearly 250 competitors registered for the two-month challenge, and the winners improved baseline performance by about 40%. Once the challenge was completed, we performed a deep dive into their solutions to determine how their algorithms optimized building footprint extraction from off-nadir images, where they failed, and where future research should focus to address this difficult task. Results from these analyses can be found in our blog posts and published papers.

Learn more at www.spacenet.ai.

Related Posts

  • The good and the bad in the SpaceNet Off-Nadir Building Footprint Extraction Challenge
  • A deep dive into the SpaceNet 4 winning algorithms
  • The SpaceNet Challenge Off-Nadir Buildings: Introducing the winners
  • Challenges with SpaceNet 4 off-nadir satellite imagery: Look angle and target azimuth angle
  • A baseline model for the SpaceNet 4: Off-Nadir Building Detection Challenge
  • Introducing the SpaceNet Off-Nadir Imagery Dataset
  • SpaceNet MVOI: A Multi-View Overhead Imagery Dataset

Filed Under: Archived Projects Tagged With: datasets, models

SpaceNet 3: Road Network Detection

November 1, 2018 by christynz

Millions of kilometers of the world’s roadways remain unmapped. In fact, large efforts such as the Humanitarian OpenStreetMap Team (HOT) Missing Maps Project exist solely to map these missing areas. The SpaceNet 3 Road Detection and Routing Challenge was designed to assist the development of techniques for generating road networks from satellite imagery. The deployment of these techniques will hopefully expedite the development and publication of accurate maps.

The Challenge specifically asked participants to turn satellite imagery into usable road network vectors. For this challenge, we created a new metric, Average Path Length Similarity (APLS), for evaluating the similarity between a ground truth and a proposal road network. We also created new feature labels specifically for this challenge. The new dataset consists of 8,000 km of road centerlines with associated attributes such as road type, surface type, and number of lanes. All roads were digitized from existing SpaceNet data: 30 cm GSD WorldView-3 satellite imagery over Las Vegas, Paris, Shanghai, and Khartoum.
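
The core of the APLS idea can be sketched in a few lines: for sampled node pairs, compare shortest-path lengths in the ground-truth graph against the proposal graph and penalize the relative difference, with missing paths taking the maximum penalty of 1. The full metric also snaps nodes between the two graphs and averages both directions; this one-directional sketch over plain adjacency dicts is a simplification.

```python
import heapq

def shortest(graph, src, dst):
    """Shortest-path length via Dijkstra; graph maps node -> [(nbr, weight)].
    Returns None if dst is unreachable."""
    pq, seen = [(0.0, src)], set()
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            return d
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, ()):
            if nbr not in seen:
                heapq.heappush(pq, (d + w, nbr))
    return None

def apls(gt, prop, pairs):
    """1 minus the mean per-pair path-length penalty (1.0 is a perfect score)."""
    penalties = []
    for a, b in pairs:
        l_gt = shortest(gt, a, b)
        if l_gt is None:
            continue  # pair undefined in ground truth: skip it
        l_prop = shortest(prop, a, b)
        if l_prop is None:
            penalties.append(1.0)  # route missing in proposal: max penalty
        else:
            penalties.append(min(1.0, abs(l_gt - l_prop) / l_gt))
    return 1.0 - sum(penalties) / len(penalties)
```

Because the penalty is driven by path lengths rather than pixel overlap, a proposal that drops one short connector edge on a critical route is punished far more than one that misplaces a long edge by a few meters, which is the behavior a routing-oriented metric should have.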

The challenge was conducted from November 2017 to February 2018 and hosted on the Topcoder platform. It received 342 submissions from 33 challenge participants from the across the world. The code for the top five submissions were open sourced under the Apache 2 License on SpaceNet Github repository.

CosmiQ Works conducted this project in coordination with the other SpaceNet Partners: Radiant Solutions, Amazon Web Services, and NVIDIA.

Learn more at www.spacenet.ai.

Related Posts

  • SpaceNet Roads Extraction and Routing Challenge Solutions are Released
  • Creating Training Datasets for the SpaceNet Road Detection and Routing Challenge
  • Broad Area Satellite Imagery Semantic Segmentation (BASISS)
  • Introducing the SpaceNet Road Detection and Routing Challenge and Dataset
  • SpaceNet Road Detection and Routing Challenge Part II — APLS Implementation
  • SpaceNet Road Detection and Routing Challenge — Part I

Filed Under: Archived Projects Tagged With: datasets, models

Copyright © 2019 · IQT Labs LLC - All Rights Reserved