CosmiqWorks

Archived Projects

SpaceNet 7

June 17, 2020 by christynz


Multi-Temporal Urban Development Challenge


Quantifying population statistics is fundamental to 67 of the 232 United Nations Sustainable Development Goal indicators, but the World Bank estimates that more than 100 countries currently lack effective civil registration systems. The SpaceNet 7 Multi-Temporal Urban Development Challenge aims to help address this deficit and to develop novel computer vision methods for non-video time series data. In this challenge, participants will identify and track buildings in satellite imagery time series collected over rapidly urbanizing areas. The competition centers on a new open source dataset of Planet satellite imagery mosaics, which will include 24 images (one per month) covering ~100 unique geographies. The dataset will comprise 40,000 km² of imagery and exhaustive polygon labels of the building footprints in it, totaling over 3 million individual annotations. Challenge participants will be asked to track building construction over time, thereby directly assessing urbanization.

This challenge has broad implications for disaster preparedness, the environment, infrastructure development, and epidemic prevention. Beyond the humanitarian applications, the competition poses a unique challenge from a computer vision standpoint because of the small pixel area of each object, the high object density within images, and the dramatic image-to-image differences, which far exceed the frame-to-frame variation seen in video object tracking. We believe this challenge will aid efforts to develop useful tools for overhead change detection.
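To make the tracking task concrete, here is a minimal sketch of assigning persistent IDs to buildings across two time steps by greedy IoU matching. This is not the official SCOT metric, and it uses hypothetical axis-aligned boxes in place of real footprint polygons:

```python
def iou(a, b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def track(prev, curr, thresh=0.25):
    """Greedily match current footprints to previous ones by IoU.

    prev: dict {track_id: box}; curr: list of boxes.
    Returns {track_id: box} with persistent IDs; unmatched current
    boxes get fresh IDs (new construction), and previous IDs that
    find no match simply drop out.
    """
    next_id = max(prev, default=-1) + 1
    out, used = {}, set()
    for box in curr:
        best_id, best_iou = None, thresh
        for tid, pbox in prev.items():
            if tid in used:
                continue
            score = iou(box, pbox)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        out[best_id] = box
    return out

# Month 1: two buildings; month 2: building 0 shifts slightly,
# building 1 is demolished, and one new building appears.
prev = {0: (0, 0, 10, 10), 1: (20, 20, 30, 30)}
curr = [(1, 1, 11, 11), (50, 50, 60, 60)]
tracks = track(prev, curr)
```

A real scorer would also have to handle polygon geometry and many-to-many splits and merges; this sketch only shows the core identity-over-time idea.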

SpaceNet 7 will be featured as a competition at the 2020 NeurIPS conference in December, where winning results will also be announced.


RELATED POSTS

  • The SpaceNet 7 Multi-Temporal Urban Development Challenge Algorithmic Baseline

  • The SpaceNet Change and Object Tracking (SCOT) Metric

  • The SpaceNet 7 Multi-Temporal Urban Development Challenge: Dataset Release

  • Announcing SpaceNet 7: The Multi-Temporal Urban Development Challenge

Filed Under: Archived Projects Tagged With: datasets

CRESI

January 18, 2020 by christynz

City-Scale Road Extraction from Satellite Imagery (CRESI)

Rapidly extracts large scale road networks and identifies speed limits and route travel times for each roadway


Optimized routing is crucial to a number of challenges, from humanitarian to military. Satellite imagery may aid greatly in determining efficient routes, particularly in cases involving natural disasters or other dynamic events, where the high revisit rate of satellites may be able to provide updates far more quickly than terrestrial methods. Existing data collection methods such as manual road labeling or aggregation of mobile GPS tracks are currently insufficient to properly capture either underserved regions (due to infrequent data collection) or the dynamic changes inherent to road networks in rapidly changing environments.

Our City-Scale Road Extraction from Satellite Imagery (CRESI) algorithm served as the baseline for SpaceNet 5; it rapidly extracts large-scale road networks and identifies speed limits and route travel times for each roadway. Including estimates for travel time permits true optimal routing (rather than just the shortest geographic distance), which is not possible with existing methods based on remote sensing imagery.
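To illustrate why travel-time estimates matter, here is a small sketch (a toy four-node graph, not CRESI's actual pipeline) in which the shortest-distance route and the fastest route through a road network differ:

```python
import heapq

def dijkstra(graph, src, dst, weight):
    """Shortest path by the chosen edge attribute.

    graph: {node: [(neighbor, {'km': ..., 'minutes': ...}), ...]}
    Returns (path, total_cost) for the given weight key.
    """
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, attrs in graph.get(u, []):
            nd = d + attrs[weight]
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy network: a short but slow surface street vs. a longer, faster road.
graph = {
    'A': [('B', {'km': 2.0, 'minutes': 6.0}),
          ('C', {'km': 5.0, 'minutes': 4.0})],
    'B': [('D', {'km': 2.0, 'minutes': 6.0})],
    'C': [('D', {'km': 5.0, 'minutes': 4.0})],
}
shortest, km = dijkstra(graph, 'A', 'D', 'km')        # A-B-D, 4 km
fastest, mins = dijkstra(graph, 'A', 'D', 'minutes')  # A-C-D, 8 min
```

The two queries return different routes: minimizing distance picks the slow streets, while minimizing the speed-derived travel time picks the longer, faster road, which is the behavior the CRESI speed estimates enable.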

Our code is publicly available at github.com/CosmiQ/cresi.

RELATED POSTS

  • City-Scale Road Extraction from Satellite Imagery v2: Road Speeds and Travel Times
  • Road Network and Travel Time Extraction from Multiple Look Angles with SpaceNet Data
  • Time-optimized Evacuation Scenarios Via Satellite Imagery
  • Computer Vision With OpenStreetMap and SpaceNet — A Comparison
  • Inferring Route Travel Times with SpaceNet
  • Extracting Road Networks at Scale with SpaceNet

Filed Under: Archived Projects Tagged With: models, software

SpaceNet 6

November 11, 2019 by rocky


Multi-Sensor All-Weather Mapping


Synthetic Aperture Radar (SAR) is a unique form of radar that can penetrate clouds, collect in all weather conditions, and capture data day and night. Overhead collects from SAR satellites could be particularly valuable for disaster response, where weather and cloud cover can obstruct traditional electro-optical sensors. Despite these advantages, however, there is limited open data available to researchers to explore the effectiveness of SAR for such applications, particularly at ultra-high resolutions.

The task of SpaceNet 6 is to automatically extract building footprints with computer vision and artificial intelligence (AI) algorithms using a combination of SAR and electro-optical imagery datasets. This openly licensed dataset features a unique combination of half-meter SAR imagery from Capella Space and half-meter electro-optical (EO) imagery from Maxar's WorldView-2 satellite. The area of interest for this challenge is centered over the largest port in Europe: Rotterdam, the Netherlands. The area features thousands of buildings, vehicles, and boats of various sizes, making it an effective test bed for SAR and for fusing these two data types.

In this challenge, the training dataset contains both SAR and EO imagery; however, the testing and scoring datasets contain only SAR data. Consequently, the EO data can be used to pre-process the SAR data in some fashion (such as colorization, domain adaptation, or image translation) but cannot be used to directly map buildings. The dataset is structured to mimic real-world scenarios in which historical EO data may be available but concurrent EO collection with SAR is often not possible, due to inconsistent sensor orbits or cloud cover that renders the EO data unusable.
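As an illustration of this pre-processing idea, the sketch below fits a simple per-pixel affine map from SAR backscatter to EO intensity on paired training pixels, then applies it to test tiles where only SAR exists. This is a deliberately simplified stand-in for the colorization and domain-adaptation models competitors actually use, and the numbers are invented:

```python
def fit_affine(sar, eo):
    """Least-squares fit of eo ≈ a*sar + b over paired training pixels."""
    n = len(sar)
    ms, me = sum(sar) / n, sum(eo) / n
    cov = sum((s - ms) * (e - me) for s, e in zip(sar, eo))
    var = sum((s - ms) ** 2 for s in sar)
    a = cov / var
    return a, me - a * ms

# Training tiles: paired SAR backscatter and EO intensity samples.
train_sar = [0.1, 0.2, 0.4, 0.8]
train_eo = [30.0, 40.0, 60.0, 100.0]  # follows eo = 100*sar + 20
a, b = fit_affine(train_sar, train_eo)

# Test time: only SAR is available. We may "colorize" it with the
# mapping learned from historical EO, but never feed EO directly.
test_sar = [0.3, 0.5]
colorized = [a * s + b for s in test_sar]  # ≈ [50.0, 70.0]
```

The key constraint the dataset enforces is visible in the last two lines: EO influences the model only through parameters learned at training time, while inference consumes SAR alone.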


RELATED POSTS

  • SpaceNet 6: Expanded Dataset Release

  • SpaceNet 6: Winning Model Release

  • SpaceNet 6: Data Fusion and Colorization

  • SpaceNet 6: Exploring Foundational Mapping at Scale

  • SpaceNet 6: Announcing the Winners

  • SpaceNet 6 Challenge Launch

  • The SpaceNet 6 Baseline

  • SAR 201: An Introduction to Synthetic Aperture Radar, Part 2

  • SAR 101: An Introduction to Synthetic Aperture Radar

  • SpaceNet 6: Dataset Release

  • Announcing SpaceNet 6: Multi-Sensor All Weather Mapping

Filed Under: Archived Projects Tagged With: datasets, models, software

RarePlanes

November 5, 2019 by rocky


Investigating the Value of Synthetic Data to Detect and Classify Aircraft


RarePlanes is a unique open-source machine learning dataset from CosmiQ Works and AI.Reverie that incorporates both real and synthetically generated satellite imagery.

The RarePlanes dataset specifically focuses on the value of AI.Reverie synthetic data in aiding computer vision algorithms to automatically detect aircraft and their attributes in satellite imagery. Although other synthetic/real combination datasets exist, RarePlanes is the largest openly available very-high-resolution dataset built to test the value of synthetic data from an overhead perspective. Previous research has shown that synthetic data can reduce the amount of real training data needed and potentially improve performance for many computer vision tasks.

The real portion of the dataset consists of 253 Maxar WorldView-3 satellite scenes spanning 112 locations and 2,142 km², with 14,700 hand-annotated aircraft. The accompanying synthetic dataset is generated via AI.Reverie's novel simulation platform and features 50,000 synthetic satellite images with ~630,000 aircraft annotations. Both the real and synthetically generated aircraft feature 10 fine-grained attributes: aircraft length, wingspan, wing shape, wing position, wingspan class, propulsion, number of engines, number of vertical stabilizers, presence of canards, and aircraft role.

Finally, we conducted extensive experiments to evaluate the real and synthetic datasets and compare performance. By doing so, we show the value of synthetic data for the task of detecting and classifying aircraft from an overhead perspective.

RarePlanes also included an experimental portion using an expanded version of the public dataset. The experiments focused on two areas:

  1. The performance tradeoffs of computer vision algorithms for detection and classification of aircraft type / model using blends of synthetic and real training data.
  2. The performance tradeoffs of computer vision algorithms for identification of rare aircraft that are infrequently observed in satellite imagery using blends of synthetic and real training data.
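A minimal sketch of how such a synthetic/real blend might be composed is below. The sampling scheme and the `synth_fraction` parameter are illustrative, not the experiments' actual protocol:

```python
import random

def blend(real, synthetic, synth_fraction, size, seed=0):
    """Compose a training set with a fixed fraction of synthetic samples.

    Samples with replacement so any fraction is reachable even when the
    synthetic pool dwarfs the real one (e.g. 50,000 synthetic images vs.
    253 real scenes in RarePlanes).
    """
    rng = random.Random(seed)
    n_synth = round(size * synth_fraction)
    batch = ([rng.choice(synthetic) for _ in range(n_synth)] +
             [rng.choice(real) for _ in range(size - n_synth)])
    rng.shuffle(batch)  # interleave so batches are not all one domain
    return batch

# Stand-in samples tagged by domain; a real pipeline would hold image paths.
real = [('real', i) for i in range(253)]
synthetic = [('synth', i) for i in range(50000)]
train = blend(real, synthetic, synth_fraction=0.75, size=1000)
```

Sweeping `synth_fraction` from 0 to 1 while holding `size` fixed is one simple way to measure the performance tradeoff the two experiment areas above describe.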

The RarePlanes blog series includes four posts on the initial experiments and a final post featuring the dataset release:

  1. RarePlanes — An Introduction
  2. RarePlanes — Training our Baselines and Initial Results
  3. RarePlanes — Exploring the Value of Synthetic Data: Part 1
  4. RarePlanes — Exploring the Value of Synthetic Data: Part 2
  5. RarePlanes — Dataset, Paper, and Code Release
Download Dataset

Synthetic Data Example

Real Data Example

RELATED POSTS

  • You Only Look Once — Multi-Faceted Object Detection w/ RarePlanes

  • RarePlanes — Dataset, Paper, and Code Release

  • RarePlanes — Exploring the Value of Synthetic Data: Part 2

  • RarePlanes — Exploring the Value of Synthetic Data: Part 1

  • RarePlanes – An Introduction

Filed Under: Archived Projects Tagged With: datasets, models

Solaris

September 1, 2019 by rocky


An open source Python library for analyzing overhead imagery with machine learning


Performing machine learning (ML) and analyzing geospatial data are both hard problems requiring a lot of domain expertise. These limitations have historically meant that one needs to be an expert in both to perform even the most basic analyses, making advances in AI for overhead imagery difficult to achieve. We asked ourselves: is there anything we can do to reduce this barrier to entry, making it easier to apply machine learning methods to overhead imagery data? Enter Solaris, a new Python library for ML analysis of geospatial data.

Solaris builds upon SpaceNet’s previous tool suite, SpaceNetUtilities, along with several other CosmiQ projects like BASISS to provide an end-to-end pipeline for geospatial AI. Solaris provides well-documented Python APIs and simple command line tools to complete every step of a geospatial ML pipeline with ease, including:

  • Tile raw imagery and vector labels into pieces compatible with ML
  • Convert vector labels to ML-compatible pixel masks
  • Train state-of-the-art deep learning models with three lines of Python code
  • Segment objects of interest with machine learning models (including the SpaceNet winners’ models, with pre-trained weights and configs provided!)
  • Georegister predictions and convert them to standardized geospatial data formats
  • Score model performance against hand-labeled ground truth using the SpaceNet datasets
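As a sketch of the vector-label-to-pixel-mask step in such a pipeline, the following rasterizes polygon footprints into a binary mask by testing each pixel center. This is a from-scratch stand-in to show the idea, not Solaris's actual API, and it ignores georeferencing:

```python
def point_in_poly(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edges the horizontal ray from (x, y) crosses to its right.
        if (y1 > y) != (y2 > y):
            xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xcross:
                inside = not inside
    return inside

def footprint_mask(polygons, width, height):
    """Burn polygon footprints into a binary pixel mask.

    Each pixel is tested at its center, mirroring what a raster
    label-mask step in a geospatial ML pipeline produces.
    """
    return [[1 if any(point_in_poly(c + 0.5, r + 0.5, p) for p in polygons)
             else 0
             for c in range(width)]
            for r in range(height)]

# One 2x2 building footprint in a 4x4 image tile.
mask = footprint_mask([[(1, 1), (3, 1), (3, 3), (1, 3)]], 4, 4)
```

Production tools perform this step on georeferenced rasters (and handle holes, overlaps, and edge weighting), but the output is the same kind of mask a segmentation model trains against.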

Extensive documentation and tutorials are available for Solaris on the documentation page and on GitHub. The open source codebase is available under an Apache 2.0 license.

RELATED POSTS

  • Introducing the Solaris Multimodal Preprocessing Library

  • Solaris Model Deployment: From Start to Finish

  • Accelerating your geospatial deep learning pipeline with fine-tuning
  • Beyond Infrastructure Mapping — Finding Vehicles with Solaris
  • Announcing Solaris: an open source Python library for analyzing overhead imagery with machine learning

Filed Under: Archived Projects Tagged With: advisory, software

SpaceNet 5

August 1, 2019 by rocky


Road Network Detection, Routing Information, and Travel Time Extraction


SpaceNet accelerates research and innovation in geospatial machine learning by providing publicly available commercial satellite imagery and labeled training data, and by open-sourcing computer vision algorithms and tools.

The SpaceNet 5 challenge focused on road network detection and routing information and travel time extraction. Optimized routing is crucial to a number of challenges, from humanitarian to military. Satellite imagery may aid greatly in determining efficient routes, particularly in cases of natural disasters or other dynamic events where the high revisit rate of satellites may be able to provide updates far faster than terrestrial methods.

Learn more at www.spacenet.ai


RELATED POSTS

  • The SpaceNet 5 Baseline — Part 3: Extracting Road Speed Vectors from Satellite Imagery
  • The SpaceNet 5 Baseline — Part 2: Training a Road Speed Segmentation Model
  • The SpaceNet 5 Baseline — Part 1: Imagery and Label Preparation
  • Computer Vision With OpenStreetMap and SpaceNet — A Comparison
  • SpaceNet 5 Dataset Release
  • Announcing SpaceNet 5: Road Networks and Optimized Routing

Filed Under: Archived Projects Tagged With: datasets, models


  • Copyright © 2019 · IQT Labs LLC - All Rights Reserved | Terms of Use | Privacy Policy
