CosmiqWorks
SpaceNet 6

November 11, 2019 by rocky


Multi-Sensor All-Weather Mapping


Synthetic Aperture Radar (SAR) is a unique form of radar that can penetrate clouds, collect in all weather conditions, and capture data day and night. Overhead collects from SAR satellites could be particularly valuable for disaster response, where weather and cloud cover can obstruct traditional electro-optical sensors. Despite these advantages, however, researchers have limited open data available to explore the effectiveness of SAR for such applications, particularly at ultra-high resolutions.

The task of SpaceNet 6 is to automatically extract building footprints with computer vision and artificial intelligence (AI) algorithms using a combination of SAR and electro-optical imagery. This openly licensed dataset features a unique combination of half-meter SAR imagery from Capella Space and half-meter electro-optical (EO) imagery from Maxar’s WorldView-2 satellite. The area of interest for the challenge is centered over the largest port in Europe: Rotterdam, the Netherlands. This area features thousands of buildings, vehicles, and boats of various sizes, making it an effective test bed for SAR and the fusion of these two data types.

In this challenge, the training dataset contains both SAR and EO imagery; however, the testing and scoring datasets contain only SAR data. Consequently, the EO data can be used to pre-process the SAR data in some fashion, such as colorization, domain adaptation, or image translation, but cannot be used to map buildings directly. The dataset is structured to mimic real-world scenarios in which historical EO data may be available, but concurrent EO collection with SAR is often not possible due to the sensors’ inconsistent orbits or cloud cover that renders the EO data unusable.
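
This train/test asymmetry can be sketched as a small data-loading guard. The function name, band counts, and tile sizes below are illustrative, not the challenge's actual loader:

```python
import numpy as np

def load_sample(split, sar_tile, eo_tile=None):
    """Return model inputs for one tile under the SpaceNet 6 protocol.

    Training tiles may carry a co-registered EO tile (usable e.g. as a
    colorization or domain-adaptation target); test and validation tiles
    are SAR-only by construction, so EO never reaches inference.
    """
    if split == "train":
        return {"sar": sar_tile, "eo": eo_tile}
    if split in ("val", "test"):
        return {"sar": sar_tile, "eo": None}
    raise ValueError(f"unknown split: {split}")

# Toy tiles: 4-band SAR (quad-polarization) and 3-band EO.
sar = np.zeros((4, 256, 256), dtype=np.float32)
eo = np.zeros((3, 256, 256), dtype=np.uint8)

train_sample = load_sample("train", sar, eo)
test_sample = load_sample("test", sar, eo)
```

Even if EO tiles exist on disk for the test region, the loader discards them, mirroring the scenario where concurrent EO collection is unavailable.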


RELATED POSTS

  • SpaceNet 6: Expanded Dataset Release

  • SpaceNet 6: Winning Model Release

  • SpaceNet 6: Data Fusion and Colorization

  • SpaceNet 6: Exploring Foundational Mapping at Scale

  • SpaceNet 6: Announcing the Winners

  • SpaceNet 6 Challenge Launch

  • The SpaceNet 6 Baseline

  • SAR 201: An Introduction to Synthetic Aperture Radar, Part 2

  • SAR 101: An Introduction to Synthetic Aperture Radar

  • SpaceNet 6: Dataset Release

  • Announcing SpaceNet 6: Multi-Sensor All Weather Mapping

Filed Under: Archived Projects Tagged With: datasets, models, software

RarePlanes

November 5, 2019 by rocky


Investigating the Value of Synthetic Data to Detect and Classify Aircraft


RarePlanes is a unique open-source machine learning dataset from CosmiQ Works and AI.Reverie that incorporates both real and synthetically generated satellite imagery.

The RarePlanes dataset specifically focuses on the value of AI.Reverie synthetic data in aiding computer vision algorithms’ ability to automatically detect aircraft and their attributes in satellite imagery. Although other combined synthetic/real datasets exist, RarePlanes is the largest openly available very-high-resolution dataset built to test the value of synthetic data from an overhead perspective. Previous research has shown that synthetic data can reduce the amount of real training data needed and potentially improve performance for many tasks in the computer vision domain.

The real portion of the dataset consists of 253 Maxar WorldView-3 satellite scenes spanning 112 locations and 2,142 km² with 14,700 hand-annotated aircraft. The accompanying synthetic dataset is generated via AI.Reverie’s novel simulation platform and features 50,000 synthetic satellite images with ~630,000 aircraft annotations. Both the real and synthetically generated aircraft feature 10 fine-grained attributes: aircraft length, wingspan, wing shape, wing position, wingspan class, propulsion, number of engines, number of vertical stabilizers, presence of canards, and aircraft role. Finally, we conducted extensive experiments to evaluate the real and synthetic datasets and compare performance, demonstrating the value of synthetic data for detecting and classifying aircraft from an overhead perspective.

RarePlanes also included an experimental portion using an expanded version of the public dataset. The experiments focused on two areas:

  1. The performance tradeoffs of computer vision algorithms for detection and classification of aircraft type / model using blends of synthetic and real training data.
  2. The performance tradeoffs of computer vision algorithms for identification of rare aircraft that are infrequently observed in satellite imagery using blends of synthetic and real training data.
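
A hypothetical sketch of how such synthetic/real blends might be composed. The function name and sampling scheme are illustrative, not the RarePlanes experimental code:

```python
import random

def blend_training_set(real, synthetic, synth_fraction, seed=0):
    """Compose a training set in which synthetic samples make up
    `synth_fraction` of the total, keeping all real samples."""
    if not 0.0 <= synth_fraction < 1.0:
        raise ValueError("synth_fraction must be in [0, 1)")
    n_real = len(real)
    # Solve n_synth / (n_real + n_synth) = synth_fraction for n_synth.
    n_synth = round(n_real * synth_fraction / (1.0 - synth_fraction))
    rng = random.Random(seed)  # fixed seed for reproducible experiments
    picked = rng.sample(synthetic, min(n_synth, len(synthetic)))
    return real + picked

real = [f"real_{i}" for i in range(100)]
synthetic = [f"synth_{i}" for i in range(1000)]
blended = blend_training_set(real, synthetic, synth_fraction=0.5)
# 50/50 blend: 100 real samples plus 100 sampled synthetic samples
```

Sweeping `synth_fraction` from 0 upward is one way to measure the performance tradeoff described above.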

The RarePlanes blog series includes four posts on the initial experiments and a final post featuring the dataset release:

  1. RarePlanes — An Introduction
  2. RarePlanes — Training our Baselines and Initial Results
  3. RarePlanes — Exploring the Value of Synthetic Data: Part 1
  4. RarePlanes — Exploring the Value of Synthetic Data: Part 2
  5. RarePlanes — Dataset, Paper, and Code Release
Download Dataset

Synthetic Data Example

Real Data Example

RELATED POSTS

  • You Only Look Once — Multi-Faceted Object Detection w/ RarePlanes

  • RarePlanes — Dataset, Paper, and Code Release

  • RarePlanes — Exploring the Value of Synthetic Data: Part 2

  • RarePlanes — Exploring the Value of Synthetic Data: Part 1

  • RarePlanes – An Introduction

Filed Under: Archived Projects Tagged With: datasets, models

Solaris

September 1, 2019 by rocky


An open source Python library for analyzing overhead imagery with machine learning


Performing machine learning (ML) and analyzing geospatial data are both hard problems requiring a lot of domain expertise. These limitations have historically meant that one needs to be an expert in both to perform even the most basic analyses, making advances in AI for overhead imagery difficult to achieve. We asked ourselves: is there anything we can do to reduce this barrier to entry, making it easier to apply machine learning methods to overhead imagery data? Enter Solaris, a new Python library for ML analysis of geospatial data.

Solaris builds upon SpaceNet’s previous tool suite, SpaceNetUtilities, along with several other CosmiQ projects like BASISS to provide an end-to-end pipeline for geospatial AI. Solaris provides well-documented Python APIs and simple command line tools to complete every step of a geospatial ML pipeline with ease, including:

  • Tile raw imagery and vector labels into pieces compatible with ML
  • Convert vector labels to ML-compatible pixel masks
  • Train state-of-the-art deep learning models with three lines of Python code
  • Segment objects of interest with machine learning models (including the SpaceNet winners’ models, with pre-trained weights and configs provided!)
  • Georegister predictions and convert them to standardized geospatial data formats
  • Score model performance against hand-labeled ground truth using the SpaceNet datasets
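
Solaris performs the label-to-mask step with geospatial libraries such as GDAL/rasterio; as a dependency-free illustration of the idea, here is a minimal sketch that rasterizes one footprint polygon (already in pixel coordinates, which Solaris would produce by projecting geographic labels through the image transform) using even-odd ray casting:

```python
import numpy as np

def footprint_mask(polygon, height, width):
    """Rasterize one building footprint to a binary mask by testing
    each pixel center against the polygon (even-odd ray casting)."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for row in range(height):
        for col in range(width):
            x, y = col + 0.5, row + 0.5  # pixel center
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                # Count edges crossed by a horizontal ray going right.
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            mask[row, col] = 1 if inside else 0
    return mask

# A 6x6-pixel square footprint inside a 10x10 tile.
square = [(2.0, 2.0), (8.0, 2.0), (8.0, 8.0), (2.0, 8.0)]
mask = footprint_mask(square, 10, 10)
```

The resulting mask is what a segmentation model trains against; Solaris additionally supports contact and boundary channels for separating adjacent buildings.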

Extensive documentation and tutorials are available for Solaris on the documentation page and on GitHub. The open source codebase is available under an Apache 2.0 license.

RELATED POSTS

  • Introducing the Solaris Multimodal Preprocessing Library
  • Solaris Model Deployment: From Start to Finish
  • Accelerating your geospatial deep learning pipeline with fine-tuning
  • Beyond Infrastructure Mapping — Finding Vehicles with Solaris
  • Announcing Solaris: an open source Python library for analyzing overhead imagery with machine learning

Filed Under: Archived Projects Tagged With: advisory, software

SpaceNet 5

August 1, 2019 by rocky


Road Network Detection, Routing Information, and Travel Time Extraction


SpaceNet accelerates research and innovation in geospatial machine learning by developing and providing publicly available commercial satellite imagery and labeled training data, as well as open sourcing computer vision algorithms and tools.

The SpaceNet 5 challenge focused on road network detection and the extraction of routing information and travel times. Optimized routing is crucial to a number of challenges, from humanitarian to military. Satellite imagery may aid greatly in determining efficient routes, particularly during natural disasters or other dynamic events, where the high revisit rate of satellites may provide updates far faster than terrestrial methods.
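
Once a road graph with per-edge travel times has been extracted from imagery, routing reduces to a shortest-path search. A minimal sketch (the toy graph and travel times are invented):

```python
import heapq

def shortest_travel_time(graph, start, goal):
    """Dijkstra over a road graph whose edge weights are travel times
    (seconds), e.g. derived from extracted road geometry and estimated
    per-segment speeds."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, t in graph.get(node, []):
            nd = d + t
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float("inf")  # goal unreachable

# Toy road graph: node -> [(neighbor, travel time in seconds), ...]
roads = {
    "A": [("B", 30.0), ("C", 90.0)],
    "B": [("C", 40.0), ("D", 120.0)],
    "C": [("D", 20.0)],
}
t = shortest_travel_time(roads, "A", "D")
# A -> B -> C -> D: 30 + 40 + 20 = 90 seconds
```

The quality of the extracted graph (connectivity, speed estimates) directly bounds the quality of any route computed this way, which is what the challenge metric evaluates.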

Learn more at www.spacenet.ai


RELATED POSTS

  • The SpaceNet 5 Baseline — Part 3: Extracting Road Speed Vectors from Satellite Imagery
  • The SpaceNet 5 Baseline — Part 2: Training a Road Speed Segmentation Model
  • The SpaceNet 5 Baseline — Part 1: Imagery and Label Preparation
  • Computer Vision With OpenStreetMap and SpaceNet — A Comparison
  • SpaceNet 5 Dataset Release
  • Announcing SpaceNet 5: Road Networks and Optimized Routing

Filed Under: Archived Projects Tagged With: datasets, models

Machine Learning Robustness Study

July 1, 2019 by rocky



Within the broader computer vision community, the issue of dataset size has received surprisingly little attention. Most analyses simply use all available data and focus on model architecture, with scant attention given to whether the dataset size is appropriate for the task and architecture’s complexity.

Many different variables determine the ultimate mission impact of satellite imagery, a concept CosmiQ has referred to as the Satellite Utility Manifold. Previous CosmiQ studies have explored such variables as sensor resolution (0.3 meter to 2.4 meter), super-resolution techniques, and the number of imaging bands (grayscale versus multispectral).

Expanding on this work, the Machine Learning Robustness Study focuses on the effects of training dataset size and diversity on building detection performance in the SpaceNet data. The recent availability of this extensive dataset and model-building capability makes it possible to address dependence on geography and dataset size at the leading edge of geospatial machine learning.
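
One common way to study dataset-size dependence is to fit a saturating power law, score ~ a - b * n**(-c), to scores observed at a few training sizes and extrapolate. A sketch with invented numbers (not the study's actual results):

```python
import numpy as np

# Illustrative (not measured) building-detection scores at growing
# training-set sizes n.
n = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
score = np.array([0.40, 0.48, 0.54, 0.58, 0.61])

def fit_power_law(n, score, a_grid):
    """Fit score ~ a - b * n**(-c) by grid-searching the asymptote a:
    for a fixed a, log(a - score) is linear in log(n), so each candidate
    reduces to an ordinary least-squares line fit."""
    best = None
    for a in a_grid:
        if a <= score.max():
            continue  # asymptote must exceed every observed score
        x, y = np.log(n), np.log(a - score)
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        sse = float(resid @ resid)
        if best is None or sse < best[0]:
            best = (sse, float(a), float(np.exp(intercept)), float(-slope))
    _, a, b, c = best
    return a, b, c

a, b, c = fit_power_law(n, score, np.linspace(0.62, 0.80, 181))
predicted = a - b * 3200.0 ** (-c)  # extrapolated score at n = 3200
```

This is the style of extrapolation behind "predicting the effect of more training data by using less": the fitted asymptote `a` estimates the best score more data could buy.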

RELATED POSTS

  • Predicting the Effect of More Training Data, by Using Less
  • Robustness of Limited Training Data: Part 5
  • Robustness of Limited Training Data: Part 4
  • Robustness of Limited Training Data: Part 3
  • Robustness of Limited Training Data: Part 2
  • Robustness of Limited Training Data for Building Footprint Identification: Part 1

Filed Under: Archived Projects Tagged With: advisory, models

Super-Resolution Trade Study

June 1, 2019 by rocky


Quantifying the Effects of Super-Resolution on Object Detection Performance in Satellite Imagery


At the inception of this research, the interplay between super-resolution techniques and object detection frameworks remained largely unexplored, particularly in the context of satellite or overhead imagery. Intuitively, super-resolution methods should increase object detection performance, as an increase in resolution should add more distinguishable features that an object detection algorithm can use for discrimination.

This trade study strove to answer these foundational questions:

  • Does the application of a super-resolution (SR) technique affect the ability to detect small objects in satellite imagery?
  • Across what resolutions are these SR techniques effective?
  • What is an ideal or minimum viable resolution for object detection?
  • Can one artificially double or even quadruple the native resolution of coarser imagery to make the data more useful and increase the ability to detect fine objects?

Our results showed that applying SR techniques as a pre-processing step provided a statistically significant improvement in object detection performance at the finest resolutions. For both object detection frameworks, the greatest benefit is achieved at the highest resolutions: super-resolving native 30 cm imagery to 15 cm yields a 13–36% improvement in mean average precision. Furthermore, when using YOLT, we found that enhancing imagery from 60 cm to 15 cm provides a significant boost in performance over both the native 30 cm imagery (+13%) and native 60 cm imagery (+20%).
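
A learned SR model is beyond a short sketch, but the resolution bookkeeping is simple: doubling both pixel dimensions halves the ground sample distance (e.g. 60 cm to 30 cm). A plain bilinear 2x upsample illustrates only that bookkeeping, not the detail a trained model can recover:

```python
import numpy as np

def upsample2x_bilinear(img):
    """Double both pixel dimensions of a single-band image using
    bilinear interpolation with edge replication."""
    h, w = img.shape
    out = np.empty((2 * h, 2 * w), dtype=float)
    for i in range(2 * h):
        for j in range(2 * w):
            # Map the output pixel center back into input coordinates.
            y = min(max((i + 0.5) / 2 - 0.5, 0.0), h - 1.0)
            x = min(max((j + 0.5) / 2 - 0.5, 0.0), w - 1.0)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = y - y0, x - x0
            out[i, j] = ((1 - fy) * (1 - fx) * img[y0, x0]
                         + (1 - fy) * fx * img[y0, x1]
                         + fy * (1 - fx) * img[y1, x0]
                         + fy * fx * img[y1, x1])
    return out

tile = np.arange(16, dtype=float).reshape(4, 4)  # pretend 60 cm GSD
sr = upsample2x_bilinear(tile)                   # now 30 cm GSD
```

The study's gains come from learned SR models that hallucinate plausible high-frequency detail; naive interpolation like this adds pixels but no new information, which is why learned approaches are needed at all.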

RELATED POSTS

  • Super-Resolution and Object Detection: A Love Story – Epilogue
  • Super-Resolution and Object Detection: A Love Story – Part 4
  • Super-Resolution and Object Detection: A Love Story – Part 3
  • Super-Resolution and Object Detection: A Love Story – Part 2
  • Super-Resolution and Object Detection: A Love Story – Part 1
  • The Effects of Super-Resolution on Object Detection Performance in Satellite Imagery

Filed Under: Archived Projects Tagged With: advisory, models


  • Copyright © 2019 · IQT Labs LLC - All Rights Reserved | Terms of Use | Privacy Policy
