

SIMRDWN

May 27, 2019 by rocky

Satellite Imagery Multiscale Rapid Detection with Windowed Networks (SIMRDWN) Repository


Rapid detection of small objects over large areas remains one of the principal drivers of interest in satellite imagery analytics. This project sought to build on our previous work with the You Only Look Twice (YOLT) algorithm, which modified YOLO to rapidly analyze images of arbitrary size and improved performance on small, densely packed objects.

Since YOLO is just one of many advanced object detection frameworks, and algorithms such as SSD, Faster R-CNN, and R-FCN also merit investigation for geospatial applications, CosmiQ developed the Satellite Imagery Multiscale Rapid Detection with Windowed Networks (SIMRDWN) framework. SIMRDWN (pronounced "simmer down") combined the scalable code base of YOLT with the TensorFlow Object Detection API, allowing end users to select from a wide array of architectures for bounding box detection of objects in overhead imagery.
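
The core windowed pattern is straightforward: slice a large image into overlapping windows, run a detector on each window, shift the resulting boxes back into global pixel coordinates, and merge duplicates. The following is a minimal sketch of that pattern, not SIMRDWN's actual API; the detect_fn callback, window size, and overlap value are illustrative assumptions.

# Minimal sketch of windowed detection over a large image. The detector
# callback (detect_fn) and the window parameters are illustrative
# placeholders, not SIMRDWN's actual API.
import numpy as np

def windowed_detect(image, detect_fn, win=416, overlap=0.2):
    """Slide a window over a large image, run a detector on each chip,
    and shift detections back into global pixel coordinates.
    (Edge windows are omitted for brevity.)"""
    step = int(win * (1 - overlap))
    h, w = image.shape[:2]
    detections = []  # each: (x0, y0, x1, y1, score, class_id)
    for y in range(0, max(h - win, 0) + 1, step):
        for x in range(0, max(w - win, 0) + 1, step):
            chip = image[y:y + win, x:x + win]
            for (bx0, by0, bx1, by1, score, cls) in detect_fn(chip):
                detections.append((bx0 + x, by0 + y, bx1 + x, by1 + y, score, cls))
    return non_max_suppression(detections, iou_thresh=0.5)

def non_max_suppression(dets, iou_thresh=0.5):
    """Greedy, class-agnostic NMS to merge duplicate boxes produced
    where adjacent windows overlap."""
    dets = sorted(dets, key=lambda d: d[4], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d, k) < iou_thresh for k in kept):
            kept.append(d)
    return kept

def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1, ...) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-9)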

Related Posts

  • SIMRDWN: Adapting Multiple Object Detection Frameworks for Satellite Imagery Applications
  • Giving SIMRDWN a Spin, Part II
  • Giving SIMRDWN a Spin, Part I
  • Satellite Imagery Multiscale Rapid Detection with Windowed Networks

GITHUB

Filed Under: Archived Projects Tagged With: models, software

SpaceNet 4

April 27, 2019 by rocky

SpaceNet 4: Off-Nadir Imagery Analysis for Building Footprint Detection


Can mapping be automated from off-nadir imagery?

In many time-sensitive applications, such as humanitarian response operations, overhead imagery is often taken "off-nadir" (that is, not from directly overhead), particularly immediately following an event or in other urgent collection contexts. Despite significant advances in using machine learning and computer vision to automate detection of objects such as automobiles and aircraft in overhead imagery, no one had tested whether these approaches would work on off-nadir images. CosmiQ led the SpaceNet 4 Challenge, which asked participants to develop machine learning algorithms to identify buildings in images from the new SpaceNet Atlanta Off-Nadir Dataset. The dataset comprises 27 distinct images of Atlanta, GA taken during a single overhead pass of the DigitalGlobe WorldView-2 satellite. These images range from 7° off-nadir (nearly directly overhead) to 54° off-nadir (sharply angled, consistent with urgent collection data) and include both north- and south-facing views. Alongside the imagery, we released building labels for the 665 km² area covered by the imagery. Shadows, distortion, and resolution vary dramatically across these images, presenting a complete picture of the challenges posed by off-nadir imagery.

Nearly 250 competitors registered for the two-month challenge, and the winners improved baseline performance by about 40%. Once the challenge was completed, we performed a deep dive into their solutions to determine how their algorithms optimized building footprint extraction from off-nadir images, where they failed, and where future research should focus to address this difficult task. Results from these analyses can be found in our blog posts and published papers.

Learn more at www.spacenet.ai.

Related Posts

  • The good and the bad in the SpaceNet Off-Nadir Building Footprint Extraction Challenge
  • A deep dive into the SpaceNet 4 winning algorithms
  • The SpaceNet Challenge Off-Nadir Buildings: Introducing the winners
  • Challenges with SpaceNet 4 off-nadir satellite imagery: Look angle and target azimuth angle
  • A baseline model for the SpaceNet 4: Off-Nadir Building Detection Challenge
  • Introducing the SpaceNet Off-Nadir Imagery Dataset
  • SpaceNet MVOI: A Multi-View Overhead Imagery Dataset
GITHUB

Filed Under: Archived Projects Tagged With: datasets, models

You Only Look Twice

January 27, 2019 by rocky

You Only Look Twice (YOLT)


The You Only Look Twice (YOLT) object detection pipeline was designed to address some of the shortcomings identified in classic object detection techniques. The pipeline was based on the You Only Look Once (YOLO) framework and dramatically improved performance of object detection at varying scales over legacy techniques. To tailor the framework for remote sensing data sets such as satellite imagery, YOLT makes three major modifications (a brief sketch of the second follows the list):

  1. Upsampling imagery and scanning it with a sliding window to find small, densely packed objects
  2. Augmenting training data with re-scalings and rotations
  3. Defining a new network architecture such that the final convolutional layer has a denser prediction grid
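
As an illustration of the second modification, the sketch below augments an overhead training chip with a random rotation and re-scaling. It uses OpenCV, and the parameter ranges are assumptions rather than values from the YOLT paper; in practice the bounding box labels must be transformed with the same affine.

# Illustrative sketch of modification 2: augmenting overhead training
# chips with random re-scalings and rotations. Parameter ranges are
# assumptions, not the values used in the YOLT paper.
import cv2
import numpy as np

def augment_chip(chip, rng=None):
    rng = rng or np.random.default_rng()
    # Random rotation: overhead objects have no canonical "up",
    # so any angle yields a plausible training example.
    angle = rng.uniform(0, 360)
    h, w = chip.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(chip, M, (w, h))
    # Random re-scaling simulates varying ground sample distance.
    scale = rng.uniform(0.8, 1.2)
    return cv2.resize(rotated, None, fx=scale, fy=scale)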

In late 2017, we released YOLT version 2, which incorporated a number of improvements over the original paper. These enhancements significantly improved accuracy while maintaining a speed advantage over alternatives such as Faster R-CNN and SSD.

Related Posts

  • You Only Look Twice (Part II) — Vehicle and Infrastructure Detection in Satellite Imagery
  • You Only Look Twice — Multi-Scale Object Detection in Satellite Imagery With Convolutional Neural Networks (Part I)
  • Building Extraction with YOLT2 and SpaceNet Data
  • Car Localization and Counting with Overhead Imagery, an Interactive Exploration
  • YOLT arXiv Paper and Code Release
  • Car Detection Over Large Areas With YOLT and Zanzibar Open Aerial Imagery
GITHUB

Filed Under: Archived Projects Tagged With: models

Multispectral Imagery (MSI) & Deep Learning Analysis

December 27, 2018 by rocky

Multispectral Imagery (MSI) & Deep Learning Analysis


This project explored the utility of visible and near infrared (VNIR) multispectral imagery (MSI) both for training algorithms to artificially generate spectral information and for deep learning object detection. Initially, two leading object detection algorithms were adapted to analyze multispectral data. A performance comparison using these algorithms on grayscale, 3-band RGB, and 8-band multispectral imagery indicated a performance advantage for 3-band imagery over grayscale imagery. This finding motivated the study of methods to artificially generate color images from grayscale.

[Figure: sample SpaceNet cutout. Left: 3-band image. Right: ground truth building labels.]

To this end, a generative adversarial network (GAN) architecture was developed to artificially generate 3-band images from grayscale imagery and 8-band images from 3-band imagery. While the GAN recovered a majority of the multispectral information in the test images, some areas and objects were reconstructed with higher accuracy than others. To gauge the utility of GAN colorization, the performance of two leading object detection algorithms, Multi-task Network Cascades (MNC) and You Only Look Twice (YOLT), was tested on grayscale imagery, real 3-band imagery, and artificially generated 3-band imagery. While the best performance was achieved with real 3-band images, algorithm performance was significantly better on the artificial 3-band images than on the grayscale images. These initial results encourage future research in this area; in particular, the GAN might be an effective preprocessing step for imagery collected in bandwidth-constrained environments.
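
To make the setup concrete, here is a minimal conditional-GAN training step in PyTorch for the grayscale-to-3-band case (1 input channel, 3 output channels). The layer stack, the L1 loss weight, and the learning rates are illustrative assumptions, not CosmiQ's published architecture.

# Minimal conditional-GAN sketch for grayscale-to-RGB colorization
# (1 input band, 3 output bands). Layer sizes and loss weights are
# illustrative assumptions, not CosmiQ's published architecture.
import torch
import torch.nn as nn

generator = nn.Sequential(                 # grayscale chip -> 3-band chip
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(             # (gray + color) pair -> real/fake map
    nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(gray, rgb):  # gray: (N,1,H,W), rgb: (N,3,H,W), values in [-1, 1]
    fake = generator(gray)
    # Discriminator: separate real pairs from generated pairs.
    d_real = discriminator(torch.cat([gray, rgb], dim=1))
    d_fake = discriminator(torch.cat([gray, fake.detach()], dim=1))
    d_loss = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: fool the discriminator while staying close to ground truth.
    g_fake = discriminator(torch.cat([gray, fake], dim=1))
    g_loss = adv_loss(g_fake, torch.ones_like(g_fake)) + 100.0 * l1_loss(fake, rgb)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()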

Related Posts

  • Panchromatic to Multispectral: Object Detection Performance as a Function of Imaging Bands
  • Artificial Colorization of Grayscale Satellite Imagery via GANs: Part 1
  • Artificial Colorization of Grayscale Satellite Imagery via GANs: Part 2
  • Artificial “Multispectralization” of Color Satellite Imagery via GANs

Filed Under: Archived Projects Tagged With: models

SpaceNet 3

November 1, 2018 by christynz

SpaceNet 3: Road Network Detection


Millions of kilometers of the world's roadways remain unmapped. In fact, there are large organizations, such as the Humanitarian OpenStreetMap Team (HOT) Missing Maps Project, whose entire goal is to map missing areas. The SpaceNet 3 Road Detection and Routing Challenge was designed to assist the development of techniques for generating road networks from satellite imagery. Deploying these techniques should expedite the development and publication of accurate maps.

The challenge specifically asked participants to turn satellite imagery into usable road network vectors. For this challenge, we created a new metric, Average Path Length Similarity (APLS), for evaluating the similarity between ground truth and proposal road networks. We also created new feature labels specifically for this challenge. The new dataset consists of 8,000 km of road centerlines with associated attributes such as road type, surface type, and number of lanes. All roads were digitized from existing SpaceNet data: 30 cm GSD WorldView-3 satellite imagery over Las Vegas, Paris, Shanghai, and Khartoum.
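
The heart of APLS is a comparison of shortest-path lengths between corresponding node pairs in the ground truth and proposal graphs. The sketch below shows that core term using networkx; the full metric also injects and snaps control nodes onto both graphs and symmetrizes the score over both directions, steps omitted here, and the assumption that node IDs already correspond across the two graphs is a simplification.

# Core of the APLS idea, sketched with networkx. Assumes edges carry a
# "length" attribute and that node IDs already correspond between the
# two graphs; the official metric handles node snapping and symmetry.
import networkx as nx

def apls_term(gt, prop, node_pairs):
    """1 minus the mean path-length difference, clipped to [0, 1] per
    pair. A route missing from the proposal scores the full penalty."""
    diffs = []
    for a, b in node_pairs:
        try:
            l_gt = nx.shortest_path_length(gt, a, b, weight="length")
        except nx.NetworkXNoPath:
            continue  # undefined in ground truth: skip the pair
        if l_gt == 0:
            continue  # degenerate pair (a == b)
        try:
            l_prop = nx.shortest_path_length(prop, a, b, weight="length")
            diffs.append(min(1.0, abs(l_gt - l_prop) / l_gt))
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            diffs.append(1.0)  # route absent in proposal: full penalty
    return (1.0 - sum(diffs) / len(diffs)) if diffs else 0.0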

The challenge ran from November 2017 to February 2018 and was hosted on the Topcoder platform. It received 342 submissions from 33 participants from across the world. The code for the top five submissions was open sourced under the Apache 2.0 License in the SpaceNet GitHub repository.

CosmiQ Works conducted this project in coordination with the other SpaceNet Partners: Radiant Solutions, Amazon Web Services, and NVIDIA.

Learn more at www.spacenet.ai.

Related Posts

  • SpaceNet Roads Extraction and Routing Challenge Solutions are Released
  • Creating Training Datasets for the SpaceNet Road Detection and Routing Challenge
  • Broad Area Satellite Imagery Semantic Segmentation (BASISS)
  • Introducing the SpaceNet Road Detection and Routing Challenge and Dataset
  • SpaceNet Road Detection and Routing Challenge Part II — APLS Implementation
  • SpaceNet Road Detection and Routing Challenge — Part I
GITHUB

Filed Under: Archived Projects Tagged With: datasets, models

SpaceNet 2

October 27, 2018 by rocky

SpaceNet 2: Building footprint detection in geographically diverse settings


Building on the results and lessons learned from SpaceNet 1, CosmiQ and the SpaceNet Partners launched a second public data science challenge, again focused on automated building footprint extraction from high resolution satellite imagery. While it used the same evaluation metric as the previous challenge, SpaceNet 2 featured an expanded dataset covering four new cities: Paris, France; Shanghai, China; Khartoum, Sudan; and Las Vegas, Nevada. The imagery was higher resolution than SpaceNet 1's: 30 cm ground sample distance collected by DigitalGlobe's WorldView-3 satellite. The new dataset contained over 300,000 manually curated building footprint features across the four cities. The challenge ran from July 2017 to August 2017 and was hosted on the Topcoder platform. The code for the top three submissions was open sourced under the Apache 2.0 License in the SpaceNet GitHub repository.
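
That evaluation metric counts a proposal polygon as a true positive when it overlaps an unmatched ground truth footprint with an intersection-over-union of at least 0.5, then reports the F1 score over all proposals. The sketch below, using shapely, shows the idea with a greedy matcher; this is a simplification of the official SpaceNet scorer.

# Sketch of the SpaceNet building-footprint metric: IoU >= 0.5 matching
# followed by an F1 score. Greedy first-match assignment here is a
# simplification of the official scorer.
from shapely.geometry import Polygon

def footprint_f1(proposals, truths, iou_thresh=0.5):
    matched, tp = set(), 0
    for p in proposals:
        for i, t in enumerate(truths):
            if i in matched:
                continue  # each truth footprint can match only once
            inter = p.intersection(t).area
            union = p.union(t).area
            if union > 0 and inter / union >= iou_thresh:
                matched.add(i)
                tp += 1
                break
    fp, fn = len(proposals) - tp, len(truths) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0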

CosmiQ Works conducted this project in coordination with the other SpaceNet Partners: Radiant Solutions, Amazon Web Services, and NVIDIA.

Learn more at www.spacenet.ai.


Related Posts

  • 2nd SpaceNet Competition Winners Code Release
  • SpaceNet Labels To Pascal VOC SBD Benchmark Release Labels
GITHUB

Filed Under: Archived Projects Tagged With: datasets, models
