Systematic Street View Sampling: High-Quality Annotation of Power Infrastructure in Rural Ontario

Abstract

Google Street View and the emergence of self-driving vehicles afford an unprecedented capacity to observe our planet. Fused with dramatic advances in artificial intelligence, the capability to extract patterns and meaning from those data streams heralds an era of insights into the physical world. To draw appropriate inferences about and between environments, these data must be selected systematically to create representative and unbiased samples. To this end, we introduce the Systematic Street View Sampler (S^3) framework, enabling researchers to produce their own user-defined datasets of Street View imagery. We describe the algorithm and express its asymptotic complexity in relation to a new limiting computational resource (Google API Call Count). Using the Amazon Mechanical Turk distributed annotation environment, we demonstrate the utility of S^3 in generating high-quality representative datasets useful for machine vision applications. The S^3 algorithm is open-source and available at github.com/CU-BIC/S3, along with the high-quality dataset representing power infrastructure in rural regions of southern Ontario, Canada.
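The abstract does not reproduce the algorithm, but its core idea, systematically enumerating candidate locations where each location costs one Street View metadata lookup, can be sketched as below. The bounding box, grid spacing, and function names here are illustrative assumptions, not the authors' implementation; the grid size serves as a proxy for the Google API Call Count resource the paper analyzes.

```python
from dataclasses import dataclass

@dataclass
class BBox:
    # Hypothetical bounding box for the sampling region, in degrees.
    south: float
    west: float
    north: float
    east: float

def grid_points(bbox: BBox, step: float) -> list[tuple[float, float]]:
    """Systematically enumerate candidate sample points on a regular
    lat/lon grid. Each point would cost one Street View metadata API
    call to check for available imagery, so the grid size bounds the
    total number of API calls for the region."""
    n_lat = int(round((bbox.north - bbox.south) / step)) + 1
    n_lon = int(round((bbox.east - bbox.west) / step)) + 1
    return [
        (round(bbox.south + i * step, 6), round(bbox.west + j * step, 6))
        for i in range(n_lat)
        for j in range(n_lon)
    ]

# Example: a small region of southern Ontario at 0.1-degree spacing.
points = grid_points(BBox(43.0, -81.0, 43.5, -80.5), 0.1)
print(len(points))  # upper bound on metadata API calls for this region
```

In a full pipeline, each grid point would be passed to the Street View metadata endpoint to test for imagery before any panorama is downloaded, which is why API call count, rather than time or memory, becomes the limiting resource.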

Publication
2018 15th Conference on Computer and Robot Vision (CRV)
