What is the best way to validate mapping when ground-truth data collection is not possible because the study area is logistically difficult and potentially dangerous?
Without an absolute true value (ground truth), we can use officially released or widely acknowledged data as near-truth candidates and compare our observations against them.
Without either ground truth or a candidate truth, if we only have several datasets of roughly equivalent precision, we can still use all of them in a statistical consistency/correlation analysis.
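A minimal sketch of such a consistency check, assuming the candidate datasets have already been resampled to a common grid and a harmonized class legend (the file names and the nodata convention below are placeholders):

```python
import numpy as np
import rasterio  # assumes the maps are co-registered single-band GeoTIFFs

# Hypothetical file names: your classification plus two independent reference products.
paths = ["my_classification.tif", "reference_product_a.tif", "reference_product_b.tif"]
maps = [rasterio.open(p).read(1) for p in paths]

# Pairwise per-pixel agreement (fraction of valid pixels assigned the same class).
valid = np.all([m > 0 for m in maps], axis=0)   # 0 assumed to be "no data"
for i in range(len(maps)):
    for j in range(i + 1, len(maps)):
        agreement = np.mean(maps[i][valid] == maps[j][valid])
        print(f"{paths[i]} vs {paths[j]}: {agreement:.2%} agreement")
```

High agreement among independent products does not prove the map is correct, but persistently low agreement is a useful warning sign.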
In my experience, most people use Google Earth and Google Maps to select ground control points, using the highest resolution possible and human interpretation.
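To make that interpretation systematic, one hedged option is to draw a stratified random sample of pixels from the classified map and export their coordinates for inspection in Google Earth; the sketch below assumes a single-band classified GeoTIFF, and the file names and sample size are placeholders:

```python
import csv
import numpy as np
import rasterio
from rasterio.transform import xy
from rasterio.warp import transform as warp_transform

np.random.seed(42)
SAMPLES_PER_CLASS = 30  # placeholder sample size per mapped class

with rasterio.open("my_classification.tif") as src:     # hypothetical file name
    classes = src.read(1)
    with open("validation_points.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["lon", "lat", "mapped_class"])
        for cls in np.unique(classes[classes > 0]):      # 0 assumed to be "no data"
            rows, cols = np.where(classes == cls)
            pick = np.random.choice(len(rows), size=min(SAMPLES_PER_CLASS, len(rows)), replace=False)
            xs, ys = xy(src.transform, rows[pick], cols[pick])
            # Reproject map coordinates to WGS84 lon/lat so the points open cleanly in Google Earth.
            lons, lats = warp_transform(src.crs, "EPSG:4326", xs, ys)
            for lon, lat in zip(lons, lats):
                writer.writerow([lon, lat, int(cls)])
```

Each exported point can then be labelled by visual interpretation and used in a standard confusion-matrix accuracy assessment.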
One possible approach would be to look at pattern metrics for a proxy landscape and see how your classified map (I am assuming you are using some kind of automatic classification of remote sensing data) compares. This is likely to be quite rough, but, for example, statistics such as clumpiness, edge density and shape index (landscapemetrics package in R) may give you an approximate idea of how your classification algorithm is performing in general terms, i.e., whether it is producing a realistic landscape or not. It is not so useful, however, if you want to know how well your algorithm identifies particular land cover types.
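As a rough illustration of that idea in Python (rather than the landscapemetrics R package mentioned above), one could compute a simple metric such as edge density for both the classified map and a comparable proxy landscape and compare the values; the file names are placeholders and the metric here is only a crude approximation:

```python
import numpy as np
import rasterio

def edge_density(classes: np.ndarray, cell_size: float) -> float:
    """Total length of class boundaries per unit area (a crude edge-density metric)."""
    # Count cell borders where horizontally or vertically adjacent cells differ in class.
    horizontal_edges = np.sum(classes[:, :-1] != classes[:, 1:])
    vertical_edges = np.sum(classes[:-1, :] != classes[1:, :])
    edge_length = (horizontal_edges + vertical_edges) * cell_size
    area = classes.size * cell_size ** 2
    return edge_length / area

# Hypothetical inputs: your classification and a classified map of a comparable "proxy" landscape.
for name in ["my_classification.tif", "proxy_landscape.tif"]:
    with rasterio.open(name) as src:
        ed = edge_density(src.read(1), abs(src.transform.a))  # transform.a is the pixel width
        print(f"{name}: edge density = {ed:.4f} per map unit")
```

If the classified map shows an edge density wildly different from the proxy landscape, the classifier may be producing unrealistically noisy or unrealistically smooth output.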
Thank you very much for your answers. I need training samples and other geospatial information for the land cover and land use types since the study area lacks established government data on the issue. Seems like manual reference data collection in Google Earth is the best solution.
I think your question is interesting; however, answering it effectively requires a definition of several terms you use and the context within which you use them. What type of mapping are you dealing with: contours, elevations, land cover, etc.? What do you mean by "validate"? How do you quantify the accuracy of the validation? What is the final scale of your product?
From a surveying point of view, maps have coordinates, contours, elevations and whatever other features have been mapped. Map validation in that case means checking whether a scaled point on the map has the expected coordinates and elevation, falls within the right contour interval, and preserves the relative distances between measured physical features to some acceptable accuracy. In this sense, in your context, map validation is not possible.
Typically you need something more accurate against which to check what you have developed; otherwise map validation is meaningless. Would Google Earth, Google Maps or satellite data suit your purpose? If so, how do you deal with map projection and coordinate conversion issues?
I hope this sheds more light and brings into perspective some of the issues involved that you should consider.
As Dr. Keith and Dr. Sharma said, I mostly used Google Earth and Google Maps. You can also use old maps to check the accuracy. In the case of Myanmar, some old maps show land use and land cover to some extent. I am not sure which part of Myanmar you are working in.
I really can't find any old LULCC maps for Myanmar except for the one produced by UNEP, which is not very detailed at the township level. Can you please provide me with some references? I am looking at Rakhine, Kachin and Shan. Thank you very much.
There are many land use/cover datasets out there that might fit your study area, depending of course on how detailed you want the information. Two European Space Agency datasets, Globcover and CCI-LC, certainly cover Myanmar, with a series of time periods available from the 1990s to 2014 or 2015, I think. Also look at the USGS datasets (e.g. EROS).
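If one of these products is treated as a pseudo-reference, a hedged sketch of the comparison could look like the following; it assumes both rasters have been resampled to the same grid with a harmonized class legend, and the file names are placeholders:

```python
import numpy as np
import rasterio
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical co-registered rasters with a harmonized class legend.
with rasterio.open("my_classification.tif") as src:
    predicted = src.read(1).ravel()
with rasterio.open("esa_cci_lc_resampled.tif") as src:
    reference = src.read(1).ravel()

valid = (predicted > 0) & (reference > 0)          # 0 assumed to be "no data"
overall_accuracy = np.mean(predicted[valid] == reference[valid])
kappa = cohen_kappa_score(reference[valid], predicted[valid])

print(f"Overall agreement with pseudo-reference: {overall_accuracy:.2%}")
print(f"Cohen's kappa: {kappa:.3f}")
print(confusion_matrix(reference[valid], predicted[valid]))
```

The result is agreement with another imperfect product rather than true accuracy, so it should be reported as such.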
The best option would be to have at least some remote help from locals and to use/compare Google Earth/Maps historical imagery from different time periods (and perhaps some crowdsourced mapping platforms).
From my research experience, I often make use of thematic maps of the area (where I could not perform ground-truth data collection) and then perform a broad cross-validation analysis using several imagery sources covering the area.
The value of OpenStreetMap Historical Contributions as a Source of Sampling Data for Multi-temporal Land Use/Cover Maps
https://www.mdpi.com/2220-9964/8/3/116
The study used OpenStreetMap data history to generate LULC datasets with one-year timeframes as a way to support regional and rural multi-temporal LULC mapping.
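As an illustration only (a current-snapshot sketch, not the full history extraction described in the paper), land-use features tagged in OpenStreetMap can be pulled for a bounding box through the public Overpass API and used as candidate sample locations; the bounding box below is a placeholder:

```python
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Placeholder bounding box (south, west, north, east) -- replace with your study area.
query = """
[out:json][timeout:60];
(
  way["landuse"](21.0,92.0,22.0,93.0);
  relation["landuse"](21.0,92.0,22.0,93.0);
);
out center;
"""

response = requests.post(OVERPASS_URL, data={"data": query})
response.raise_for_status()
elements = response.json().get("elements", [])

# Use each feature's centroid and its landuse tag as a candidate reference sample.
samples = [
    (el["center"]["lat"], el["center"]["lon"], el["tags"]["landuse"])
    for el in elements
    if "center" in el and "landuse" in el.get("tags", {})
]
print(f"{len(samples)} candidate land-use sample points")
```

OSM coverage and tagging quality vary strongly by region, so any such samples should be screened before being used as reference data.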
If land cover in your study area is not changing very rapidly, you may use Google Earth to validate your classification map. You can also sign up on the Planet Labs website for 14 days of free access to very high resolution images with spatial resolutions from 0.7 m to 3 m.
In the first case, the study area was affected not only by insecurity but also by Ebola virus disease. In this context, there were accessible areas and areas that were strongly advised against. I collaborated with the health authorities to establish a map of the problem areas, because they too were confronted with the security situation on top of the Ebola response. So I had to maximize the sampling in the favorable areas, and to ensure the representativeness of the samples I collaborated with an expert from the region.
In the other case, it was too dangerous to do fieldwork. So I relied on a trustworthy reference map: I mapped the land cover for the same date as this reference, and then relied on the areas whose land use did not change between the reference year and the year I actually wanted to map. I also did meticulous work to understand the different spectral signatures and their corresponding classes. To do this I relied on my map for the reference date, the reference map itself, and data from Google Earth.
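A hedged sketch of that unchanged-areas idea, assuming the reference map and a preliminary classification of the target year share the same grid and legend (file names are placeholders): pixels assigned the same class on both dates become candidate training/validation samples.

```python
import numpy as np
import rasterio

# Hypothetical co-registered rasters sharing the same class legend and grid.
with rasterio.open("reference_year_map.tif") as src:
    reference = src.read(1)
    profile = src.profile
with rasterio.open("target_year_preliminary.tif") as src:
    target = src.read(1)

# Pixels given the same class on both dates are assumed unchanged, so they can
# serve as pseudo training/validation samples for the target year.
unchanged = (reference == target) & (reference > 0)    # 0 assumed to be "no data"
print(f"{unchanged.mean():.1%} of pixels are candidate 'unchanged' samples")

# Write the unchanged-pixel classes out as a sample mask (class codes assumed < 256).
profile.update(dtype=rasterio.uint8, count=1, nodata=0)
with rasterio.open("unchanged_samples.tif", "w", **profile) as dst:
    dst.write(np.where(unchanged, reference, 0).astype(rasterio.uint8), 1)
```

Samples drawn this way are biased toward stable areas, so they are best used for spectral-signature training rather than for an unbiased accuracy estimate.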
If ground-truth validation is not possible, you can rely on UHD 4K Google Earth images. They give a realistic picture of the given area at fine resolution.
Run multiple analyses that should give the same result, then check whether the outputs correlate with one another; if every analysis agrees, your final map is validated. Related research articles can also be a good source of reference for validating your results.
Here is the link to our paper, in which we tried to overcome the shortage of ground soil moisture data over Africa. It may be useful for you, and if you need more discussion you are welcome to get in touch:
Spatial Evaluation and Assimilation of SMAP, SMOS, and ASCAT...
First, you can use old maps at different scales to try to verify your new map; at the very least you will find some unchanged features that you can use as benchmarks.
Second, use multiple data sources and multiple analyses and see whether you get the same or correlated accuracies.
Third, check relative accuracies, for example by comparing datasets from different dates in a time series and checking the differences (see the sketch after this list).
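A hedged sketch of such a relative check across a time series, assuming several co-registered classifications for different dates (file names are placeholders); it reports how much of the map changes class between consecutive dates so that implausibly large jumps can be flagged:

```python
import numpy as np
import rasterio

# Hypothetical classifications of the same area for several dates, on a common grid.
paths = ["lulc_2000.tif", "lulc_2010.tif", "lulc_2015.tif"]
maps = [rasterio.open(p).read(1) for p in paths]

# Relative consistency: the share of valid pixels that switch class between
# consecutive dates. Abrupt, implausibly large changes hint at classification error.
for earlier, later, p0, p1 in zip(maps, maps[1:], paths, paths[1:]):
    valid = (earlier > 0) & (later > 0)                 # 0 assumed to be "no data"
    changed = np.mean(earlier[valid] != later[valid])
    print(f"{p0} -> {p1}: {changed:.1%} of pixels change class")
```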
For this purpose, manual or synthetic methods can be useful. Human-annotated points or real location marks can be helpful, and splitting the reference data into training and testing sets could be beneficial too.
Good discussion. One possible approach is to use the imagery history in Google Earth to verify your maps, because the resolution is very high.