Tuesday, September 29, 2015

Lab 5a - Intro to ERDAS Imagine and Digital Data 1

This week's lab was mostly a tutorial on how to navigate within ERDAS Imagine.

Detail view of land cover data set within Washington State.
The functionality of ERDAS Imagine is great - it makes sense on the user's end, and one doesn't really need to drill down into various properties just to change one little thing.  Being able to mess with each color band was neat, and it helped make certain aspects of remotely sensed imagery processing a little less abstract.  However, one drawback does seem to be the program's map-making capabilities (there are known bugs).  The above map was finished using ArcGIS.

The view above is an inset of a much larger classed raster image.  It took some tries to get the correct view extent to transfer over to ArcGIS... even though this step was spelled out in the directions I suppose I still needed to do my own trial and error to learn this part! 

Lab 5 - Vehicle Routing Problem

Screen capture of a solved vehicle routing problem.
This week's lab focused on the vehicle routing problem, and how ArcGIS Network Analyst extension solves for this problem.  The screenshot above shows the results of the analysis, which is a series of 22 routes with various delivery stops.

Solving any vehicle routing problem (VRP) can become a bit complex, as the behind-the-scenes work just to build up the truck and employee cost information, customer information, and depot information can be very extensive.  That does not even begin to include the use of a suitable road network on which to model the routes, or the definitions of where each truck/route can go... happily, the great majority of this work was completed for us prior to beginning this lab.

The solved routes in the screenshot above utilize all 22 trucks in the distribution company's fleet.  Each truck was assigned to a certain "route zone," meaning that ideally it does not leave its zone to make deliveries in other areas.  However, a few trucks were allowed to stray outside their routine service zones in order to be part of a more profitable solution for this particular company.  Why have a service zone?  To provide continuity for the customer by having the same delivery driver, instead of random unknowns doing the drop-offs.

The one big issue with making service zones overly strict is this: the VRP solver can miss the obvious 'common-sense' solution in favor of an optimal solution that adheres to strict parameters.  Prior to the screenshot above our initial VRP route had been solved with such strictly defined service zones.  As operating costs are also an important factor, the VRP solver had provided only 14 routes (meaning 14 trucks) to shoulder the burden of delivering 128 orders across southern Florida.  This meant a lot of overtime, and in the end there were 6 orders left unfulfilled and several others that would have been delivered outside of regular business hours.

By tweaking a few items (mainly allowing a few trucks to make deliveries in adjacent delivery zones) the VRP solver was able to assign a truck route to deliver all of the orders, with only 1 order (out of 128) being made after normal business hours.  The overall revenue generated with the modified route also went up ... along with the cost to operate, but the rise in operating costs was comparable to the rise in revenue.  Assuming that a satisfied customer is one that does repeat business, choosing the optimal route based solely on the lowest operating cost simply does not make sense - one also needs to make sure that the customer's needs are being met.  This was accomplished with the modified VRP route.
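The tradeoff described above can be sketched as a toy comparison.  All of the revenue, cost, and penalty figures below are hypothetical - only the route and order counts come from the lab:

```python
# Hypothetical comparison of the two VRP solutions described above.
# The dollar amounts are made up for illustration; only the route and
# unfulfilled-order counts reflect the lab results.

def net_value(revenue, operating_cost, unfulfilled_orders, penalty_per_miss=50.0):
    """Net value of a solution, charging a hypothetical penalty per
    unfulfilled order as a proxy for lost repeat business."""
    return revenue - operating_cost - penalty_per_miss * unfulfilled_orders

strict = {"routes": 14, "revenue": 9000.0, "cost": 4000.0, "unfulfilled": 6}
relaxed = {"routes": 22, "revenue": 10500.0, "cost": 5500.0, "unfulfilled": 0}

for name, s in (("strict zones", strict), ("relaxed zones", relaxed)):
    value = net_value(s["revenue"], s["cost"], s["unfulfilled"])
    print(f"{name}: {s['routes']} routes, net value {value:.2f}")
```

Once unmet orders carry any cost at all, the relaxed-zone solution can come out ahead even though its raw operating cost is higher.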

Tuesday, September 22, 2015

Module 4 - Ground Truthing & Accuracy Assessment

This week was all about 'ground truthing' classification maps and evaluating the overall accuracy of the classes assigned.  Using the Land Use / Land Cover map created for Module 3 we 'ground truthed' our own classifications.  Overall my map from last week was 67% accurate... ouch!

The above map had been 'ground truthed' for accuracy - overall accuracy was found to be 67%.
Since physically visiting Pascagoula, Mississippi was not an option for me (or most of my classmates), we used Google Maps imagery as our higher-grade reference dataset, visually comparing each pre-selected sample location against it.
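The overall accuracy figure is simply the share of sample points whose mapped class matches the reference.  A minimal sketch, using hypothetical (mapped, reference) pairs rather than my actual sample data:

```python
# Each sample point pairs the class I mapped with the class seen in the
# reference imagery. These pairs are hypothetical stand-ins.

samples = [
    ("Residential", "Residential"),
    ("Cropland", "Grass"),        # a disagreement
    ("Forest", "Forest"),
    ("Beach", "Residential"),     # another disagreement
    ("Water", "Water"),
    ("Commercial", "Commercial"),
]

correct = sum(1 for mapped, reference in samples if mapped == reference)
overall_accuracy = correct / len(samples)
print(f"Overall accuracy: {overall_accuracy:.0%}")  # 4 of 6 correct
```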

The sample locations were derived using a stratified random scheme, with a target of at least 2 points per class.  Most classes had 3 points; two classes (which did not have very large representation on the map) ended up with only 1 point, and the largest class contained 4.  Sample locations were plotted using the Sampling Design Tool, a program created by NOAA's Biogeography Branch and available as a download through ESRI.
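The allocation logic behind a stratified scheme can be sketched roughly as follows - points per class proportional to class area, with a floor so small classes still get checked.  The class areas here are hypothetical, and this is only a sketch of the idea, not the Sampling Design Tool's actual algorithm:

```python
# Rough sketch of stratified point allocation: proportional to class area,
# with a floor for small classes. Rounding means the total can drift from
# the target, so some manual adjustment is usually still needed.

def allocate(class_areas, total_points, floor=1):
    total_area = sum(class_areas.values())
    return {cls: max(floor, round(total_points * area / total_area))
            for cls, area in class_areas.items()}

# Hypothetical class areas (arbitrary units)
areas = {"Residential": 40.0, "Forest": 30.0, "Water": 20.0,
         "Beach": 5.0, "Cropland": 5.0}
print(allocate(areas, 15))
```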

After checking Google Earth against the sample point locations on my original LULC map, I have some basic observations:

  1. My original cropland designation apparently was just grass mowed with a distinctive pattern... I'm still not buying it, and believe instead that the land use has changed over the years.
  2. My beach was apparently someone's home - they live right on the water and seem to have little to no landscaping.  That was not readily apparent at my MMU of 1:4,000.
  3. I should have set 3 sample points per class, and then manually adjusted their locations - many of the sample points were bunched up for some reason.  Classes that did not cover much map space could have given up a sample point, since the largest classes didn't really need to absorb those 'excess' points.

Monday, September 21, 2015

Lab 4 - Building Networks

Results from running a route on a network with historic traffic trends added.
This week's lab had us build a road network with the ArcGIS Network Analyst extension.  The road network was built twice - once with historic traffic pattern data added, and once without.

The additional functionality was added during the network building process by electing to model traffic patterns.  Associated with the roads dataset is a pair of tables containing historic traffic pattern data - for example, the percentage of normal (free-flow) driving speed a particular road segment carries at certain times of day, for every day of the week.  Once this data was linked to the network, the overall results became more realistic.
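The effect of those percentages on a single segment can be sketched simply.  The speed profile below is hypothetical - each entry is the fraction of free-flow speed at a given hour:

```python
# Sketch of how a historic speed profile adjusts one segment's travel time.
# Profile values (fraction of free-flow speed by hour) are hypothetical.

def travel_time_minutes(length_miles, free_flow_mph, pct_of_free_flow):
    """Travel time after scaling the segment's speed by the historic profile."""
    effective_mph = free_flow_mph * pct_of_free_flow
    return (length_miles / effective_mph) * 60.0

profile = {7: 0.55, 12: 0.85, 22: 1.00}  # hour of day -> fraction of free-flow speed

for hour, pct in profile.items():
    minutes = travel_time_minutes(2.0, 45.0, pct)
    print(f"{hour:02d}:00 -> {minutes:.1f} min")
```

The same 2-mile segment takes nearly twice as long at the 7 a.m. rush as it does late at night, which is exactly the kind of difference that made the traffic-aware network's results more realistic.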

Not shown above are the various results obtained from making minor changes to the route start and stop times, and to the day of the week for which the route was created.  The resulting route travel times changed by only a few minutes... and surprisingly, so did the overall distance traveled.  These route changes were so small that I could not detect them on my network, only on the route properties screen (such as in the example shown above).

Thursday, September 17, 2015

Lab 3 - Land Use / Land Cover Classification Mapping

This week we got to try our hand at classifying an aerial photograph.  Using the Land Use Land Cover classification system developed in 1976 by Anderson et al. for the USGS, we classed a single aerial photo to the second level.  I digitized at a fixed scale of 1:4,000, which set my minimum mapping unit (MMU).

Aerial map showing Level II Land Use and Land Cover Classification.
What does all this mean, exactly?  To start, land use is a bit different from land cover.  Land use shows human-based uses of the landscape (urban areas, agriculture), and land cover shows primarily natural settings (forests, water).  To classify a map one first needs to differentiate between farmland, urban areas, forested areas, etc.  That is the primary (Level I) classification.  To map something at the second classification level is to specify whether, for example, the areas within an urban area are for residential versus industrial use.
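The Anderson scheme encodes this hierarchy numerically: a Level II code's first digit is its Level I parent.  A small sketch with a handful of codes (only a few illustrative entries are shown - see the Anderson et al. paper for the full scheme):

```python
# Sketch of the Anderson (1976) two-level hierarchy as nested mappings.
# Only a few illustrative codes are included.

LEVEL_I = {
    1: "Urban or Built-up Land",
    2: "Agricultural Land",
    4: "Forest Land",
}

LEVEL_II = {
    11: "Residential",
    12: "Commercial and Services",
    13: "Industrial",
    21: "Cropland and Pasture",
    41: "Deciduous Forest Land",
}

def level_one_of(level_two_code):
    """A Level II code's first digit gives its Level I parent class."""
    return LEVEL_I[level_two_code // 10]

print(level_one_of(12))  # a commercial area rolls up to Urban or Built-up Land
```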

The map above shows various classifications within a small section of Pascagoula, Mississippi.  For consistency I digitized my classification polygons at the 1:4,000 scale only.  While I may have zoomed in or out to double-check a classification type or my overall location on the aerial photo, when I digitized an area it was always at 1:4,000 (the scale corresponding to my MMU).

Completing this map was a bit rough at times - occasionally I felt like I was adding too much detail, and other times like I wasn't adding enough.  Since a Level II classification is meant to be a bit coarse, perhaps my biggest lesson was in learning to let go of the details!  For example, a high-tension wire crosses through the lower left of the aerial photo.  This was not mapped in, mainly because doing so would have been very difficult given the level of detail it would require... if my MMU were a bit different then perhaps it would have been possible; however, I would also probably still be working on this map!

Reference:
Anderson, J. R., E. E. Hardy, J. T. Roach, and R. E. Witmer.  1976.  A Land Use and Land Cover Classification System for Use with Remote Sensor Data.  Geological Survey Professional Paper 964.  Washington, D.C.: United States Government Printing Office.

Monday, September 14, 2015

Lab 3 - Determining the Quality of Road Networks

One way to measure the quality of a road network is to evaluate its overall completeness.  The idea behind this is that the more roads mapped within a given network, the greater the likelihood that the network has better coverage of a given area.  This was the focus of our lab this week.

Do note, however, that just because a network has more coverage does not necessarily make it more spatially accurate... those lines still need to be in the right place!  Our lab focused only on comparing the completeness of one road network against another for the same area - testing the spatial accuracy of a road network using points was covered last week (Lab 2).

Technical Notes

The first comparison metric is exactly what one might think: we totaled the line segment lengths per road network and compared the results.  At first glance the TIGER Roads data is more complete than the Jackson Co. street centerlines data.

After determining these lengths we then needed to break down just how complete each road network was per grid cell.  We overlaid a grid (a series of square polygons) covering the whole of Jackson County, then split up the road network polylines by grid cell.  This was done using the Intersect (analysis) tool. 

Once the road segments were separated their respective lengths per grid cell were then updated using the Calculate Geometry tool.  The grid cell data was then joined to the road segments - this made it easier to obtain the overall road length totals per grid cell (per road network). 

The TIGER Roads data also proved more complete in terms of overall length per grid cell... but the Jackson County street centerline data is more complete in a greater number of grid cells.  The results are depicted in the map below: a choropleth map using (Jenks) Natural Breaks, with the results expressed as the percentage variance of the Jackson County street centerline network from the TIGER Roads data.



The final result, explained in terms of how it relates to the Jackson Co. street centerline data.
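The per-cell comparison boils down to one small formula.  A sketch with hypothetical per-cell lengths (the actual county totals are not reproduced here):

```python
# Sketch of the per-grid-cell completeness comparison: total road length
# per network in each cell, then percentage variance of the county data
# relative to TIGER. All lengths below are hypothetical.

tiger = {"A1": 12.4, "A2": 8.0, "B1": 5.5}    # miles of TIGER roads per cell
county = {"A1": 11.8, "A2": 9.2, "B1": 5.5}   # miles of county centerlines per cell

def pct_variance(county_len, tiger_len):
    """Positive means the county network is longer than TIGER in that cell."""
    return (county_len - tiger_len) / tiger_len * 100.0

for cell in sorted(tiger):
    print(f"{cell}: {pct_variance(county[cell], tiger[cell]):+.1f}%")
```

Classing those per-cell percentages with Natural Breaks is what produces the choropleth map above.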

Tuesday, September 8, 2015

Lab 2 - Visual Interpretation


This week’s lab focused on the elements of aerial photo interpretation.  To accomplish this we viewed two separate photos, then identified various elements within each. 

Map 1 - Comparing tone and texture.

The first map is a study in texture and tones.  We were to identify various tones (ranging from very dark to very light) and various textures (ranging from very fine to very coarse).  I found the tones to be a bit of a challenge, as to me the gradation of very dark to dark was a bit subjective.  Never fear, I was able to tell the basic difference between light and dark!
 
Map 2 - Picking out features based on specific criteria.

The second map required that we identify features within the photo that correspond to specific criteria: association, shadows, shape and size, and pattern.  It was interesting to see just how many findings based on association I was able to make… I didn't map all of them in, as the assignment only called for two examples, but there are quite a few, as can be seen above.

Monday, September 7, 2015

Lab 2 - Determining Quality of Road Networks

This week's lab continued in the theme of spatial accuracy and data quality.  Using the National Standard for Spatial Data Accuracy (NSSDA) statistics we compared the accuracy of two street datasets in Albuquerque, New Mexico.

Street test point locations in Albuquerque, New Mexico.

The first data set consisted of streets data provided by the City of Albuquerque - this was our 'truth' layer.  The second data set was a portion of the USA Streets layer provided by ESRI.  Both data sets were compared at 55 different test point locations.  Test point locations were selected using an ArcGIS Desktop Add-In called the 'Sampling Design Tool'.  These sample locations were then moved to the nearest four- or three-way intersection on the City of Albuquerque data.  Sample locations were discarded if the closest City of Albuquerque streets intersection did not also have a corresponding ESRI USA Streets intersection (the ESRI USA Streets layer was not as complete as that provided by the City of Albuquerque).

The reference layer was then hand digitized using aerial orthophotos of the study area (outlined in blue above).  All reference points were taken at what can be considered the 'centerline' of the street, as viewed in the orthophoto image.  The two data sets were then compared to this reference location using an Excel table to compute the Euclidean distance between the reference x and y locations and the corresponding point in the streets layer under comparison.  After following the NSSDA worksheet (the details of which can be viewed here: https://www.fgdc.gov/standards/projects/FGDC-standards-projects/accuracy/part3/chapter3) the results were as follows:
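The worksheet's core computation can be sketched in a few lines: sum the squared x and y offsets at every test point, take the root mean square, and multiply by 1.7308 to get the 95% horizontal accuracy statistic (that multiplier assumes the x and y error components are roughly equal).  The coordinates below are hypothetical:

```python
# Sketch of the NSSDA horizontal accuracy statistic. Coordinates are
# hypothetical; in the lab this arithmetic lived in an Excel table.

import math

def nssda_horizontal(test_xy, ref_xy):
    n = len(test_xy)
    # Sum of squared offsets between tested points and reference points
    sum_sq = sum((tx - rx) ** 2 + (ty - ry) ** 2
                 for (tx, ty), (rx, ry) in zip(test_xy, ref_xy))
    rmse = math.sqrt(sum_sq / n)
    return 1.7308 * rmse  # horizontal accuracy at the 95% confidence level

test = [(100.0, 200.0), (150.0, 250.0)]   # tested streets layer (hypothetical)
ref = [(103.0, 204.0), (149.0, 252.0)]    # digitized reference (hypothetical)
print(f"Tested {nssda_horizontal(test, ref):.1f} ft horizontal accuracy "
      f"at 95% confidence level")
```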


USA Streets Positional Accuracy:  Using the National Standard for Spatial Data Accuracy, the data set tested 264.7 feet horizontal accuracy at 95% confidence level.

Albuquerque Streets Positional Accuracy: Using the National Standard for Spatial Data Accuracy, the data set tested 51.5 feet horizontal accuracy at 95% confidence level.