
Tuesday, July 14, 2015

Lab 8 - Damage Assessment

For our final lab we completed a mini damage assessment within an area affected by Hurricane Sandy, focusing specifically on a small section of coastline along the New Jersey shore.

Aerial view of our storm damage assessment area (bounded by the pink box).
 

Damage Assessment

Aerial photos (taken before and just after the hurricane) were examined to determine the extent of the damage. As shown above, the study area was subdivided by ownership parcels; structures within each parcel were digitized and coded according to the visual extent of the damage shown in the aerial photographs. The digitized homes are represented with triangles in various colors, the coding of which is as follows:
  • red = structure was destroyed
  • orange = structure had sustained major damage
  • yellow = minor damage
  • light green = structure was affected by the storm in some way
  • dark green = no visible structural damage
The above categories were somewhat subjective - the destroyed and the obviously undamaged structures were easy to code, but those falling between the two extremes were more difficult. In general the code for "affected" was reserved for parcels that had previously only contained a parking lot, minor damage was characterized by evidence of other buildings pushed up against an otherwise stable-looking structure, and major damage was reserved for buildings with partially missing sections. Field verification would be absolutely necessary to validate the above codes - the aerial analysis really represents a quick estimate based on imperfect data (poor lighting, pixelation, the inability to see wall damage, etc.).

Summary of the Damage Assessment Results

In all, a total of 127 separate structures were identified and coded by viewing the pre-Sandy aerial photos. These structures were further subdivided into groups based on their distance from the pre-Sandy coastline. The distance bands were created using the Multiple Ring Buffer tool in ArcGIS (0–100 m, 100–200 m, and 200–300 m).
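The buffer step could also be scripted. Below is a minimal arcpy sketch, assuming a hypothetical pre-storm coastline layer named coastline_pre_sandy and a hypothetical geodatabase workspace:

import arcpy

arcpy.env.workspace = r"C:\GIS\lab8\lab8.gdb"  # hypothetical workspace
arcpy.env.overwriteOutput = True

# Multiple Ring Buffer: 100, 200, and 300 m rings around the pre-storm
# coastline; "ALL" dissolves the rings into non-overlapping bands.
arcpy.MultipleRingBuffer_analysis("coastline_pre_sandy",
                                  "coastline_buffers",
                                  [100, 200, 300], "Meters",
                                  "distance", "ALL")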

Once the distance bands were established, the digitized structure points were selected based on their location within each buffer zone. The results were tallied as the number of structures within each distance band, along with a count of each structural damage type per band (using the Summary Statistics tool). The results for my analysis are shown below.

Count of Structures within Distance Bands from the Coastline

Structural Damage Category | 0–100 m | 100–200 m | 200–300 m
No Damage                  |       0 |        32 |        44
Affected                   |       5 |        11 |         6
Minor Damage               |       0 |         0 |         1
Major Damage               |       2 |         3 |         1
Destroyed                  |       8 |        13 |         1
Total                      |      15 |        59 |        53
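For the tallies above I used interactive selections plus the Summary Statistics tool; a scripted equivalent might swap the Select By Location step for a Spatial Join, roughly as sketched below (the layer and field names are hypothetical):

import arcpy

arcpy.env.workspace = r"C:\GIS\lab8\lab8.gdb"  # hypothetical workspace

# Tag each digitized structure point with the distance band it falls in.
arcpy.SpatialJoin_analysis("structure_points", "coastline_buffers",
                           "structures_with_band")

# Summary Statistics: one output row per distance band / damage category,
# with a count of structures in each combination.
arcpy.Statistics_analysis("structures_with_band", "damage_summary",
                          [["Damage", "COUNT"]],
                          ["distance", "Damage"])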

Monday, July 6, 2015

Lab 7 - Coastal Flooding

This week's lab focused on mapping sea level rise and quantifying the effects it would have on local populations.

Projected 6 ft. Sea Level Rise in Honolulu, Hawaii
The above map shows what a projected 6 foot sea level rise would mean for a small section of coastline within Honolulu, Hawaii. The inundation area is overlaid on the current population density per census tract. As shown above, quite a bit of the currently populated area of Honolulu would be impacted.

The flood zone area was created by comparing the DEM elevation values against the total sea level rise using the Less Than tool (the rise was converted from feet to meters, so each DEM cell - which holds an elevation value in meters - was compared against 2.33 m). Each DEM cell is 3 m x 3 m, so each cell covers 9 square meters.

The resulting raster created by the Less Than tool was converted to a vector format using the Raster to Polygon tool. This layer was then displayed on top of another raster showing total depth within the flooded areas (also created from the source DEM, using the Extract by Attributes and Minus tools) and the population density data. The flood layer was set to 50% transparency to allow viewing of the layers below it.
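A minimal arcpy sketch of the raster steps described above, with hypothetical layer names; the depth calculation uses Con as a stand-in for the Extract by Attributes masking step:

import arcpy
from arcpy.sa import LessThan, Con, Minus

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\lab7\lab7.gdb"  # hypothetical workspace

dem = "honolulu_dem"   # hypothetical 3 m DEM, elevations in meters
flood_level = 2.33     # total sea level rise used in the lab, in meters

# Cells below the flood level get a value of 1, everything else 0.
flood_zone = LessThan(dem, flood_level)
flood_zone.save("flood_zone")

# Vector version of the flood extent, used for the transparent overlay.
arcpy.RasterToPolygon_conversion(flood_zone, "flood_zone_poly", "SIMPLIFY")

# Depth within the flooded area: flood level minus ground elevation,
# limited to flooded cells (Con stands in for Extract by Attributes here).
depth = Con(flood_zone, Minus(flood_level, dem))
depth.save("flood_depth")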

Additional demographic analysis was completed using 2010 U.S. Census data. Three variables were tracked within the flooded versus non-flooded areas: total white population, total home owner population, and total population of persons 65 and older. Of the three demographics, the most directly affected are home owners, who represent half of the total population and are the hardest hit percentage-wise under the 6 ft. sea level rise scenario. The least affected are those 65 and older - these individuals represent only about 18% of the entire District of Honolulu population, and less than a quarter of them fall within the area of projected sea level rise.

Tuesday, June 30, 2015

Lab 6 - Crime Analysis

This week's lab centered on how to run various types of hotspot analysis. Our focus was specifically on determining crime hotspots, although I can see alternative uses in generating hotspots of other sorts (statistically significant concentrations of certain types of archaeological sites or artifact types, for example).

Overlay of three different hotspot mapping techniques.
The above map shows the results of three separate hotspot analyses. The most concentrated area (shown in red) was derived from the Kernel Density tool. Running this particular tool is superficially easy, but the results will vary depending on the parameters used. For example, would a search radius of 0.25 miles be enough? Or up to 1 mile? I ran the tool four separate times, with search radii from 0.25 miles to 1.0 mile, and finally settled on the results from the 0.5 mile search radius. How one displays the results can also alter the final view - the results above include only values at least three times the mean, in other words the highest magnitudes of crimes per unit area (raster cell size = 100 feet, search radius = 0.5 miles).
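A hedged arcpy sketch of that kernel density run, assuming the crime points live in a hypothetical layer named burglaries_2007 and that the map units are feet:

import arcpy
from arcpy.sa import KernelDensity, Con

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\lab6\lab6.gdb"  # hypothetical workspace

# Kernel density of the burglary points: no population field, 100 ft
# cells, 0.5 mile (2,640 ft) search radius.
density = KernelDensity("burglaries_2007", "NONE", 100, 2640)
density.save("burglary_density")

# Keep only cells at least three times the mean density for display.
mean_val = float(arcpy.GetRasterProperties_management(density, "MEAN").getOutput(0))
hotspot = Con(density >= 3 * mean_val, 1)
hotspot.save("kd_hotspot")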

The second most concentrated area is the grid-based overlay, shown in bright yellow above. The results are based on counts aggregated to a uniform grid - in this case, 2007 burglaries per grid cell. The final results represent only the top quintile (the highest 20% of cells), so only the most concentrated areas of burglaries are shown.

The final view also takes up the most map space - this is the result of the Local Moran's I analysis, shown in blue. While this analysis accounts for spatial autocorrelation, its final output covers too much area - in my mind this might be a deterrent to decision makers who simply wish to know where to concentrate their resources.

Tuesday, June 23, 2015

Lab 5 - Spatial Accessibility Modeling

This week we explored Network Analyst and its use in spatial accessibility modeling. The term 'spatial accessibility' essentially refers to service area (such as for fire departments, grocery stores, etc.) - how long will it take to get from here to there, and what areas fall under the same time frame? This type of analysis can also be used for determining the best placement of a new store location, or, if need be, the best candidates for a store closure.

Side by side comparison of service areas.
The above map represents a simple service area analysis comparison. The schools represent the origin points, and the polygons surrounding them represent their 'service area', broken up into minutes of drive time. The comparison shows what effect the loss of one of the schools would have on the overall service area network. As can be seen, only those living within the relatively populated section of Travis County, Texas have the best access to the ACC school system. With the loss of one school this becomes even more true, and those further out may need to consider other options (if possible) for community college.

Network Analyst was utilized to run the service area analysis; a pre-existing streets network layer was used to calculate drive times. The facilities used for the analysis are the ACC locations, and the polygons displayed represent a travel radius of up to 15 minutes from the school locations. The analysis needed to be run twice - once to show the overall ACC service area, and again to show the effects of the planned campus closure.
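The two runs could also be scripted with the Network Analyst module. Here is a rough sketch, where the network dataset path, the impedance attribute name ("DriveTime"), and the facilities layer are all assumptions:

import arcpy

arcpy.CheckOutExtension("Network")

network = r"C:\GIS\lab5\streets_nd"   # hypothetical network dataset
facilities = "acc_campuses"           # hypothetical ACC campus points

# Service area layer with 5/10/15 minute drive-time breaks, measured
# traveling away from each facility.
sa_layer = arcpy.na.MakeServiceAreaLayer(network, "ACC_ServiceArea",
                                         "DriveTime", "TRAVEL_FROM",
                                         "5 10 15").getOutput(0)

# Load the campuses as facilities and solve; repeat with the reduced
# campus list to model the planned closure.
sublayers = arcpy.na.GetNAClassNames(sa_layer)
arcpy.na.AddLocations(sa_layer, sublayers["Facilities"], facilities)
arcpy.na.Solve(sa_layer)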



Monday, June 15, 2015

Lab 4 - Visibility Analysis

This week in GIS Applications we learned how to complete viewshed and line-of-sight analysis. This particular blog post will focus on one very small part of our lab, which is a visibility analysis of hypothetical street cameras at the finish line of the Boston Marathon.

Visibility analysis showing the viewshed of street cameras.



The screenshot above shows three hypothetical street cameras placed around the finish line for the Boston Marathon. The yellow circle represents my area of concern. The varying colors emanating from the cameras represent the portions of the street that are visible... unfortunately I did not include a key in my screenshot. If the key were there it would show that the lightest red means only one camera can see that portion of the street, the medium red means two cameras can, and the darkest red means all three cameras can.

I used the following settings for the cameras when running the visibility analysis: all cameras were set to an offset height of 100 ft., and all camera viewing angles were set to start at 45 degrees and end at 135 degrees. My first camera had been placed for me at the west end of the street, so to get the best coverage I placed another camera on the north side of the street by the finish line. My third camera was placed on the south side of the street, east of the finish line.
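One way to script these settings is to store them as observer fields, which the Spatial Analyst Viewshed tool reads from the camera points (OFFSETA for observer height, AZIMUTH1/AZIMUTH2 for the start and end viewing angles). The layer names below are hypothetical:

import arcpy
from arcpy.sa import Viewshed

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\lab4\lab4.gdb"  # hypothetical workspace

cameras = "camera_points"   # hypothetical observer layer
elev = "boston_elev"        # hypothetical elevation surface

# Observer parameters: 100 ft offset, viewing angle from 45 to 135 degrees.
for field, value in (("OFFSETA", 100), ("AZIMUTH1", 45), ("AZIMUTH2", 135)):
    arcpy.AddField_management(cameras, field, "DOUBLE")
    arcpy.CalculateField_management(cameras, field, str(value), "PYTHON_9.3")

# Output cell values record how many cameras can see each cell (1-3).
visible = Viewshed(elev, cameras)
visible.save("camera_viewshed")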


I suppose that in order to better mirror reality (are all street cameras placed on streets with the exact same viewing angle?) one might need to run this analysis multiple times (so once for all cameras that have a viewing angle of 0 - 90 degrees, another analysis for those that have viewing angles of 45 - 135 degrees, etc.). Unfortunately I'm not sure how to successfully combine the results of all these viewing angle options - but I suspect it might ultimately involve some map algebra.
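If I were to try that, my guess (and it is only a guess) is that the separate viewshed rasters could be combined with some simple map algebra, along the lines of the sketch below, where the two input rasters are hypothetical outputs from runs with different angle settings:

import arcpy
from arcpy.sa import Raster, Con, IsNull

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\lab4\lab4.gdb"  # hypothetical workspace

vs_a = Raster("viewshed_0_90")     # hypothetical run, 0 - 90 degree angles
vs_b = Raster("viewshed_45_135")   # hypothetical run, 45 - 135 degree angles

# Treat NoData (never seen) as zero, then add: the result counts how many
# camera/angle combinations can see each cell.
combined = Con(IsNull(vs_a), 0, vs_a) + Con(IsNull(vs_b), 0, vs_b)
combined.save("viewshed_combined")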


Monday, June 8, 2015

Lab 3 - Watershed Analysis

This week in GIS Applications we covered the topic of watershed analysis. The analysis itself requires a lot of steps, although thankfully they are fairly intuitive and standardized.

Our final map output for this week shows a comparison of a modeled watershed and the actual limits of a defined watershed - for my lab example I focused on an area of the Anahola Stream Watershed on Kauai Island, Hawaii.

View of a modeled watershed versus reality.

To create my model I selected a pour point location along a stream on the edge of my DEM (shown above as the 'landscape' underneath the main map elements). I then ran the Watershed tool (found in ArcToolbox under Spatial Analyst Tools > Hydrology). My inputs were a previously created flow direction raster along with my selected pour point. The result was a raster showing the watershed extent in light green above - although for display and analysis purposes I converted the raster to a polygon file.
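The same sequence can be scripted; here is a minimal arcpy sketch with hypothetical layer names. The flow direction raster was provided in the lab, but I include that step (plus a Snap Pour Point step, a common precaution that was not part of my workflow) for completeness:

import arcpy
from arcpy.sa import FlowDirection, FlowAccumulation, SnapPourPoint, Watershed

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\lab3\lab3.gdb"  # hypothetical workspace

dem = "anahola_dem"         # hypothetical DEM
pour_point = "pour_point"   # hypothetical digitized pour point

# Flow direction from the (ideally sink-filled) DEM.
flow_dir = FlowDirection(dem)

# Snap the pour point to the highest-accumulation cell within 30 m so it
# sits exactly on the modeled stream.
snapped = SnapPourPoint(pour_point, FlowAccumulation(flow_dir), 30)

# Delineate the watershed draining to the pour point, then convert it to
# a polygon for display and comparison with the published extent.
basin = Watershed(flow_dir, snapped)
basin.save("anahola_watershed")
arcpy.RasterToPolygon_conversion(basin, "anahola_watershed_poly", "SIMPLIFY")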

What is striking is how much smaller my modeled watershed is in comparison to the extent of the actual Anahola Stream Watershed. Note that only the connected stream segments are shown within my modeled watershed extent. I believe that my watershed extent is directly due to my choice of pour point location... in fact the watershed area would be even smaller had I selected a pour point further upstream.

Tuesday, June 2, 2015

Lab 2 - Least-Cost Path and Corridor Analysis

This week's lab focused on determining least-cost paths and creating corridors using cost distance surfaces. The map layout below shows the result of a corridor analysis of black bear movement between two forested areas, in this case sections of the Coronado National Forest.


View of possible bear routes between forested areas.


To run the analysis a cost surface was created - first by converting all vector data to raster, then by reclassifying the values of the input data. The cost surface inputs included distance from roads, elevation ranges, and land cover types. Each input surface was classified, and all of them were then combined using the Weighted Overlay tool. The weighted overlay results were then inverted using the Minus tool, since in our model the higher values actually represent the more desirable areas for black bears to travel within. Once complete, the Cost Distance tool was run twice - once for each 'source' location. These results were then used as the inputs to the Corridor tool.
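The back half of that workflow (inversion, cost distance, corridor) might look like the following arcpy sketch. The suitability raster and source layer names are hypothetical, and the 11 - x inversion assumes a 1-10 evaluation scale:

import arcpy
from arcpy.sa import Raster, Minus, CostDistance, Corridor

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\lab2\lab2.gdb"  # hypothetical workspace

suitability = Raster("bear_suitability")  # weighted overlay output, 1-10
source_a = "coronado_block_a"             # hypothetical forest polygons
source_b = "coronado_block_b"

# Invert suitability into cost: Cost Distance treats high values as
# expensive to cross, but in our model high values are desirable.
cost = Minus(11, suitability)

# Accumulated travel cost away from each forest block.
cost_a = CostDistance(source_a, cost)
cost_b = CostDistance(source_b, cost)

# Sum of the two cost surfaces; the lowest values trace the best
# movement corridors between the blocks.
corridor = Corridor(cost_a, cost_b)
corridor.save("bear_corridor")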

Other Thoughts

This lab was actually very difficult for me - I kept hitting all kinds of metaphorical walls until I finally realized what small random misstep I had taken. And the missteps were small - for example, I had trouble getting my weighted overlay results to show something other than linear-like features. It wasn't until I really thought about what the data was meant to represent that it clicked what the problem was (in my case, the roads needed to show a range of distances... and to do that one needs to run the Euclidean Distance tool before running the Weighted Overlay!). Overall it was a learning experience, and I believe I've gained some very useful knowledge on what it takes to complete a corridor analysis.

Monday, May 25, 2015

Lab 1 - Suitability Analysis

The first official lab of the class involved running suitability analysis models. First we focused on simple Boolean analysis (models that answer yes/no type questions), then we covered weighted overlays (applying a ranking to possible suitability results).

Two weighted overlay results.

Discussion

The above map shows two different weighted overlay results. Both were derived from the same input data (ranked highway, river, slope, soil, and landcover layers), but the overall importance of each dataset was weighted differently using the Weighted Overlay tool. The map on the left shows the model results if all criteria are given the same weight (in this case, each criterion weighted 20% out of 100), and the map on the right shows the results where some criteria (such as slope at 40%) are given more weight than others (such as roads at 10%).

As you can see, in the variable-weight scenario on the right the resulting values were not enough to fill all 5 categories. The category range means that areas classed as 1 are least suited (in terms of the overall criteria and their given weights) and areas classed as 5 are most suited. Under the alternative scenario the best available locations rank as only averagely suitable.

Quite a bit of data processing went into producing the above maps. They show raster data, not vector data - so the layers that began as shapefiles were converted using the Euclidean Distance tool, then reclassified into the desired number of classes/rankings. The layers that were already rasters were reclassified in the same way.
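As a rough illustration of that preprocessing, the sketch below builds distance surfaces, reclassifies them, and combines them as a raster-algebra weighted sum rather than with the Weighted Overlay tool itself (which takes a remap table). Only three of the five inputs are shown, and the layer names, break values, and weights are placeholders:

import arcpy
from arcpy.sa import EucDistance, Reclassify, RemapRange, Raster, Int

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\lab1\lab1.gdb"  # hypothetical workspace
arcpy.env.extent = "study_area"                # hypothetical analysis extent

# Vector inputs become distance surfaces, then 1-5 suitability ranks.
ranks = {}
for name in ("highways", "rivers"):
    dist = EucDistance(name)
    ranks[name] = Reclassify(dist, "VALUE",
                             RemapRange([[0, 500, 5],
                                         [500, 2000, 3],
                                         [2000, 100000, 1]]))

slope_rank = Raster("slope_rank")  # an already-ranked raster input

# Equal weights vs. variable weights, done as a simple weighted sum.
equal = Int(0.33 * ranks["highways"] + 0.33 * ranks["rivers"] + 0.34 * slope_rank)
variable = Int(0.10 * ranks["highways"] + 0.30 * ranks["rivers"] + 0.60 * slope_rank)
equal.save("suitability_equal")
variable.save("suitability_variable")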

A benefit of the weighted overlay over simple Boolean analysis is that one can see the shades of gray within the data... although as shown on the map above, it does matter how you initially class and rank your data. The results of a Boolean analysis are very easy to interpret but weighted overlays are a whole other beast - the above results are a bit trickier to understand, and a solid methodology is required to make sense of it all.