
Friday, August 9, 2024

M6 - Post 2 - Corridor Analysis

Welcome back to part 2 of this week's discussion on suitability analysis and least-cost pathing. This part picks up with a look at corridor analysis. It builds on the previous part by creating suitability layers, combining them in a weighted overlay, and then building a cost distance model. Following that workflow, the map below shows ideal black bear movement areas between two regions of the Coronado National Forest. Based on the bears' known habitats, various land cover types, and roadways, each layer was given a suitability factor. Those factors were then weighted, with land cover as the primary factor at 60% and roads and elevation at 20% each. A cost-distance corridor was then built from these factors, and the resulting areas are colored by how ideal they are for movement. Underlying the scene is a hillshade and terrain relief generated from a digital elevation model. 
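
To make that workflow concrete, here is a minimal arcpy sketch of the sequence. The layer names are placeholders, and a simple map-algebra weighted sum stands in for the Weighted Overlay tool, so treat it as an illustration rather than the lab's exact steps.

```python
# Minimal sketch of the corridor workflow (placeholder layer names; assumes the
# Spatial Analyst extension is licensed).
import arcpy
from arcpy.sa import CostDistance, Corridor

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\black_bear.gdb"  # assumed workspace

# Suitability layers already reclassified to a common scale (assumed names)
landcover_suit = arcpy.Raster("landcover_suitability")
roads_suit = arcpy.Raster("roads_suitability")
elev_suit = arcpy.Raster("elevation_suitability")

# Weighted suitability: land cover 60%, roads 20%, elevation 20%
suitability = landcover_suit * 0.6 + roads_suit * 0.2 + elev_suit * 0.2

# Cost is the inverse of suitability: highly suitable cells are cheap to cross
cost_surface = 10 - suitability

# Accumulated cost away from each forest patch, then the corridor between them
cost_to_patch1 = CostDistance("coronado_patch_1", cost_surface)
cost_to_patch2 = CostDistance("coronado_patch_2", cost_surface)
bear_corridor = Corridor(cost_to_patch1, cost_to_patch2)
bear_corridor.save("bear_corridor")
```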














The red corridor is the ideal movement corridor and represents only a 1.1 multiplier on the ideal score: the best possible score is identified, and this multiplier is applied to it to generate that movement area. The orange area is the 1.2 multiplier, and the yellow area is the 1.3 multiplier. These wider bands matter because when you extend out to the 1.2 and 1.3 multipliers you start to see secondary corridor bands, like the smaller orange corridor. While the red is ideal, this shows that there may be alternative routes in play. The raw data, however, shows that the entire region between the two closest portions of the Coronado National Forest would be viable; the highlighted areas are just the most viable. 
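
If you were scripting it, the multiplier bands could be pulled out of the corridor raster roughly like this (again with assumed layer names, and assuming raster statistics are available):

```python
# Hedged sketch of slicing the corridor raster into the 1.1x / 1.2x / 1.3x bands
# described above.
import arcpy
from arcpy.sa import Con

arcpy.CheckOutExtension("Spatial")
corridor = arcpy.Raster("bear_corridor")   # accumulated-cost corridor raster (assumed name)
min_cost = float(corridor.minimum)         # the lowest (ideal) corridor value

# 1 = red (<= 1.1x min), 2 = orange (<= 1.2x), 3 = yellow (<= 1.3x), NoData elsewhere
bands = Con(corridor <= min_cost * 1.1, 1,
            Con(corridor <= min_cost * 1.2, 2,
                Con(corridor <= min_cost * 1.3, 3)))
bands.save("corridor_bands")
```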

While certainly a lot of work, with multiple rounds of iteration on the products, this was a worthwhile investment of time to understand how these tools work and build on each other. Thank you.


v/r

Brandon 

Wednesday, August 7, 2024

M6 - Post 1 - Suitability Analysis

This is the first part of a two-post final module for GIS 5100, Applications in GIS. 

This last module combines many of the skills acquired, strengthened, and challenged during this course. Specifically, we are working through a significant amount of raster data manipulation to generate a suitability analysis. Playing the role of a budding GIS analyst for a property developer, my task was to take five factors, transform the data relating to them, and generate a weighted overlay that provides a suitability assessment for the subject area. 

The subject categories are: Land Cover, Soil Type, Slopes, Streams, and Roads. 

Land Cover was already in raster format, but required reclassification to provide favorable weight to agricultural areas and meadows or grasslands. 

The soil analysis had previously been completed as a polygon layer, which required conversion to raster and then adjustment of the suitability values for arability. 

A DEM was provided so that I could transform it into a slope raster and then heavily weight mild slopes. 

For streams, water is a desirable feature, so suitability was scored based on distance from the stream network. 

Roadways are key for accessibility and as such were heavily weighted based on distance from a roadway, out to 1 mile. A sketch of these preparation steps follows below. 
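
Here is a rough arcpy sketch of a few of those preparation steps. The layer names, class breaks, and cell size are illustrative assumptions, not the lab's actual values.

```python
# Rough sketch of preparing three of the factor rasters (Spatial Analyst assumed).
import arcpy
from arcpy.sa import Slope, EucDistance, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\suitability.gdb"  # assumed

# Slope from the DEM, then reclassified so mild slopes score 5 and steep slopes score 1
slope = Slope("dem", "DEGREE")
slope_suit = Reclassify(slope, "VALUE",
                        RemapRange([[0, 2, 5], [2, 6, 4], [6, 12, 3],
                                    [12, 20, 2], [20, 90, 1]]))
slope_suit.save("slope_suitability")

# Roads: closer is better, scored out to 1 mile (5,280 ft), lowest score beyond that
road_dist = EucDistance("roads")
road_suit = Reclassify(road_dist, "VALUE",
                       RemapRange([[0, 1320, 5], [1320, 2640, 4], [2640, 3960, 3],
                                   [3960, 5280, 2], [5280, 999999, 1]]))
road_suit.save("road_suitability")

# Soils: the existing polygon layer converted to raster on an arability field
arcpy.conversion.PolygonToRaster("soils", "ARABILITY", "soils_raster", cellsize=30)
```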

All of these datasets' varying factors were given a value of 1 - 5, with 1 being least suitable and 5 being most suitable. This means that each raster cell was assigned a value on this scale based on the real-world factor its source raster represented. The Weighted Overlay tool then produces a composite score for each cell, based on the weight applied to each factor. 

In the map comparison below, the left pane weights all five factors equally, while the right pane applies the variable weights depicted. 
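
A minimal sketch of the two composites, assuming the five factor rasters already hold their 1 - 5 scores; map algebra is used here in place of the Weighted Overlay tool, and the variable weights are placeholders rather than the ones depicted.

```python
# Equal-weight vs. variable-weight composites (placeholder layer names and weights).
import arcpy

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\suitability.gdb"  # assumed

lc, soil, slope, stream, road = (arcpy.Raster(n) for n in
    ("landcover_suit", "soil_suit", "slope_suit", "stream_suit", "road_suit"))

# Left pane: all five factors weighted equally (20% each)
equal_weighted = (lc + soil + slope + stream + road) * 0.2

# Right pane: an assumed variable weighting (the lab's actual percentages may differ)
variable_weighted = lc * 0.3 + soil * 0.2 + slope * 0.3 + stream * 0.1 + road * 0.1

equal_weighted.save("suitability_equal")
variable_weighted.save("suitability_variable")
```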











The biggest takeaway is that by weighting the factors differently you can vastly change the amount of suitable or unsuitable area you are working with. Also remember that the suitability scores created by the Weighted Overlay process must end up as whole integers. Normal rounding rules apply to each cell's value: cells scoring 4.29 and 3.75 will both end up as a 4, and so on. 

Now stay tuned for part 2 coming up next. 

Saturday, August 3, 2024

M5 - Damage Assessment

This module involved a holistic look at 2012's Hurricane Sandy, from path to shore, and a damage assessment of some of the aftermath. It starts with translating an Excel file containing latitude/longitude, strength, wind speed, and time data for the hurricane across its week-long existence. The storm's course was translated from data rows into a point feature class, and the points were then converted into a line. The culmination of that transformation is below. 
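
In arcpy, that transformation boils down to two tool calls, sketched here assuming the spreadsheet was exported to a CSV with Longitude, Latitude, and DateTime columns (all of those names and paths are assumptions):

```python
# Track points from a lat/long table, then a single track line.
import arcpy

arcpy.env.workspace = r"C:\data\sandy.gdb"  # assumed output workspace

# Lat/long rows become a point feature class in WGS84
arcpy.management.XYTableToPoint(r"C:\data\sandy_track.csv", "sandy_points",
                                x_field="Longitude", y_field="Latitude",
                                coordinate_system=arcpy.SpatialReference(4326))

# Connect the points in time order (no line field; sort by the DateTime column)
arcpy.management.PointsToLine("sandy_points", "sandy_track_line", "", "DateTime")
```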


After the track was established, we switched our focus to the damage itself. For this assessment we used before and after imagery of the study area. With the study area identified, I digitized a point for each structure and built out an attribute table backed by predefined attribute domains. 
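
A hedged example of what those predefined domains could look like in arcpy; the geodatabase path, field name, and damage codes are assumptions for illustration.

```python
# A coded-value domain keeps the damage field limited to valid categories.
import arcpy

gdb = r"C:\data\damage_assessment.gdb"  # assumed
arcpy.management.CreateDomain(gdb, "DamageLevel", "Structure damage category",
                              "SHORT", "CODED")
for code, label in [(0, "No damage"), (1, "Affected"), (2, "Minor damage"),
                    (3, "Major damage"), (4, "Destroyed")]:
    arcpy.management.AddCodedValueToDomain(gdb, "DamageLevel", code, label)

# Attach the domain to the damage field on the digitized structure points
arcpy.management.AssignDomainToField(gdb + r"\structures", "damage", "DamageLevel")
```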


Above is a look at the study area in the post-hurricane scene. While it's not a full map with labeling, the red and black triangles indicate total destruction, red highlights major damage, orange marks minor structural damage, and yellow represents affected structures. Nearly everything here is affected in some way, but several structures appear intact from this view; those are the green circles.   

From there, part of the analysis turned to damage rates in 100-meter zones. This allows us to extrapolate damage predictions for other areas, though there are several sources of variability, and any given adjacent area may see more or less destruction for a multitude of reasons. 


In the image above, the line in the center of the buffer is the baseline. It runs adjacent to the study area and provides a visual depiction of the 100 m bands the study-area houses fall into. 
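
One way that banding could be scripted is with ring buffers from the baseline and a spatial join to count structures per band; layer names and the number of rings below are placeholders.

```python
# 100 m bands from the baseline, then structure counts per band.
import arcpy

arcpy.env.workspace = r"C:\data\damage_assessment.gdb"  # assumed

# Rings stepping away from the baseline drawn parallel to the shore
arcpy.analysis.MultipleRingBuffer("baseline", "coastal_bands", [100, 200, 300], "Meters")

# Join the digitized structure points to the bands; Join_Count gives structures per band
arcpy.analysis.SpatialJoin("coastal_bands", "structures", "bands_with_counts",
                           join_operation="JOIN_ONE_TO_ONE", match_option="CONTAINS")
```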

One of the other aspects of the module was to explore external GIS tools, like survey123.arcgis.com, which allows for custom survey creation that responders or local citizens can use to submit information. UWF members can view an example here: https://arcg.is/1CeafO

This comprehensive analysis was definitely time-consuming, but it is amazing to see all of the data come together in this way. Thank you.


v/r

Brandon


















Saturday, July 27, 2024

M4 - Flood Analysis

This week involves coastal flood analysis. Both of the maps below look at damage from severe storms or storm surge. The first looks directly at pre- and post-Hurricane Sandy elevation data from 2012 in New Jersey. That analysis uses change detection to show where damage occurred and where debris accumulation or shoreline accretion is taking place. The second map is of Naples, Florida, and asks which properties would be impacted if there were a 1-meter storm surge. The crux is that two different elevation models are compared to take the analysis a step further. All of this helps build a better understanding of coastal flood assessments and how elevation models can be used to delineate coastal flood zones. Numerous raster analyses and modifications were undertaken to process the various LiDAR and DEM data, followed by attribute table manipulation to determine some accuracy statistics between the two elevation models, which is discussed more below. 





















The map above is essentially a hot-and-cold heat map, where hot marks areas of high negative change, for example a location where a building previously stood and is now gone. The opposite is the blue areas, which indicate a positive change in that location; these could be places where debris has accumulated or sand has piled up. The information on the map also discusses some of how it came to be, but in simple terms it is a combination of a before raster and an after raster that isolates the elevation change.
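
In arcpy terms that combination is a single raster subtraction, sketched here with assumed layer names and rasters that already share cell size and alignment.

```python
# Post-storm surface minus pre-storm surface = elevation change.
import arcpy

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\sandy.gdb"  # assumed

pre = arcpy.Raster("nj_elevation_pre_sandy")    # pre-storm surface (assumed name)
post = arcpy.Raster("nj_elevation_post_sandy")  # post-storm surface (assumed name)

# Negative values = elevation loss (destroyed structures, erosion);
# positive values = gain (debris or sand accumulation)
elevation_change = post - pre
elevation_change.save("sandy_elevation_change")
```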

Now onto the storm surge map. 



 








The map above compares a USGS DEM derived from traditional photogrammetry against a higher-resolution LiDAR-derived DEM. Each dataset was transformed to show only the areas where it predicts a 1-meter surge impact. The LiDAR layer sits over the USGS layer, but each has areas where the other is not a factor. Because of the scale of the Naples and Marco Island scene, I wanted to provide a better look at how the two data layers do or do not overlap, so I included an equally sized inset of Naples. There you can see representative examples of each impact type, from buildings not impacted, to those represented in only one DEM dataset, to those represented in both. In this case the most accurate representation of impacted buildings would be those shown in red for "both" and those in blue for LiDAR only; together these are the buildings most likely to be impacted. The orange USGS-only buildings would likely not be impacted, as that dataset was coarser when analyzed. 
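
A sketch of how that comparison could be assembled, assuming the two DEMs share vertical units and using placeholder names and an assumed coding scheme:

```python
# Reduce each DEM to its 1 m surge extent, then combine the two extents.
import arcpy
from arcpy.sa import Con, IsNull

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\naples.gdb"  # assumed

usgs_dem = arcpy.Raster("naples_usgs_dem")
lidar_dem = arcpy.Raster("naples_lidar_dem")

# 1 where the surface sits at or below the 1 m surge level, NoData elsewhere
usgs_surge = Con(usgs_dem <= 1, 1)
lidar_surge = Con(lidar_dem <= 1, 1)

# Combine: 3 = flooded in both datasets, 2 = LiDAR only, 1 = USGS only, 0 = neither
usgs_flag = Con(IsNull(usgs_surge), 0, 1)
lidar_flag = Con(IsNull(lidar_surge), 0, 2)
surge_compare = usgs_flag + lidar_flag
surge_compare.save("surge_comparison")
```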

These are some excellent tools for determining flooded areas and impacted facilities from elevation data. Thank you. 


v/r

Brandon 

Friday, July 19, 2024

M3 - Visibility Analysis and ArcGIS Online

    This week took us to ESRI directly, utilizing ArcGIS Online and four different ESRI-hosted training sessions. The theme? Visibility analysis. This week carries forward last week's look at LiDAR by continuing to use similar products, working with elevation layers and overlapping features of varying heights, shapes, sizes, and geometry types (points, lines, polygons) to produce different portrayals of 3D information. The modules themselves were:  

  • Introduction to 3D Visualization 
  • Performing Line of Sight Analysis 
  • Performing Viewshed Analysis in ArcGIS Pro
  • Sharing 3D Content Using Scene Layer Packages 

    These modules all served to highlight how helpful 3D data and visuals can be in identifying patterns not seen in 2D. They provide a new perspective on vertical content and add an extra sense of realism with the ability to navigate and explore a manipulable 3D environment. 

    One of the key takeaways was understanding the difference between a local scene and a global scene. The choice typically revolves around the scale of the information you are working with, but more explicitly around whether you need to convey real-world perspective or real-world context. One key difference is whether the curvature of the earth is a factor in your presentation. 

    We continued to work with LAS data, DEMs, and other forms of elevation layers, but also with z-values, which provide the third dimension for points, lines, and polygons. 

    For points, you could add a height extrusion, such as showing how tall trees or lamp posts are. For lines, you could establish a standard height above ground for a fence line or create a uniform elevation boundary. Polygons with z-information gain a new dimension when the shapes are displayed, going from a square or circle in 2D to a full 3D building structure. 

    Other analyses can then be done with a fully extruded 3D scene. Line of sight and viewshed analysis were a big part of this week's training. These revolve around constructing sight lines and then building lines of sight. What's the difference, you ask? Constructing sight lines involves taking an observation point with a known elevation and a target point with a known elevation and generating a line between the two. Then the line of sight utility is used to determine whether there are any obstacles between the observer and the target. Buildings, terrain changes, trees, and other foliage features can all block line of sight. A viewshed takes this a step further by establishing everything that is in view given the observer's elevation and field-of-view parameters.
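
The 3D Analyst and Spatial Analyst tools behind that sequence might be chained roughly as follows; the observer, target, and surface layer names are assumptions.

```python
# Sight lines -> line of sight -> viewshed (placeholder layer names).
import arcpy

arcpy.CheckOutExtension("3D")
arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\visibility.gdb"  # assumed

# Construct straight sight lines from each observer point to each target point
arcpy.ddd.ConstructSightLines("observer_points", "target_points", "sight_lines")

# Test each sight line against the surface for obstructions
# (the fourth argument captures the obstruction points)
arcpy.ddd.LineOfSight("elevation_surface", "sight_lines", "los_results",
                      "obstruction_points")

# A viewshed generalizes this: every cell visible from the observer locations
visible = arcpy.sa.Viewshed("elevation_surface", "observer_points")
visible.save("observer_viewshed")
```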





















    To take it a step further and apply it to the real world, look at the news this week: there are all sorts of graphics being modeled and analyzed after former President Trump was shot at. Building models, sight-line distances, camera vantage points, obstruction analysis. What is going on in the real world this week is the exact substance of this module. 

    Real-world applications aside, this module culminated in creating a shareable scene layer package. An example of the type of deliverable generated for this is below. 



















    Overall, these are all hugely relevant skills for GIS applications. They allow you to explore your data in more depth and provide much more immersive presentations. On to the next week. 


V/r


Brandon



Sunday, July 14, 2024

M2 - Biomass Density Analysis

This is the first of two weeks working with Light Detection and Ranging (LiDAR). This week we are working with data acquired from the Virginia Geographic Information Network (VGIN). A LiDAR point cloud was acquired for one of the park and valley areas in the Shenandoah National Park.

From this single point cloud, several different products and transformations were made to derive the biomass density map below. The point cloud itself (seen in the second image) is a 3D feature layer, as a height is attached to each point in the cloud. The primary transformation involved deriving ground and surface data to generate a Digital Elevation Model (DEM) and a Digital Surface Model (DSM).

Interestingly, there is quite a sequence of tool use to generate these deliverables.
- LAS to Multipoint > Point to Raster > Is Null > Con > Plus > Float > Divide

Note that each step in this sequence is either transforming the data type, as in LAS to Multipoint or Point to Raster, or adjusting the cell values, as in the remainder of the chain. The Divide tool is different: it combines the ground and surface data, which produces our final output below. 
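
Here is a hedged arcpy reconstruction of that chain. The point spacing, class codes, cell size, and the exact Con/Plus logic are assumptions about the lab rather than a verified recipe; the idea is surface (vegetation) returns divided by total returns to produce a 0 - 1 density value.

```python
# LAS to Multipoint > Point to Raster > Is Null > Con > Plus > Float > Divide
import arcpy
from arcpy.sa import IsNull, Con, Plus, Float, Divide

arcpy.CheckOutExtension("3D")
arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\shenandoah.gdb"  # assumed

# LAS to Multipoint: ground returns (class 2) and vegetation returns as separate features
arcpy.ddd.LASToMultipoint(r"C:\data\valley.las", "ground_pts", 1.5, [2])
arcpy.ddd.LASToMultipoint(r"C:\data\valley.las", "surface_pts", 1.5, [3, 4, 5])

# Point to Raster: count of returns per cell for each set
arcpy.conversion.PointToRaster("ground_pts", "OBJECTID", "ground_cnt", "COUNT", "", 10)
arcpy.conversion.PointToRaster("surface_pts", "OBJECTID", "surf_cnt", "COUNT", "", 10)

# Is Null / Con: replace NoData cells with 0 so every cell takes part in the math
ground = Con(IsNull("ground_cnt"), 0, "ground_cnt")
surface = Con(IsNull("surf_cnt"), 0, "surf_cnt")

# Plus / Float / Divide: surface returns over total returns gives the 0 - 1 density
total = Plus(ground, surface)
density = Divide(Float(surface), Float(total))
density.save("biomass_density")
```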











The biomass density map above shows the cumulative height by pixel for the entire scene. The DSM and DEM rasters have been combined to give each cell a 0 – 1 value, so higher values show denser vegetation and lower values show shorter or less dense areas. This is helpful to foresters because it can indicate the areas of highest and densest brush. From the image you can see that these areas follow the contours of the valley in the north and northeast portion of the scene. The scene can also highlight the difference between lower scrub and tall trees, or between open plains and tree thickets. 














As described in the map above, you can see the LiDAR point cloud, which was then transformed into the raster-based DEM on the left. While all of the images above show the exact same area, they are transformations or translations of this point cloud. 

This was a lab with significant tool usage, but it is interesting to see how raw data can be transformed into a usable product. Thank you.


v/r

Brandon

Thursday, July 4, 2024

M1 - Crime Analysis

    Have you ever wanted to be a Crime Stopper? You might not reach that milestone with this module, but you can certainly become a better crime analyst! Analyzing crime through spatial correlation and heat mapping is the name of the game in this module. 

    The overall goals of the module involved gaining familiarity with GIS analysis tools and processes that can help in crime analysis. These allow us to convey and illustrate crime rates, and help derive spatial patterns based on socio-economic characteristics. The same data set was used in three different processes to derive the outputs below. 

    Specifically, a grid cell analysis, a kernel density analysis, and an Anselin Local Moran's I analysis were all performed with 2017 homicide data for the greater Chicago area. The maps below are not fully finished products that would otherwise incorporate our traditional map elements, such as a title, legend, scale bar, north arrow, and other enhancing information. They are designed to showcase the same data in three different ways. They all highlight spatial clustering in the homicide data, or where the data suggest the highest rates or prevalence occurred in the subject year. A brief rundown of each is below.

Grid Cell Method:

    Happily, a ½-mile by ½-mile grid feature was provided for the study area. This grid was spatially joined with the 2017 homicide point feature. From there, only grid cells that actually had a homicide occurrence were desired, so they were selected by attribute.

    Of those cells with homicides, this study called for focusing on the top 20%, which resulted in 62 individual cells. That exported feature class was then dissolved into a single feature. This was for visual presentation, not statistical relevance. 
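
Scripted, those grid-cell steps might look roughly like this; layer names are assumptions, and the spatial join's Join_Count field supplies the per-cell homicide count.

```python
# Grid-cell method sketch: join, select cells with homicides, dissolve the top set.
import arcpy

arcpy.env.workspace = r"C:\data\chicago_crime.gdb"  # assumed

# Join homicide points to the half-mile grid; Join_Count becomes homicides per cell
arcpy.analysis.SpatialJoin("half_mile_grid", "homicides_2017", "grid_joined",
                           join_operation="JOIN_ONE_TO_ONE", match_option="CONTAINS")

# Keep only cells that contain at least one homicide
arcpy.management.MakeFeatureLayer("grid_joined", "grid_lyr")
arcpy.management.SelectLayerByAttribute("grid_lyr", "NEW_SELECTION", "Join_Count > 0")
arcpy.management.CopyFeatures("grid_lyr", "grid_with_homicides")

# The top 20% of those cells (assumed exported to "top20_cells" by sorting on
# Join_Count) would then be dissolved into a single display feature
arcpy.management.Dissolve("top20_cells", "top20_dissolved")
```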




Kernel Density Method:

    The Kernel Density tool takes a point feature class and transforms it into a raster output using a “magnitude-per-unit-area” calculation. For this specific run, I used an output cell size of 100 feet and a search radius of 2,630 feet (approximately ½ mile), which generates a density-based image.

    From there, the mean value (2.76) was established, and to highlight the densest areas I used a threshold of three times the mean (6.71). These were the most homicide-prone regions of Chicago for that year, 2017. The image was then reclassified into two classes, below three times the mean and above it, and the result was converted with the Raster to Polygon tool. Because the output had two values and only the areas above three times the mean were wanted, a select-by-attributes step was used to pull out those areas, which were exported as a standalone feature for display. 
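
A sketch of that kernel density sequence, using the cell size, search radius, and three-times-the-mean threshold described above; the layer names and remap structure are assumptions.

```python
# Kernel density method sketch: density surface, threshold, polygons, selection.
import arcpy
from arcpy.sa import KernelDensity, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\chicago_crime.gdb"  # assumed

# Density surface from the 2017 homicide points (no population field)
density = KernelDensity("homicides_2017", "NONE", 100, 2630)

# Split into two classes at three times the mean density
threshold = float(density.mean) * 3
hotspots = Reclassify(density, "VALUE",
                      RemapRange([[0, threshold, 1],
                                  [threshold, float(density.maximum), 2]]))

# Convert to polygons and keep only the above-threshold class (gridcode = 2)
arcpy.conversion.RasterToPolygon(hotspots, "kd_polygons")
arcpy.management.MakeFeatureLayer("kd_polygons", "kd_lyr")
arcpy.management.SelectLayerByAttribute("kd_lyr", "NEW_SELECTION", "gridcode = 2")
arcpy.management.CopyFeatures("kd_lyr", "kd_hotspots")
```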




Local Moran’s I:

    This process normalizes the homicide data against census tract housing data. As in the previous processes, a spatial join was performed between the homicide feature and the census tracts. From there, a new field was created to calculate the number of homicides per 1,000 households.

    The Anselin Local Moran's I tool was then used to identify statistically significant clusters and outliers. Specifically, we want to identify areas with a high homicide rate in close proximity to other areas with a high homicide rate (HH areas), as opposed to the high/low, low/low, and low/high combinations. Once the High/High areas were identified, they were selected using a SQL query and exported to their own feature class, which was then likewise dissolved into a single feature. 
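
And a sketch of the Local Moran's I steps, with assumed field and layer names; the tool writes a COType field whose 'HH' value marks the High-High clusters.

```python
# Local Moran's I sketch: rate field, cluster/outlier analysis, HH selection, dissolve.
import arcpy

arcpy.env.workspace = r"C:\data\chicago_crime.gdb"  # assumed

# Homicides per 1,000 households, calculated on the joined census tracts
# (Join_Count and HOUSEHOLDS are assumed field names)
arcpy.management.AddField("tracts_joined", "HOM_RATE", "DOUBLE")
arcpy.management.CalculateField("tracts_joined", "HOM_RATE",
                                "!Join_Count! / !HOUSEHOLDS! * 1000", "PYTHON3")

# Anselin Local Moran's I (Cluster and Outlier Analysis)
arcpy.stats.ClustersOutliers("tracts_joined", "HOM_RATE", "morans_output",
                             "INVERSE_DISTANCE", "EUCLIDEAN_DISTANCE", "NONE")

# Keep only the statistically significant High-High clusters, then dissolve them
arcpy.management.MakeFeatureLayer("morans_output", "hh_lyr")
arcpy.management.SelectLayerByAttribute("hh_lyr", "NEW_SELECTION", "COType = 'HH'")
arcpy.management.CopyFeatures("hh_lyr", "hh_clusters")
arcpy.management.Dissolve("hh_clusters", "hh_dissolved")
```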


    Overall, the provided instructions ensured that there weren't too many issues while working through this module. Some of the most time-consuming parts were comparing the tables and validating the fields for the various joins, and then selecting the correct inputs for the various SQL queries and attribute selections. It is quite interesting how the same data can be aggregated and presented in multiple ways to draw different conclusions. Thank you.


v/r

Brandon













Saturday, June 29, 2024

Intro to GIS Applications

 

Hello everyone! 

Welcome to the start of another course, GIS Applications, Master's edition. As a recap, I am Brandon Deusenberry, and it's great to work alongside a strong group of budding GIS professionals as we tackle a new set of skill challenges. 

For the basics, I am still early in the Master's program, with a projected graduation of Fall '26. It can't get here soon enough! I am a full-time active duty Air Force member; this July marks my 21st anniversary in the service, and it absolutely does not feel like it has been that long. I am a full motion video, remote sensing camera operator by trade, but nowadays my work time is more dedicated to leading and mentoring the enlisted members in my charge. I have a family with 4 crazy boys, each a different challenge, but all awesome. 

Below is a link to my story map, which has a bit about my history and some places I have been, as well as some of our recent summertime adventures. Between full-time work, family time, and school, I don't have as much free time as I would like, but I still try to get together with my DnD group and/or play video games with friends when able. 

Hopefully, about the time I am done with the Master's program, I will be wrapping up my military career. From there I will combine both worlds' skills and move forward into a new one. While I can zone out and enjoy working on the technical aspects of GIS, I think I would much prefer a position that lets me help organize and lead a team of GIS professionals. But we will see what happens. 

This intro also calls for some adjectives that I think describe me, so here are those:
silly, sarcastic, gruff, calm, thoughtful, useful

But I'll let you work with me and figure out some of your own, or if those are accurate. 

My Story Map: 
https://arcg.is/00vnWC0
