Saturday, July 27, 2024

M4 - Flood Analysis

This week involves coastal flood analysis. Both of the maps below look at damage from severe storms or storm surge. The first compares pre- and post-Hurricane Sandy elevation data from 2012 in New Jersey, using change detection to show where damage occurred and where debris accumulation or shoreline accretion is taking place. The second map covers Naples, Florida, and asks which properties would be impacted by a 1 meter storm surge. The crux is that two different elevation models are compared to take the analysis a step further. All of this helps build a better understanding of coastal flood assessments and of how elevation models can be used to delineate coastal flood zones. Numerous raster analyses and modifications were undertaken to process the various LiDAR and DEM data, followed by attribute table manipulation to determine some accuracy statistics between the two elevation models, which are discussed more below.





















The map above is essentially a hot-and-cold heat map, where hot areas show large negative change, for example a location where a building previously stood and is now gone. The opposite are the blue areas, which indicate positive change at that location, such as spots where debris has accumulated or sand has piled up. The information on the map describes some of how it was produced, but in simple terms it is the combination of a pre-storm raster with a post-storm raster, isolating the elevation change.
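For anyone curious what that combination looks like in practice, here is a minimal sketch using arcpy map algebra. It assumes a Spatial Analyst license, and the workspace and raster names are hypothetical placeholders rather than the actual lab data.

import arcpy
from arcpy.sa import Raster

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\sandy"            # hypothetical workspace

pre_dem = Raster("pre_sandy_dem.tif")             # elevation before the storm
post_dem = Raster("post_sandy_dem.tif")           # elevation after the storm

# Post minus pre: negative cells mark elevation lost (damage, erosion),
# positive cells mark elevation gained (debris, sand accretion).
change = post_dem - pre_dem
change.save("elevation_change.tif")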

Now, on to the storm surge map.



 








The map above compares a USGS DEM derived from traditional photogrammetry against one built from a higher-resolution LiDAR dataset. Each dataset was transformed to show only the areas where it predicts a 1 meter surge impact. The LiDAR layer sits over the USGS layer, but each has areas the other does not flag. Because of the scale of the Naples and Marco Island scene, I wanted to provide a better look at how the two data layers do or do not overlap, so I included an equally sized inset of Naples. There you can see representative examples of each impact type: buildings not impacted, buildings flagged by only one DEM dataset, and buildings flagged by both. The truest representation of impacted buildings would be those shown in red for "both" and in blue for LiDAR only; together those are the most likely to be impacted. The orange USGS-only buildings would likely not be impacted, since that dataset was coarser when analyzed.
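As a rough illustration of that comparison, the sketch below flags the cells in each DEM that sit at or below the 1 meter surge level and combines the two results into a single neither/USGS-only/LiDAR-only/both raster. It assumes both DEMs are in meters on the same vertical datum, and the workspace and file names are hypothetical.

import arcpy
from arcpy.sa import Raster, Con

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\naples"           # hypothetical workspace

usgs_dem = Raster("usgs_photogrammetric_dem.tif")
lidar_dem = Raster("lidar_dem.tif")

# 1 where the surface sits at or below the 1 m surge level, 0 elsewhere.
usgs_flood = Con(usgs_dem <= 1.0, 1, 0)
lidar_flood = Con(lidar_dem <= 1.0, 1, 0)

# Combine the two: 0 = neither, 1 = USGS only, 2 = LiDAR only, 3 = both.
comparison = usgs_flood + (2 * lidar_flood)
comparison.save("surge_comparison.tif")

Building footprints can then be spatially joined or intersected with each class to tag the impacted structures shown on the map.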

These are excellent tools for determining flooded areas and impacted facilities from elevation data. Thank you.


v/r

Brandon 

Friday, July 19, 2024

M3 - Visibility Analysis and ArcGIS Online

    This week took us directly to ESRI, utilizing ArcGIS Online and four different ESRI-hosted training sessions. The theme? Visibility analysis. This week carries forward last week's look at LiDAR by continuing to use similar products, working with elevation layers and overlapping features of varying heights, shapes, sizes, and geometry types (points, lines, polygons) to explore different portrayals of 3D information. The modules themselves were:

  • Introduction to 3D Visualization 
  • Performing Line of Sight Analysis 
  • Performing Viewshed Analysis in ArcGIS Pro
  • Sharing 3D Content Using Scene Layer Packages 

    These modules all served to highlight how helpful 3D data and visuals can be in identifying patterns not seen in 2D. They provide a new perspective on vertical content and an extra sense of realism, with the ability to navigate and explore a manipulable 3D environment.

    One of the key takeaways was understanding the difference between a local scene and a global scene. The choice typically revolves around the scale of the information you are working with, and more explicitly around whether you need to convey real-world perspective or real-world context. One key difference is whether the curvature of the earth is a factor in your presentation.

    We continued to work with LAS data, DEMs, and other forms of elevation layers, but also with z-values, which provide the third dimension for points, lines, and polygons.

    For points, you could add a height extrusion, such as showing how tall trees or lamp posts are. For lines, you could establish a standard height above ground for a fence line, or create a uniform elevation boundary. Polygons with z-information gain a new dimension when displayed, going from a square or circle in 2D to a full 3D building structure.

    Other analyses can then be done with a fully extruded 3D scene. Line of sight and viewshed analysis was a big part of this week's training. These revolved around constructing sight lines, then building lines of sight. What's the difference, you ask? Constructing sight lines involves generating a line between an observation point with known elevation and a target point with known elevation. The line of sight utility is then used to determine whether there are any obstacles between the observer and the target: buildings, terrain changes, trees, foliage, and other features can all block the line of sight. A viewshed takes this a step further by establishing everything that is in view, given elevation and field-of-view parameters.
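A minimal arcpy sketch of that sequence is below, assuming 3D Analyst and Spatial Analyst licenses; the observer, target, and surface dataset names are hypothetical placeholders.

import arcpy
from arcpy.sa import Viewshed

arcpy.CheckOutExtension("3D")
arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\visibility.gdb"   # hypothetical workspace

# 1. Construct sight lines: straight lines from each observer to each target.
arcpy.ddd.ConstructSightLines("observer_points", "target_points", "sight_lines")

# 2. Line of sight: test each sight line against the surface to see whether
#    anything blocks the view between observer and target.
arcpy.ddd.LineOfSight("elevation_surface", "sight_lines", "los_results")

# 3. Viewshed: generalize to every raster cell visible from the observers.
visible = Viewshed("elevation_surface", "observer_points")
visible.save("viewshed")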





















    To take it a step further and apply it to the real world, look at the news this week: there are all sorts of graphics being modeled and analyzed after former President Trump was shot at. Building models, sight line distances, camera vantage points, obstruction analysis. What is going on in the real world this week is the exact substance of this module.

    Beyond the ongoing real-world applications, this module culminated in creating a shareable scene layer package. An example of the type of deliverable generated for this is below.



















    Overall, these are hugely relevant skills for GIS applications. They allow you to explore your data in greater depth and deliver much more immersive presentations. On to the next week.


V/r


Brandon



Sunday, July 14, 2024

M2 - Biomass Density Analysis

This is the first of two weeks working with Light Detection and Ranging (LiDAR). This week we are working with data acquired from the Virginia Geographic Information Network (VGIN). A LiDAR point cloud was acquired for one of the park and valley areas in Shenandoah National Park.

From this single point cloud, several different products and transformations were made to derive the biomass density map below. The point cloud itself (seen in the second image) is a 3D feature layer, as each point in the cloud carries a height. The primary transformation involved deriving ground and surface data to generate a Digital Elevation Model (DEM) and a Digital Surface Model (DSM).

Interestingly, there is quite a sequence of tool use to generate these deliverables.
- LAS to Multipoint > Point to Raster > Is Null > Con > Plus > Float > Divide

Note that each step in this sequence is either transforming the data type, as in LAS to Multipoint or Point to Raster, or adjusting cell values, as in the rest of the chain. The Divide tool is the exception: it combines the ground and surface data to produce the final output below.
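A sketch of the raster-math tail of that chain is below, assuming a Spatial Analyst license. The two input rasters stand in for the outputs of the LAS to Multipoint and Point to Raster steps; the names and the zero-fill choice are hypothetical, and this shows just one plausible way the Plus, Float, and Divide steps land each cell on a 0-1 scale.

import arcpy
from arcpy.sa import Raster, IsNull, Con, Plus, Float, Divide

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\shenandoah"       # hypothetical workspace

ground = Raster("ground_raster")                  # from ground-return points
surface = Raster("surface_raster")                # from canopy/all-return points

# IsNull + Con: replace empty cells with 0 so both rasters cover every cell.
ground_filled = Con(IsNull(ground), 0, ground)
surface_filled = Con(IsNull(surface), 0, surface)

# Plus, Float, Divide: ratio the surface values against the combined total,
# which keeps each cell between 0 and 1.
density = Divide(Float(surface_filled), Float(Plus(ground_filled, surface_filled)))
density.save("biomass_density")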











The biomass density map above shows the cumulative height per pixel for the entire scene. The DSM and DEM have been combined to give each cell a value between 0 and 1, so that higher values indicate denser vegetation and lower values indicate shorter or less dense areas. This is helpful to foresters because it can indicate the areas of highest, densest brush. From the image you can see that these areas follow the contours of the valley in the north/northeast portion of the scene. The map also highlights the difference between low scrub and tall trees, and between open plains and tree thickets.














As described in the map above, you can see the LiDAR point cloud, which was then transformed into the raster-based DEM on the left. All of the images above show the exact same area; they are simply different transformations or translations of this point cloud.

This was an involved lab with significant tool usage, but it is interesting to see how raw data can be transformed into a usable product. Thank you.


v/r

Brandon

Thursday, July 4, 2024

M1 - Crime Analysis

    Have you ever wanted to be a Crime Stopper? While you might not reach that milestone with this module, you can certainly become a better crime analyst! Analyzing crime through spatial correlation and heat mapping is the name of the game in this module.

    The overall goals of the module involved gaining familiarity with GIS analysis tools and processes that can help in crime analysis. These allow us to convey and illustrate crime rates, and to derive spatial patterns based on socio-economic characteristics. The same dataset was used in three different processes to derive the outputs below.

    Specifically, a grid cell analysis, a kernel density analysis, and an Anselin Local Moran's I analysis were all performed with 2017 homicide data for the greater Chicago area. The maps below are not fully finished products that would otherwise incorporate our traditional map elements, such as a title, legend, scale bar, north arrow, and other supporting information. They are designed to showcase the same data in three different ways. Each highlights spatial clustering in the homicide data, that is, where the data suggest the highest rates or prevalence occurred in the subject year. A brief rundown of each is below.

Grid Cell Method:

    Happily, the ½ mile by ½ mile grid feature was provided for the study area. This grid was spatially joined with the 2017 homicide point feature. From there, only grid cells that actually had a homicide occurrence were desired, so they were selected by attribute.

    Of the cells with homicides, this study called for focusing on the top 20%, which resulted in 62 individual cells. The exported feature class was then dissolved into a single feature, purely for visual presentation rather than statistical relevance.
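A rough arcpy sketch of that sequence, with hypothetical dataset names and a hypothetical count cutoff standing in for the real top-20% threshold:

import arcpy

arcpy.env.workspace = r"C:\data\chicago.gdb"      # hypothetical workspace

# Spatially join the half-mile grid to the 2017 homicide points so each cell
# carries a Join_Count of the homicides falling inside it.
arcpy.analysis.SpatialJoin("half_mile_grid", "homicides_2017", "grid_join",
                           "JOIN_ONE_TO_ONE", "KEEP_ALL")

# Keep only cells with at least one homicide, then the top 20% by count
# (the actual cutoff value comes from inspecting the attribute table).
arcpy.management.MakeFeatureLayer("grid_join", "grid_lyr")
arcpy.management.SelectLayerByAttribute("grid_lyr", "NEW_SELECTION",
                                        "Join_Count >= 3")   # hypothetical cutoff

# Export the selection and dissolve it into a single feature for display.
arcpy.management.CopyFeatures("grid_lyr", "top_cells")
arcpy.management.Dissolve("top_cells", "top_cells_dissolved")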




Kernel Density Method:

    The Kernel Density tool takes a point feature class and transforms it into a raster output using a "magnitude-per-unit-area" calculation. For this specific run, I used an output cell size of 100 feet and a search radius of 2,630 feet, or approximately ½ mile, which generates a density-based image.

    From there, the mean value (2.76) was established, and to highlight the densest areas I used three times the mean (6.71). These were the most homicide-prone regions of Chicago for that year, 2017. The image was then reclassified into two classes, below three times the mean and above it, and converted with the Raster to Polygon tool. Because the output had two values and we only wanted the one representing areas above three times the mean, a select by attributes step was used to isolate those areas, which were exported as a standalone feature for display.
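A rough sketch of that sequence with arcpy, assuming a Spatial Analyst license; the dataset names and the upper reclassify bound are hypothetical placeholders, and the cutoff is simply typed in from the value reported above:

import arcpy
from arcpy.sa import KernelDensity, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\chicago.gdb"      # hypothetical workspace

# Magnitude-per-unit-area surface: 100 ft cells, ~half-mile search radius.
density = KernelDensity("homicides_2017", "NONE", 100, 2630)
density.save("homicide_density")

# Reclassify around the three-times-mean threshold: 0 = below, 1 = above.
cutoff = 6.71                                     # threshold reported above
hotspots = Reclassify(density, "VALUE",
                      RemapRange([[0, cutoff, 0],
                                  [cutoff, 9999, 1]]))   # 9999 = arbitrary upper bound

# Convert to polygons and keep only the above-threshold class.
arcpy.conversion.RasterToPolygon(hotspots, "density_poly", "NO_SIMPLIFY", "Value")
arcpy.management.MakeFeatureLayer("density_poly", "density_lyr")
arcpy.management.SelectLayerByAttribute("density_lyr", "NEW_SELECTION", "gridcode = 1")
arcpy.management.CopyFeatures("density_lyr", "kernel_hotspots")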




Local Moran’s I:

    This process normalizes the homicide data using census tract housing data. As in the previous processes, a spatial join was performed between the homicide feature and the census tracts. From there, a new data field was created to calculate the number of homicides per 1,000 households.

    The Anselin Local Moran's I tool was then used to identify statistically significant clustering and outliers. Specifically, we want areas with a high homicide rate in close proximity to other areas with a high homicide rate (high-high, or HH, areas), as opposed to the other combinations: high-low, low-low, and low-high. Once the high-high areas were identified, they were selected with a SQL query and exported to their own feature class, which was then likewise dissolved into a single feature.
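A rough sketch of those steps with arcpy; the household field, spatial-relationship settings, and dataset names are hypothetical placeholders rather than the exact lab parameters:

import arcpy

arcpy.env.workspace = r"C:\data\chicago.gdb"      # hypothetical workspace

# Join homicide points to census tracts, then compute a rate per 1,000 households.
arcpy.analysis.SpatialJoin("census_tracts", "homicides_2017", "tracts_join",
                           "JOIN_ONE_TO_ONE", "KEEP_ALL")
arcpy.management.AddField("tracts_join", "HOM_RATE", "DOUBLE")
arcpy.management.CalculateField("tracts_join", "HOM_RATE",
                                "!Join_Count! / !HOUSEHOLDS! * 1000", "PYTHON3")

# Cluster and Outlier Analysis (Anselin Local Moran's I).
arcpy.stats.ClustersOutliers("tracts_join", "HOM_RATE", "morans_out",
                             "INVERSE_DISTANCE", "EUCLIDEAN_DISTANCE", "NONE")

# Keep the statistically significant high-high clusters and dissolve for display.
arcpy.management.MakeFeatureLayer("morans_out", "morans_lyr")
arcpy.management.SelectLayerByAttribute("morans_lyr", "NEW_SELECTION", "COType = 'HH'")
arcpy.management.CopyFeatures("morans_lyr", "hh_clusters")
arcpy.management.Dissolve("hh_clusters", "hh_dissolved")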


    Overall, the provided instructions ensured that there weren't too many issues while working through this module. Some of the most time-consuming parts were comparing the tables and validating the fields for the various joins, and then selecting the correct inputs for the various SQL queries and attribute selections. It is quite interesting how the same data can be aggregated and presented in multiple different ways to draw different conclusions. Thank you.


v/r

Brandon












