Have you ever wanted to be a Crime Stopper? While you might not reach that milestone with this module, you can certainly become a better crime analyzer! Analyzing crime through spatial correlation and heat mapping is the name of the game in this module.
The overall goals of the module involved gaining familiarity with GIS analysis tools and processes that can help in crime analysis. These allow us to convey and illustrate crime rates, and to derive spatial patterns based on socio-economic characteristics. The same data set was used in three different processes to derive the outputs below.
Specifically, a grid cell analysis, a kernel density analysis, and an Anselin Local Moran's I analysis were all performed with 2017 homicide data for the greater Chicago area. The outputs below are not fully finished maps that would otherwise incorporate our traditional map elements, such as an actual title, legend, scale bar, north arrow, and other potentially enhancing information. They are designed to showcase the same data in three different ways. All three highlight spatial clustering in the homicide data, or where the data suggest the highest rates or prevalence occurred in the subject year. A brief rundown of each is below.
Grid Cell Method:
Happily, the ½ mile by ½ mile grid feature was provided for the study area. This grid was spatially joined with the homicide point feature for 2017. From there, only grid cells that actually had a homicide occurrence were desired, so they were selected by attribute. Of those cells with homicides, this study called for focusing on the top 20%, which resulted in 62 individual cells. That exported feature class was then dissolved into a single feature. This was for visual presentation, not statistical relevance.
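For readers who like to script these steps, below is a minimal arcpy sketch of the grid cell workflow. The workspace path, layer names, and field names are my own assumptions for illustration, not the actual lab data:

```python
# Hypothetical arcpy sketch of the grid cell method; paths and names assumed.
import arcpy

arcpy.env.workspace = r"C:\GIS\CrimeAnalysis\module.gdb"  # assumed workspace

# 1. Spatially join the 2017 homicide points to the half-mile grid; each
#    grid cell receives a Join_Count of the homicides falling inside it.
arcpy.analysis.SpatialJoin(
    "half_mile_grid", "homicides_2017", "grid_homicide_join",
    "JOIN_ONE_TO_ONE", match_option="CONTAINS",
)

# 2. Keep only cells that actually contain at least one homicide.
arcpy.management.MakeFeatureLayer("grid_homicide_join", "joined_lyr")
arcpy.management.SelectLayerByAttribute("joined_lyr", "NEW_SELECTION", "Join_Count > 0")

# 3. Find the Join_Count value that marks the top 20% of those cells
#    (ties at the cutoff may pull in a few extra cells).
counts = sorted(
    (row[0] for row in arcpy.da.SearchCursor("joined_lyr", ["Join_Count"])),
    reverse=True,
)
threshold = counts[max(int(len(counts) * 0.20) - 1, 0)]

# 4. Select the top-20% cells, export them, and dissolve into one feature.
arcpy.management.SelectLayerByAttribute(
    "joined_lyr", "NEW_SELECTION", f"Join_Count >= {threshold}"
)
arcpy.management.CopyFeatures("joined_lyr", "grid_top20")
arcpy.management.Dissolve("grid_top20", "grid_top20_dissolved")
```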
Kernel Density Method:
The Kernel Density tool takes a point feature class and transforms it into a raster output using a "magnitude-per-unit-area" calculation. For this specific tool, I utilized an output cell size of 100 feet and a search radius of 2,630 feet, or approximately ½ mile, which generates a density-based image.
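As a rough illustration, that step could be scripted with the Spatial Analyst module as sketched below; the workspace and layer names are assumptions, and the linear units are assumed to be feet:

```python
# Minimal kernel density sketch; requires the Spatial Analyst extension.
import arcpy
from arcpy.sa import KernelDensity

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\CrimeAnalysis\module.gdb"  # assumed workspace

# 100 ft cell size and a 2,630 ft (~half mile) search radius, matching the
# parameters described above; units come from the layer's coordinate system.
density = KernelDensity(
    in_features="homicides_2017",
    population_field="NONE",
    cell_size=100,
    search_radius=2630,
)
density.save("homicide_density")
```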
From there, the mean value (2.76) was established, and to highlight the densest areas I used a threshold of three times the mean (6.71). These, then, were the most homicide-prone regions of Chicago for that year, 2017. The image was reclassified using this breakdown into two classes: below the three-times-the-mean threshold and above it. That output was then transformed via the Raster to Polygon tool. Because the output had two values, we only wanted the one that represented areas above the threshold, so a select by attributes process was used to isolate those areas, which were then exported as a standalone feature class for display.
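A sketch of that reclassify-and-extract sequence is below; the raster and output names, and the exact remap boundaries, are assumptions on my part:

```python
# Hedged sketch of reclassification and polygon extraction; names assumed.
import arcpy
from arcpy.sa import Raster, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
density = Raster("homicide_density")
threshold = 6.71  # the three-times-the-mean cutoff reported above
max_val = float(density.maximum)

# Two classes: 1 = below the threshold, 2 = at or above it.
reclassed = Reclassify(
    density, "VALUE",
    RemapRange([[0, threshold, 1], [threshold, max_val, 2]]),
)
reclassed.save("density_reclass")

# Convert to polygons, then keep only the above-threshold class (gridcode 2).
arcpy.conversion.RasterToPolygon("density_reclass", "density_poly", "NO_SIMPLIFY")
arcpy.management.MakeFeatureLayer("density_poly", "poly_lyr")
arcpy.management.SelectLayerByAttribute("poly_lyr", "NEW_SELECTION", "gridcode = 2")
arcpy.management.CopyFeatures("poly_lyr", "density_above_threshold")
```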
Local Moran's I Method:
This process utilizes a normalization of the homicide data by the census tract household data. As in the previous processes, a spatial join was performed between the homicide feature and the census tracts. From there, a new data field was created to calculate the number of homicides per 1,000 households.
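A possible scripted version of the join and rate calculation follows; the tract layer and the HOUSEHOLDS field are hypothetical placeholders for whatever the census data actually provides:

```python
# Sketch of the normalization step; layer and field names are assumptions.
import arcpy

# Join homicide points to census tracts so each tract carries a Join_Count.
arcpy.analysis.SpatialJoin(
    "census_tracts", "homicides_2017", "tracts_homicides",
    "JOIN_ONE_TO_ONE", match_option="CONTAINS",
)

# Homicides per 1,000 households, guarding against empty tracts.
arcpy.management.AddField("tracts_homicides", "HOM_RATE", "DOUBLE")
arcpy.management.CalculateField(
    "tracts_homicides",
    "HOM_RATE",
    "(!Join_Count! / !HOUSEHOLDS!) * 1000 if !HOUSEHOLDS! else 0",
    "PYTHON3",
)
```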
The Anselin Local Moran's I tool was then used to identify statistically significant clustering and outliers. Specifically, we want to identify areas with a high homicide rate in close proximity to other areas with a high homicide rate (High/High, or HH, areas), as opposed to the other combinations: High/Low, Low/Low, and Low/High. Once the High/High areas were identified, they were selected using a SQL query and exported to their own feature class. That class was then likewise dissolved into a single feature.
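Here is an assumed-parameters sketch of the cluster analysis and High/High extraction; I have left the tool at common defaults (inverse distance, row standardization), which may not match the exact settings used in the lab:

```python
# Sketch of Anselin Local Moran's I and HH selection; names/settings assumed.
import arcpy

# Cluster and Outlier Analysis on the homicide rate field.
arcpy.stats.ClustersOutliers(
    "tracts_homicides",   # input features carrying the rate field
    "HOM_RATE",           # homicides per 1,000 households
    "tracts_moransI",     # output feature class
    "INVERSE_DISTANCE",   # conceptualization of spatial relationships
    "EUCLIDEAN_DISTANCE",
    "ROW",                # row standardization
)

# Keep only the statistically significant High-High clusters (COType = 'HH'),
# then dissolve them into a single feature for display.
arcpy.management.MakeFeatureLayer("tracts_moransI", "moran_lyr")
arcpy.management.SelectLayerByAttribute("moran_lyr", "NEW_SELECTION", "COType = 'HH'")
arcpy.management.CopyFeatures("moran_lyr", "hh_clusters")
arcpy.management.Dissolve("hh_clusters", "hh_clusters_dissolved")
```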
Overall, the provided instructions ensured that there weren't too many issues during the processing of this module. Some of the most time-consuming parts were comparing the tables and validating the fields for the various joins, and then selecting the correct inputs for the various SQL queries and attribute selection actions. It is quite interesting how the same data can be aggregated and presented in multiple different ways to draw different conclusions. Thank you.
v/r
Brandon