24.9.25

Surfaces - TINs and DEMs

This week, I explored two elevation models: Triangulated Irregular Networks (TINs) and Digital Elevation Models (DEMs). Both create 3D terrains but differ in how they're generated and represented. Using ArcGIS Pro, I worked through various elevation data models and analysis techniques. Below is a brief overview of each part with screen captures illustrating key steps and findings.

Part A: Exploring Elevation Points and Vertical Exaggeration

  • I added a TIN to a Local Scene in ArcGIS Pro and set it as the elevation source to visualize the terrain in 3D. Then, I draped a radar image over the TIN. To better see terrain features, I increased the vertical exaggeration to 2.0.

Figure 1: Close-up of the radar image draped over the TIN with vertical exaggeration applied.


Part B: Using a DEM to Develop a Ski Run Suitability Map

  • Using a DEM, I created a ski run suitability map by analyzing key terrain factors for ski planning: elevation, slope, and aspect. Each factor was reclassified and combined with weighted importance, highlighting the best areas for new runs. The 3D visualization (Figure 2) shows how terrain and suitability intersect, helping identify optimal ski zones.
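The reclassify-and-weight step behind the suitability map can be sketched outside ArcGIS with a few lines of NumPy. The small arrays and the weights below are made up for illustration; the lab's actual reclassified layers and weights may differ:

```python
import numpy as np

# Hypothetical reclassified rasters (1 = poor ... 5 = excellent) as tiny arrays.
# In the lab these came from running Reclassify on elevation, slope, and aspect.
elev_class = np.array([[5, 4], [3, 1]])
slope_class = np.array([[4, 5], [2, 1]])
aspect_class = np.array([[3, 4], [5, 2]])

# Assumed importance weights (must sum to 1); the lab's weights may differ.
weights = {"elevation": 0.4, "slope": 0.4, "aspect": 0.2}

# Weighted sum: each cell's suitability score combines the three factors.
suitability = (weights["elevation"] * elev_class
               + weights["slope"] * slope_class
               + weights["aspect"] * aspect_class)
print(suitability)
```

Cells with the highest combined score correspond to the most suitable ski-run areas on the final map.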

Figure 2. Ski run suitability map with terrain exaggerated for clarity.


Part C: Exploring TINs

  • I explored another TIN to better understand terrain characteristics by applying different symbology options: surface elevation, slope, and aspect. Adding triangle edges helped visualize the TIN’s structure and extract detailed values. The final symbology (Figure 3) uses graduated colors for slope, distinct edges, and contours to clearly represent terrain features.

Figure 3: TIN visualization showing slope gradation, edges, and contours.


Part D: Creating and Analyzing TINs

  • I created a TIN using elevation points clipped by a study boundary polygon, then compared contour lines derived from the TIN and from a spline-interpolated DEM. The TIN contours are more angular, reflecting the triangular network, while the DEM contours are smoother due to interpolation. Differences are most noticeable in areas with sparse elevation points, affecting contour accuracy (Figure 4).

Figure 4: Comparison of contour lines from TIN (angular) and DEM (smooth) with elevation points shown.

21.9.25

GIS Job Search & Sustainable Development

This week, I researched a GIS position that stood out both professionally and personally: a Cultural Resources GIS Fellow role with AmeriCorps, through Conservation Legacy. This remote position supports the National Historic Landmark Vulnerability Assessment Project, which uses GIS to identify climate-related risks like flooding and wildfires affecting historic sites. The role involves spatial analysis, updating metadata, and creating outreach content to raise awareness about the program.

What really drew me to this job was how it combines GIS with cultural heritage preservation and public service. I’ve gained experience with many of the skills it requires, including ArcGIS Pro, LIDAR, metadata standards, and technical writing. Unfortunately, the AmeriCorps age eligibility rules mean I no longer qualify, but researching this role helped me clarify what kind of GIS work truly inspires me.

When I explored ESRI’s Sustainable Development industry overview, I noticed a strong connection between their framework and this position. ESRI highlights using GIS to support equitable, resilient communities and track progress toward global goals like climate action and cultural preservation. While this job focuses on historic sites, it reflects those values by blending spatial analysis, impact measurement, community engagement, and long-term resilience planning, all key themes in sustainable development.

This assignment showed me that even if a role isn’t achievable right now, it can still guide my career path. I’m motivated to keep developing my skills in GIS, cultural resource management, and community outreach. The overlap between my personal values and industry trends makes me excited about future opportunities to contribute to meaningful, sustainable change through GIS.

GIS Internship and Community

For my internship, I’m continuing my work at the UWF Archaeology Institute, where GIS is a big part of managing archaeological research. I’m one of three archaeologists at the Institute who regularly use GIS. Most of my tasks involve spatial data analysis and map production.

For the internship project, I’m updating a predictive model used to locate potential archaeological sites. The original model was built by a former staff member, and I’m building on it by incorporating updated environmental and cultural datasets. I’m also working with our GIS Specialist to create standardized metadata templates so our data is easier to organize and reuse in future projects.

To earn credit in the GIS Internship seminar, I’ll be completing the internship track (Group 1), which includes developing technical documentation, metadata, and the updated model as a portfolio piece. 

To start building a GIS community outside of the field of archaeology, I joined the Northwest Florida GIS User Group, which connects GIS professionals in the Florida Panhandle. There are no membership fees, and it's open to students and professionals, which makes it an easy way to stay connected to the local GIS community.

I also subscribed to ESRI's ArcNews to stay current with how GIS is being used in other industries and research.

17.9.25

Assessing Road Segment Data Quality

This week, I assessed the spatial completeness of two road datasets across a county: the federally maintained TIGER Roads and the local Street Centerlines. Completeness was measured by calculating the total length of roads within a grid overlay and comparing the results for each dataset.

At the county-wide scale, TIGER Roads had a greater total length, measuring 11,382.7 km, compared to 10,805.8 km for the Street Centerlines, a difference of 576.9 km. Using the assumption that more road length equals more completeness, TIGER appears more complete overall. However, more road data does not always mean better quality, as local datasets are often more current and detailed.

To evaluate spatial variation in completeness, both road datasets were clipped to the grid extent, then intersected with the grid to break road segments at cell boundaries. After recalculating geometry, road lengths were summarized per grid cell. These summaries were then joined to the grid, and a percentage difference in completeness was calculated using the local Street Centerlines as the reference, following a method similar to Haklay (2010).
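The per-cell comparison boils down to a percent-difference formula with the Street Centerlines length as the reference. A minimal sketch, using hypothetical cell values and field names (the actual fields joined from the summary tables may be named differently):

```python
# Per-cell completeness comparison, assuming the summarized road lengths have
# already been joined back to the grid. Values and field names are made up.
cells = [
    {"cell_id": 1, "tiger_km": 42.0, "streets_km": 40.0},
    {"cell_id": 2, "tiger_km": 10.0, "streets_km": 12.5},
]

for cell in cells:
    ref = cell["streets_km"]  # local Street Centerlines as reference (Haklay 2010)
    diff = cell["tiger_km"] - ref
    # Positive = TIGER more complete in this cell; negative = Streets more complete.
    cell["pct_diff"] = 100.0 * diff / ref if ref else None

print(cells)
```

The sign of `pct_diff` drives the red/blue classification on the choropleth map, and cells with no reference length are flagged rather than divided by zero.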

Across the 297 grid cells:
  • 161 were more complete in the TIGER dataset
  • 134 were more complete in the Street Centerlines
  • 2 had equal road lengths
  • 2 had no road data in either dataset

The choropleth map below illustrates the spatial pattern of these differences. Red areas indicate where TIGER data had more road length, blue areas where county Streets data was more complete, and light gray tones show minimal difference. The 2 cells that do not show, appearing in the same dark gray as the background, are those with no road data for comparison.



The analysis suggests that while TIGER may be more complete overall, local data can provide more detail in specific areas.

14.9.25

Data Quality Standards


This week's lab focused on evaluating the positional accuracy of two Albuquerque road datasets: one from the city and one from StreetMap USA. Using high-resolution 2006 orthophotos as reference, I followed the National Standard for Spatial Data Accuracy (NSSDA) guidelines to assess horizontal accuracy.

I began in ArcGIS Pro by creating a fishnet to divide the study area into four quadrants. I then digitized 20 intersections that were clearly visible on the imagery and present in both road layers. Each point was entered into three feature classes (ABQ test points, StreetMap test points, and reference points), with matching Point IDs. I confirmed that the sample met NSSDA criteria (minimum of 20 points, even quadrant distribution, and spacing greater than 10% of the study area diameter) using the Near tool to check distances.
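The spacing criterion can also be verified with a few lines of Python rather than the Near tool. The sample coordinates and study-area diagonal below are hypothetical:

```python
import math

# NSSDA spacing check: every pair of sample points should be farther apart
# than 10% of the study-area diagonal. Coordinates (in feet) are made up.
points = [(0, 0), (5000, 0), (0, 6000), (5200, 5900)]
diagonal = math.dist((0, 0), (5200, 6000))  # hypothetical study-area diagonal
min_spacing = 0.1 * diagonal

# True only if every pairwise distance exceeds the minimum spacing.
ok = all(math.dist(p, q) > min_spacing
         for i, p in enumerate(points) for q in points[i + 1:])
print(ok)
```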

Figure 1. Sampling locations used to assess road network positional accuracy.

After verifying point distribution, I used the Add XY Coordinates tool and exported all three datasets to Excel for analysis. There, I calculated squared differences, RMSE, and final positional accuracy. The results showed the city streets data had a horizontal accuracy of ±26.65 ft at 95% confidence, while StreetMap USA’s data came in at ±217.90 ft.
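The Excel calculation follows the standard NSSDA formula: the RMSE of the test-versus-reference coordinate differences, multiplied by 1.7308 to get the 95% confidence statistic. A sketch with made-up test/reference coordinate pairs (in feet); the real lab used all 20 digitized intersections:

```python
import math

# Hypothetical (x, y) pairs for test and reference points, in feet.
test = [(100.0, 200.0), (150.0, 260.0)]
ref = [(103.0, 204.0), (149.0, 258.0)]

# Sum of squared x and y differences for each point pair.
sq_errors = [(tx - rx) ** 2 + (ty - ry) ** 2
             for (tx, ty), (rx, ry) in zip(test, ref)]
rmse = math.sqrt(sum(sq_errors) / len(sq_errors))

# NSSDA horizontal accuracy at 95% confidence = 1.7308 * RMSE
accuracy_95 = 1.7308 * rmse
print(round(rmse, 2), round(accuracy_95, 2))
```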

While I initially assumed the city data was more accurate just by visual inspection, this lab showed how to formally test and quantify positional accuracy using industry standards. It reinforced the importance of knowing where your data comes from and how accurate it really is when doing spatial analysis. The NSSDA method provides a clear, standardized way to measure and report that accuracy.


10.9.25

Calculating Spatial Data Quality: Measuring GPS Accuracy and Precision

This week’s lab focused on evaluating the accuracy and precision of handheld GPS measurements using repeated waypoints. The dataset included 50 waypoints collected with a Garmin GPSMAP 76 unit at the same physical location. Since the points were scattered around the area, it wasn’t clear where the exact location was just by looking at the raw data. To get a better idea, I calculated an “average” waypoint by aggregating all the points.

After adding the waypoints to ArcGIS Pro, I created circular buffers showing where 50%, 68%, and 95% of the points fall. These buffers illustrate how the GPS points cluster around the average location.
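The averaging and buffer-radius steps can be sketched in plain Python. The waypoint coordinates below are made up, and the nearest-rank percentile here is a simplification of however ArcGIS computes it:

```python
import math
import statistics

# Made-up projected waypoint coordinates (meters); the lab used 50
# Garmin GPSMAP 76 waypoints collected at one physical location.
points = [(10.0, 20.0), (12.0, 21.0), (11.0, 19.0), (13.0, 22.0)]

# "Average" waypoint: mean of the eastings and of the northings.
avg = (statistics.mean(p[0] for p in points),
       statistics.mean(p[1] for p in points))

# Distance of each point from the average; the 50/68/95% buffer radii
# are percentiles of these distances.
dists = sorted(math.dist(p, avg) for p in points)

def percentile_radius(dists, pct):
    # Simple nearest-rank percentile of the sorted distances.
    k = max(0, math.ceil(pct / 100 * len(dists)) - 1)
    return dists[k]

for pct in (50, 68, 95):
    print(pct, round(percentile_radius(dists, pct), 2))
```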

Figure 1. Map layout with projected waypoints, average point, and precision buffers (1m, 2m, 5m shown).

Horizontal accuracy, measured as the distance between the average GPS point and the true reference, was about 3.25 meters. Meanwhile, horizontal precision, which shows how spread out the individual points are around that average, was around 4.5 meters. This tells us the GPS readings were fairly accurate overall, but the individual measurements had a bit more variation.

To get a deeper understanding of the error distribution, I worked with a larger dataset and plotted a Cumulative Distribution Function (CDF).

Figure 2. CDF plot showing cumulative error percentage for all GPS points.

The CDF curve climbs steeply up to around 70%, then rises more gradually, showing that most points are pretty close to the true location, but a few outliers have larger errors. This matches the 68th percentile at about 10.09 meters. After that, the curve flattens out, meaning fewer points fall in those higher error ranges. The curve starts at the smallest error (0.02 m) and reaches 100% at the largest error (48.37 m). The median (5.98 m) sits about halfway up the curve, which fits what you’d expect. The RMSE (3.06 m) isn’t visible on the graph since it’s more of a summary stat. Overall, the CDF gives a clearer visual picture of how the GPS errors are spread out, way better than just relying on summary numbers.
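An empirical CDF like the one plotted is just the sorted errors paired with cumulative fractions. A sketch with hypothetical error values (the lab's actual errors ranged from 0.02 m to 48.37 m):

```python
# Hypothetical GPS error distances in meters, sorted ascending.
errors = sorted([0.5, 1.2, 2.0, 3.3, 5.0, 9.8, 15.0, 48.0])

# Each sorted error is paired with the cumulative fraction of points
# at or below that error; plotting these pairs gives the CDF curve.
cdf = [(e, (i + 1) / len(errors)) for i, e in enumerate(errors)]

for err, frac in cdf:
    print(f"{err:6.2f} m  ->  {frac:.0%}")
```

Reading the curve at 0.5 gives the median error, and at 0.68 the 68th-percentile error, the two landmarks discussed above.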