Jul 14 – 19, 2024
Georgia State University College of Law
America/New_York timezone
Welcome to IMGS2024!

An image tells more than a thousand words: Mapping place perception through street view images, crowdsourced stated preferences and artificial intelligence

Jul 18, 2024, 11:40 AM
1h 20m
Knowles Conference Center/Third Level-304 - Faculty Commons (Georgia State University College of Law)

Board: 1
Poster: GeoAI and Machine Learning Poster Presentations

Speaker

Marco Helbich (Utrecht University)

Description

Urban environments perceived as safe, pleasant, and walkable stimulate sustainable and healthy human behavior. However, obtaining in-situ data on the appearance of streetscapes and on people’s perceptions of urban spaces is time-consuming, costly, and labor-intensive. To circumvent these limitations, AI-driven place assessments have gained momentum. Advances in AI and street view (SV) imagery enable the automatic extraction of reliable, scene-level information that no other data source, including aerial imagery, provides. We aim to model and map human perceptions of streetscape qualities with a newly developed deep learning model trained on open SV images labeled with crowdsourced stated preferences. We sampled from one million crowdsourced SV images of Amsterdam obtained from Mapillary and filtered out images with problems such as inferior quality or poor lighting. The remaining images were loaded into a mobile-friendly web-app survey in which participants rated each image on a 5-point scale for concepts such as ‘greenness’ or ‘pleasantness’; because the survey runs in a mobile browser, participants could open it at any time and rate images by swiping on their smartphone.
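
The image-filtering step is described only at a high level. Purely as an illustrative sketch, assuming a simple resolution and mean-brightness check (the thresholds, directory name, and function below are hypothetical, not those used in the study), such a quality filter could look like this in Python:

# Hypothetical street-view image quality filter; the actual criteria
# used in the study are not specified in the abstract.
from pathlib import Path

import numpy as np
from PIL import Image

MIN_WIDTH, MIN_HEIGHT = 640, 480          # assumed minimum resolution
MIN_BRIGHTNESS, MAX_BRIGHTNESS = 40, 220  # assumed mean-luminance bounds (0-255)

def passes_quality_check(path: Path) -> bool:
    """Return True if the image meets the assumed resolution and lighting bounds."""
    with Image.open(path) as img:
        if img.width < MIN_WIDTH or img.height < MIN_HEIGHT:
            return False
        luminance = np.asarray(img.convert("L"), dtype=np.float32)
        return MIN_BRIGHTNESS <= luminance.mean() <= MAX_BRIGHTNESS

# Keep only images that pass the check before loading them into the survey.
kept = [p for p in Path("mapillary_images").glob("*.jpg") if passes_quality_check(p)]
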
We combined the collected ratings with demographic information to build deep learning models of the participants’ responses. Each model outputs either a single rating or a probability distribution over ratings, approximating how our survey participants would have rated a given image. The models can be used to generate millions of ratings, build continuous place-based perception maps, and compare how people from diverse backgrounds experience places.
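
The description specifies the model outputs (a rating or a probability distribution over ratings) but not the architecture. As a minimal sketch, assuming a standard ResNet-18 backbone with a 5-way softmax head (an assumption, not the model reported by the authors), the prediction step could look like this:

# Hypothetical perception-rating model head (PyTorch); the backbone and
# head are assumptions, not the architecture used in the study.
import torch
import torch.nn as nn
from torchvision import models

NUM_LEVELS = 5  # 5-point rating scale

backbone = models.resnet18(weights=None)  # substitute trained weights in practice
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_LEVELS)
backbone.eval()

def predict(images: torch.Tensor):
    """Return per-image probability distributions over the 5 rating levels
    and the corresponding expected rating (1..5) as a point estimate."""
    with torch.no_grad():
        logits = backbone(images)                   # shape: (batch, 5)
    probs = torch.softmax(logits, dim=1)
    levels = torch.arange(1, NUM_LEVELS + 1, dtype=probs.dtype)
    expected_rating = (probs * levels).sum(dim=1)
    return probs, expected_rating

# Example with a dummy batch of two 224x224 RGB images.
probs, rating = predict(torch.randn(2, 3, 224, 224))

Applying such a model to all SV images of a city would yield the millions of predicted ratings from which the continuous perception maps described above can be built.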

Primary authors

Presentation materials

There are no materials yet.