“Advanced Development of GIS-Based Databases and Visualization Technologies for Marine Environment Impact Assessment”
Suhyeon Kim, KIM GUEN HA, JooYoung Park;
Poster Presentations
This study aims to effectively support marine environment impact assessments, which evaluate and manage the environmental impacts of marine use and development both before and after they occur, thereby reducing social conflicts and improving quality of life. We have established an integrated database by collecting and standardizing diagnostic, assessment, and predictive information, and developed an application for searching and visualizing these data. The system enables real-time monitoring of marine environmental changes through advanced analytics that visualize spatial patterns and time-series data, while GIS-based visualization tools help users intuitively understand dynamic marine ecosystems. The updated database offers advanced analytical functions for precise detection and prediction of environmental changes, assisting officials, reviewers, and assessment agencies in making informed decisions.
Mapping technology was developed to visualize numerical data, such as water levels and flow rates, with direction and color. Pre-field evaluation technology provides regulatory zone information, and there are plans to expand the analytical scope to marine use zones. Data analysis technology supports quality inspection by comparing new observation and assessment data with existing data to detect anomalies. These technological results are set to be integrated into the Ministry of Oceans and Fisheries' marine environment impact assessment system.
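The direction-and-color mapping described above comes down to deriving a bearing and a magnitude from velocity components and binning the magnitude into color classes. A minimal sketch in Python with NumPy, using invented station values rather than the system's actual data or API:

```python
import numpy as np

# Hypothetical current components (m/s) at four stations
u = np.array([0.3, -0.1, 0.0, 0.4])   # eastward component
v = np.array([0.4,  0.2, -0.5, 0.0])  # northward component

speed = np.hypot(u, v)                          # flow rate magnitude
# Bearing measured clockwise from north, as on nautical charts
bearing = np.degrees(np.arctan2(u, v)) % 360

# Map magnitude to a simple color class for rendering
bins = np.array([0.2, 0.4, 0.6])                # class edges (m/s)
color_class = np.digitize(speed, bins)          # 0 = calm ... 3 = strong

print(speed.round(2))    # approx. 0.5, 0.22, 0.5, 0.4
print(bearing.round(1))  # approx. 36.9, 333.4, 180.0, 90.0
print(color_class)       # 2, 1, 2, 2
```

In a web client, each arrow would then be drawn with the computed bearing and the color looked up from its class.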
“Advancing Hydrometeorological Data in Asia for Enhanced Water Resources and Climate Applications”
Natthachet Tangdamrongsub;
General Track
Hydrometeorological data are crucial for effective water resource management, weather forecasting, and climate adaptation. In Asia, a region known for its vast geographic diversity and varied climatic conditions, the lack of high-resolution data has been a significant challenge for addressing local-scale issues. Traditional datasets often have coarse spatial resolution (e.g., 10–25 km), limiting their usefulness for detailed, localized analysis. To address this gap, we have developed a pioneering dataset offering 1 km resolution hydrometeorological data for the entire Asian continent. This dataset includes essential variables such as precipitation, surface temperature, radiation, soil moisture, evapotranspiration, groundwater, and surface runoff, delivering unprecedented detail and accuracy compared to existing coarse-resolution data. The dataset was created using advanced remote sensing techniques, land surface physics, and sophisticated data assimilation methods, ensuring both enhanced spatial resolution and accurate reflection of local conditions. Our dataset spans from 1940 to the present, providing a comprehensive historical archive and seasonal forecasts extending up to six months into the future. This combination of historical and predictive data makes it an invaluable resource for a variety of applications, including water resources, climate studies, agriculture, and disaster assessment. To validate the accuracy of our high-resolution data, we conducted extensive comparisons with satellite remote sensing products such as MODIS (Moderate Resolution Imaging Spectroradiometer), GRACE (Gravity Recovery and Climate Experiment), and SMAP (Soil Moisture Active Passive). These comparisons confirm that our dataset offers superior accuracy and finer detail compared to publicly available data.
“An introduction to OGC API–Moving Features with pygeoapi and MobilityDB”
Wijae Cho, Taehoon Kim, Tsubasa Shimizu, TRAN THUAN BANG, Hirofumi Hayashi;
Workshop Proposals
Moving feature data can represent a variety of phenomena, including the movements of vehicles, people, animals, and even weather changes. A moving feature is conceptually a geographic feature with dynamic properties over time. This means that a data model can cover not only locations but also non-spatial attributes. The data model can also support dynamic relationships over time between moving features.
OGC Moving Features standards are developed to provide application services for sharing and handling moving feature data in a standardized way. In particular, OGC MF-JSON (OGC 19-045r3) supports various types of moving feature representations in JSON format. OGC API–Moving Features–Part 1: Core (OGC API–MF Core) provides a standard and interoperable way to manage moving features data, which has valuable applications in transportation management, disaster response, environmental monitoring, and beyond. OGC API–MF Core also provides operations for filtering, sorting, and aggregating moving feature data based on location, time, and other properties.
This workshop will get you started with OGC API–MF Core and open source-based implementations, which are an extension of OGC API–Features. Specifically, the following items will be addressed in this workshop:
- Lectures
  - Introduction of the OGC Moving Features SWG
  - Moving features conceptual data models
  - OGC MF-JSON and OGC API–MF
- Hands-on training
  - MF-API Server extension and documentation with pygeoapi (maybe 0.17.0)
  - MobilityDB with OGC MF-JSON
  - Visualization with STINUUM
The following open-source software will be used in this workshop:
- MF-API Server based on pygeoapi: https://github.com/aistairc/mf-api
- MobilityDB (and PyMEOS): https://github.com/MobilityDB
- STINUUM, a visualization tool for MF-JSON: https://github.com/aistairc/mf-cesium
Each program will be installed using a Dockerfile.
Lastly, you can find much helpful information about OGC API–MF here: https://github.com/opengeospatial/ogcapi-movingfeatures
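As a taste of the data format covered in the lectures, a minimal MF-JSON moving point might be assembled as below. This is a sketch loosely following the Prism encoding of OGC MF-JSON (OGC 19-045r3); the feature ID and values are invented, and the specification should be consulted for the full set of required members:

```python
import json

# A minimal moving feature: a point moving through three timestamped
# positions, with linear interpolation between them.
moving_taxi = {
    "type": "Feature",
    "temporalGeometry": {
        "type": "MovingPoint",
        "datetimes": [
            "2024-01-01T10:00:00Z",
            "2024-01-01T10:05:00Z",
            "2024-01-01T10:10:00Z",
        ],
        "coordinates": [
            [139.757, 35.681],
            [139.767, 35.689],
            [139.777, 35.713],
        ],
        "interpolation": "Linear",
    },
    "properties": {"id": "taxi-42"},  # illustrative identifier
}

print(json.dumps(moving_taxi["temporalGeometry"]["type"]))
```

A document like this could then be POSTed to an OGC API–MF Core endpoint or loaded into MobilityDB during the hands-on session.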
“An Open Source Supervised Semantic Road feature extraction”
Bharath Haridas Aithal;
Academic Track (Oral)
The concept of the smart city has gained popularity in recent decades as urban landscapes are transformed with modern methods for collecting, distributing, and updating data. It utilises state-of-the-art technology to enhance the efficiency and interaction of urban infrastructure components, promoting a linked and adaptable urban environment. In doing so, the smart city strives to improve infrastructure support and modernise traditional infrastructure upkeep. Road infrastructure plays a crucial role in providing intelligent transportation, ensuring safety, and promoting sustainability. An optimised road infrastructure includes real-time information on traffic conditions and lane-changing alternatives, improving passenger safety and the overall driving environment. Yet the crucial aspect of constructing an intelligent road infrastructure lies in the astute administration of available resources, which in turn requires a comprehensive method for handling data.
The fields of remote sensing, geospatial technology, and computer vision are being transformed by recent advancements in artificial intelligence, the availability of high-resolution data, and progress in high-performance computing systems. The integration of geospatial technology with deep learning methods is reaching significant milestones, especially in object recognition and segmentation. Specifically, these advances provide robust capabilities for managing, manipulating, and exchanging data, and have transformed how geographic datasets are employed alongside other data sources.
However, the literature reviewed reveals a lack of studies that fully utilise the combination of geospatial technology and transportation systems. This project utilises open-source geospatial platforms to combine geographical datasets with deep learning segmentation outputs. The primary objective of this work is to thoroughly examine the road as a semantic feature, specifically by incorporating geospatial information at the pixel level. The result is a road vector, which may be used as input for applications such as map navigation, vehicle routing, and autonomous vehicle systems.
To achieve this, the study employed the UnetEdge architecture, built on free and open-source software, to accurately identify and separate road networks by analysing high-resolution remote sensing images. The model utilises road edge information as spatial input data to tackle the difficult problem of occlusion in road extraction tasks. The proposed model is fine-tuned using Python and open-source libraries, including computer vision libraries, TensorFlow, Keras, and Segmentation Models. The extracted road networks are then transformed into raster images and integrated with geographical information using the GDAL and Rasterio packages. The process is fully automated, providing road data as geotagged rasters and vectors. In addition, road characteristics are employed to estimate road width using customised digital image processing algorithms. The resulting road vector carries unique road identifiers and corresponding width measurements, which can be utilised in important geospatial applications. The uniqueness of the system resides in two aspects: (a) the proposed framework is fully automated, with no manual intervention, and (b) the results are road features represented as semantics.
The suggested framework has been applied to both satellite and aerial images. On the widely used, openly available Massachusetts benchmark and an Indian drone dataset, it achieved overall accuracies of 96.71% and 97.25%, respectively. The mean intersection over union (mIoU) is widely regarded as the most suitable metric for evaluating segmentation output. The framework achieves mIoU values of 85.14% and 78.26% for the Indian drone and Massachusetts datasets, respectively. Additional statistical analysis reveals a favourable association between the road width estimates and the observed road width values. The root mean square error (RMSE) of less than 1 metre is considered insignificant based on previous research.
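The mIoU metric reported above is computed per class and then averaged. A minimal NumPy sketch on toy road/background masks (not the study's evaluation code):

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes=2):
    """Mean intersection-over-union across classes, as commonly used
    to evaluate segmentation output (here: road vs. background)."""
    ious = []
    for c in range(n_classes):
        t, p = (y_true == c), (y_pred == c)
        inter = np.logical_and(t, p).sum()
        union = np.logical_or(t, p).sum()
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy masks: 1 = road, 0 = background
gt   = np.array([[1, 1, 0], [0, 1, 0]])
pred = np.array([[1, 0, 0], [0, 1, 0]])
print(round(mean_iou(gt, pred), 3))  # -> 0.708
```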
“Analysis of Urban Heat Island & Urban Sprawling of Dhaka Using Remote Sensing”
Nusrat Jahan Nilima;
Poster Presentations
Amid the rapid urbanization of Dhaka, Bangladesh, the intensifying Urban Heat Island (UHI) effect is of great concern. This study links the urban expansion of Dhaka with Land Surface Temperature (LST) from 2000 to 2022 using open-source remote sensing data and geospatial analysis tools. Understanding these dynamics is essential for developing sustainable urban planning strategies. Existing studies on UHI in Dhaka and other fast-growing cities have highlighted the important role of remote sensing in monitoring urban growth and its environmental consequences; however, detailed, long-term, data-driven research using advanced geospatial techniques is still required.
This research examines how LST changed along with urban growth in Dhaka between 2000 and 2022. Satellite images were obtained from online platforms including Google Earth Engine, NASA Earthdata, and GAIA, and the spatial and temporal dynamics of land surface temperature were analyzed using open-source remote sensing data. Changes in LST and the urban footprint over the 22-year period were analyzed in QGIS, an open-source GIS software, using geospatial techniques. The key metrics calculated were urban area growth rates and temperature variations, which were then subjected to statistical analysis. The analysis showed that Dhaka's urban area expanded significantly from 2000 to 2022, by 481.956 km², mostly in the northern parts of the city. Areas such as Savar, Keraniganj, Demra, Uttara, and Badda saw very high development density, while areas like Dohar and Nawabganj experienced minimal change. Both summer and winter temperatures showed upward trends, with the greatest increases observed in the eastern, north-eastern, and south-eastern regions. The correlation between urban footprint area and LST was weakly positive (R = 0.1351), indicating that other factors also strongly influence LST. These results emphasize that sustainable urban planning, community-based initiatives, and better governance practices need to be adopted to lessen the UHI effect within Dhaka and potentially other regions. Practical recommendations range from prioritizing green infrastructure to improving urban governance and encouraging inclusive practices among communities. Such initiatives can enable Dhaka to continue its growth trajectory with a healthier and more livable urban landscape.
This study contributes to a better understanding of UHI impacts in fast-growing cities and demonstrates the usefulness of geospatial analysis and openly available satellite imagery for climate change studies. Future studies should utilize multiple datasets and more complex models across various city contexts in order to manage urban heat island effects comprehensively.
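The footprint-LST relationship reported above (R = 0.1351) is a standard Pearson correlation, and the temperature trends are linear fits over time. A sketch with illustrative numbers, not the study's data:

```python
import numpy as np

# Hypothetical yearly summaries (illustrative values only)
years      = np.array([2000, 2005, 2010, 2015, 2020, 2022])
built_km2  = np.array([310, 420, 510, 620, 730, 790])          # urban footprint
lst_mean_c = np.array([28.1, 28.9, 28.4, 29.6, 29.2, 30.1])    # mean LST (deg C)

# Pearson correlation between footprint and LST
r = np.corrcoef(built_km2, lst_mean_c)[0, 1]

# Linear trend of LST over time (deg C per year)
slope, intercept = np.polyfit(years, lst_mean_c, 1)
print(f"R = {r:.3f}, LST trend = {slope:.3f} deg C/yr")
```

With the study's actual yearly footprint and LST rasters, the same two calls would reproduce the reported correlation and trend statistics.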
“Application of Drone Images for Drainage Plan Generation at Village Level”
Harish Kumar Solanki;
General Track
In plain areas, the prime need of a village is a proper drainage plan, along with a survey of the village to understand the water flow situation on roads and general slopes. Base maps are necessary to depict existing and demanded assets with fair accuracy. For this, different sources, such as Cartosat-2 data from NRSC and Google Earth images, were analysed in the paper. The resolution of the Cartosat-2 data and Google Earth images was not found suitable for depicting internal roads, houses, and other community infrastructure assets of the selected village. Further, there are certain limitations to using Google Earth's products and images. A drone survey of the settlement areas of Hantra village in Rajasthan State, India, was therefore carried out with the help of the North Eastern Space Applications Centre, Umiam, Meghalaya. The prime objectives of the project were 1) to obtain high-resolution base maps for depicting existing assets and digitising other features for uses related to rural development, and 2) to obtain a 3D Digital Terrain Model (DTM) and Digital Surface Model (DSM) for use in drainage plan generation for the village. Using the DTM and DSM, slope directions and accumulation points were identified, and drainage plan suggestions were provided for the village. The whole analysis was done using open-source QGIS software, and village infrastructure assets were collected using the OsmAnd mobile mapping application. Every Panchayat in the country spends significant public money annually on its development, yet the available satellite images are of little use at the village level for proper depiction, planning, and monitoring of assets. A drone survey is essential to the ‘Smart Village’ concept in this situation.
If the survey is repeated after 5-7 years, it will be a valuable resource for monitoring the temporal development of infrastructure and natural resources. The prevailing and perpetual problems of villages can be handled well with the availability of original drone surveys in open-source GIS environments and free mobile mapping tools.
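The slope-direction step of the drainage analysis can be sketched with the classic D8 rule: each cell drains to its steepest-descent neighbour. A pure-NumPy illustration (QGIS provides this natively; the toy DTM below is invented):

```python
import numpy as np

def d8_flow_direction(dtm):
    """For each interior cell, return the (drow, dcol) offset of the
    steepest-descent neighbour (D8), or (0, 0) for pits and flats."""
    rows, cols = dtm.shape
    out = np.zeros((rows, cols, 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best, best_drop = (0, 0), 0.0
            for dr, dc in offsets:
                dist = np.hypot(dr, dc)              # diagonal neighbours are farther
                drop = (dtm[r, c] - dtm[r + dr, c + dc]) / dist
                if drop > best_drop:
                    best, best_drop = (dr, dc), drop
            out[r, c] = best
    return out

# Tiny DTM (elevations in metres): high in the north-west, draining south-east
dtm = np.array([[9., 8., 7.],
                [8., 6., 5.],
                [7., 5., 3.]])
print(d8_flow_direction(dtm)[1, 1])  # centre cell drains towards the lowest corner
```

Accumulation points then fall out of following these directions downslope and counting contributing cells.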
“Assessing Urban Sustainability with FOSS4G: Insights from Bangkok”
Chitrini Mozumder;
General Track
SDG 11 is the first standalone goal focusing exclusively on urban development: “Make cities and human settlements inclusive, safe, resilient and sustainable”. It consists of 10 targets and 15 indicators. Priorities for this Sustainable Development Goal (SDG) include inclusive and sustainable planning (SDG 11.3), as well as the ability to plan and manage human settlements in a participatory, integrated, and sustainable manner. Urbanization plays a crucial role in sustainable development and has a direct impact on the achievement of SDG 11.3. The indicator SDG 11.3.1, categorized under Tier II (meaning the indicator is conceptually clear and an established methodology exists), is defined as the ratio of the land consumption rate to the population growth rate. By focusing on this indicator, cities can develop strategies that foster efficient land use, protect the environment, and improve the quality of life for all residents.
To assess SDG 11.3.1, rates of population growth and land consumption over time are calculated, typically on a local or regional scale. The indicator is computed using satellite data and geospatial technology, along with population estimates or census records. This work explores the use of FOSS4G technologies for urban sustainability, focusing on the Modules for Land Use Change Evaluation (MOLUSCE) tool, a QGIS plugin, to represent changes in land use and land cover (LULC). MOLUSCE integrates several machine learning methods, including Cellular Automata (CA) and Artificial Neural Networks (ANN), enabling advanced spatio-temporal study of urban growth patterns. For this effort, MOLUSCE 4.0, a recent update released in August 2024, is tested.
To illustrate how SDG 11.3.1 aids in evaluating urban expansion, a case study of the Bangkok Metropolitan Region (BMR) is provided. The land use efficiency in BMR between 2003 and 2023 was found to be 1.3, indicating that land consumption exceeded population growth. Furthermore, the overall built-up area expanded by 12.2%, as seen in the rise in built-up area per person from 250 m² to 258 m². This suggests that urban expansion is occurring outside core cities, driven by a number of factors, including zoning regulations, the scarcity of land in core cities, and improved transit networks that link suburban and rural locations. Additional difficulties are expected for BMR's land use efficiency, with rates projected to reach 2.39 by 2043 and 3.77 by 2053.
This indicates that land consumption could accelerate even faster than population growth, emphasizing the importance of sustainable urban planning. In this effort, open geospatial tools like QGIS and MOLUSCE have become invaluable. These tools provide flexible and scalable solutions for monitoring urban expansion, helping cities like Bangkok plan for sustainable growth while addressing the complexities of rapid urbanization.
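The land use efficiency figures above follow UN-Habitat's log formulation of SDG 11.3.1: the land consumption rate (LCR) divided by the population growth rate (PGR). A sketch with illustrative figures, not the actual BMR inputs:

```python
import math

def lcrpgr(urb_t1, urb_t2, pop_t1, pop_t2, years):
    """SDG 11.3.1: ratio of the land consumption rate to the
    population growth rate (UN-Habitat log formulation)."""
    lcr = math.log(urb_t2 / urb_t1) / years   # land consumption rate
    pgr = math.log(pop_t2 / pop_t1) / years   # population growth rate
    return lcr / pgr

# Illustrative numbers only (not the BMR figures from this study)
ratio = lcrpgr(urb_t1=1500, urb_t2=1683,   # built-up area, km2
               pop_t1=10.0, pop_t2=11.0,   # population, millions
               years=20)
print(round(ratio, 2))  # -> 1.21
```

A ratio above 1 means land is being consumed faster than the population grows, which is the situation the BMR results describe.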
“Assessment of suitability of water resources use in freshwater, saltwater and brackish water areas: Application of geographic information system for water management towards ecological sustainability”
Pantip Kayee;
Academic Track (Oral)
This research seeks to monitor the water quality of the Mae Klong River tributaries in Samut Songkhram Province, Thailand, through the application of geographic information technology and spatial database analysis. The study aims to assess the spatial suitability of water resource utilization and to systematically track water quality across the region. Water samples were collected from three key areas: Mueang Samut Songkhram District, Bang Khonthi District, and Amphawa District, with a focus on examining patterns of water use and the associated community lifestyles. Sampling points for water quality assessment were determined across six water sources, covering two distinct seasons: summer and the rainy season. The analysis focused on key parameters, including Dissolved Oxygen (DO), Biochemical Oxygen Demand (BOD), nitrate, phosphate, Total Coliform Bacteria (TCB), Fecal Coliform Bacteria (FCB), ammonia, and salinity, using the Water Quality Index (WQI) established by the Pollution Control Department. The water quality data were subsequently integrated into a spatial database as GIS data, which was then used to classify the water sources according to the WQI. The findings revealed that most water sources across the three study areas exhibited moderate to poor water quality. According to the WQI, the majority of water sources fell into categories 3 and 4, indicating suitability for agricultural and industrial use. In Mueang Samut Songkhram District, water sources are currently utilized for salt farming, fisheries, and mangrove forest activities. The analysis also indicated that the water sources in this district exhibit high salinity levels, likely due to the influence of the surrounding salt farming activities. Consequently, these high-salinity water sources are deemed suitable for salt farming and fisheries.
In contrast, water sources in Bang Khonthi District are primarily used for tourism-related activities, such as floating markets. Meanwhile, the water sources in Amphawa District serve ecotourism purposes. However, water samples from two sources in Amphawa District exhibited TCB and FCB values exceeding surface water standards, indicating contamination from human and animal excrement. This contamination suggests that the water sources in Amphawa District suffer from poor sanitation conditions.
“Basic Python for Geospatial”
Feye Andal, Fritz Dariel Andal;
Workshop Proposals
This workshop offers a comprehensive introduction to utilizing Python programming for geospatial analysis and visualization. Geospatial data is essential in various domains such as environmental sciences, urban planning, agriculture, and disaster management. This workshop aims to equip participants with foundational skills to harness the power of Python libraries and tools for handling, analyzing, and visualizing geospatial data.
By the end of the workshop, participants will have a solid grasp of the core principles of geospatial data handling using Python. They will be empowered to create their own geospatial projects, capable of ingesting, analyzing, and visualizing spatial data to derive meaningful insights.
“Battle of The Best Street-Level Imagery Collection Tool: A Workshop on KartaView and Mapillary using 360 cameras”
Janica Kylle De Guzman;
Workshop Proposals
As the world increasingly goes digital, real-world information becomes accessible online, enabling virtual visits to locations through street-level imagery. This imagery is invaluable for capturing daily life and sharing local perspectives, making it useful for finding attractions or services remotely. Liminal spaces, which might seem trivial, can offer crucial insights for those in need of specific information. Our interactive workshop will introduce participants to KartaView and Mapillary, covering how to access and contribute to these platforms while having fun through demonstrations using GoPro 360 cameras. Open to all, this four-hour session includes a playful scavenger hunt, turning learning into an adventure as participants hone their skills in capturing and sharing street-level imagery.
“Build an Object Snap to a Geometric Location on Web Application”
Siriwat Suttipanyo, Siriya Saenkhom-or;
Workshop Proposals
Object snapping is a fundamental feature in Geographic Information Systems (GIS) that enhances the accuracy and efficiency of spatial data editing and analysis. This technique allows users to seamlessly align and connect geographic features, ensuring spatial relationships are maintained and data integrity is preserved. By snapping objects to predefined points, lines, or polygons, GIS professionals can create more precise maps and models, which is crucial for applications in urban planning, environmental management, and infrastructure development.
The process of object snapping involves algorithms that detect proximity between features and automatically adjust their positions based on user-defined criteria. This capability not only streamlines the editing process but also reduces the likelihood of errors arising from manual adjustments.
As web mapping technologies evolve, the need for intuitive and efficient tools becomes increasingly important. Implementing object snapping in web map applications not only streamlines the editing process but also ensures that spatial relationships are maintained, thereby enhancing the overall quality of geospatial data. This session will explore various methodologies for developing robust snapping algorithms with HTML and JavaScript, highlighting how these solutions can improve the user experience while remaining practical to implement.
For those looking to create a web map application capable of managing data for real-world tasks, such as adjusting the position of a streetlight to a specified area or managing objects to snap to geographic locations, this workshop will address those needs using practical HTML and JavaScript solutions.
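The proximity-detection-and-adjust logic described above can be sketched independently of any mapping library. The workshop itself uses HTML and JavaScript; the illustration below uses Python for brevity, with invented coordinates:

```python
import math

def snap_to_vertices(point, features, tolerance):
    """Return the nearest feature vertex within `tolerance` of `point`,
    or the original point unchanged if nothing is close enough."""
    px, py = point
    best, best_d = point, tolerance
    for feature in features:          # each feature: a list of (x, y) vertices
        for vx, vy in feature:
            d = math.hypot(vx - px, vy - py)
            if d <= best_d:           # keep the closest candidate so far
                best, best_d = (vx, vy), d
    return best

# Hypothetical streetlight positions (map units)
streetlights = [[(10.0, 5.0), (12.0, 5.0)], [(20.0, 8.0)]]
print(snap_to_vertices((11.9, 5.2), streetlights, tolerance=0.5))  # -> (12.0, 5.0)
```

In a real web map the same check runs on each mouse move, typically against a spatial index rather than a linear scan, and with the tolerance expressed in screen pixels.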
“Building an Urban Digital Twin using Open Data, Open Source, and Open Standards, a mago3D way!”
Yeonhwa Jeong, Sanghee Shin, SUNGJIN KANG;
Workshop Proposals
In this workshop titled "Building an Urban Digital Twin using Open Data, Open Source, Open Standards, a mago3D way!", participants will embark on a hands-on journey to create a digital twin of a selected urban area in Thailand. Leveraging open data from Overture Maps and NASA's 30m resolution Digital Elevation Model (DEM), participants will learn how to integrate and process these datasets using open-source tools like mago3DTiler and visualize the final output in a Cesium-based 3D environment.
The workshop will focus on using open standards, specifically the OGC’s 3D Tiles format, to ensure compatibility and interoperability across platforms. Participants will begin by downloading and processing building data from Overture Maps and terrain data from NASA. These datasets will then be converted into 3D Tiles using mago3DTiler, enabling detailed and accurate 3D representations of the urban environment. The final visualization step will be performed using Cesium, where participants can explore the digital twin in an interactive 3D space.
This workshop is designed for GIS professionals, urban planners, and developers interested in the creation of urban digital twins using open technologies. By the end of the session, participants will have a comprehensive understanding of how to create, process, and visualize 3D urban data using open resources and standards.
“Building an Analysis-Ready Cloud Optimized Global Lidar Data (GEDI and ICESat-2) for Earth System Science applications”
Yu-Feng Ho;
General Track
The Global Ecosystem Dynamics Investigation (GEDI) and Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) are NASA earth observation missions that construct a three-dimensional model of the Earth's surface in space and time using Light Detection and Ranging (LiDAR). GEDI and ICESat-2 data are organized by orbit ID, sub-orbit granule, and track, and distributed in HDF5 format, which is optimized for big data storage. However, this approach is inconvenient for extracting spatio-temporal areas of interest, because each file stores a track crossing a huge range of latitude and longitude while lacking a spatial index.
To facilitate random access to small areas of interest, we propose a data reconstruction process based on Apache Parquet. Parquet is an open-source, column-oriented data format designed for efficient data storage and retrieval. We sequentially stream raw data into spatio-temporal partitioning blocks (5 degrees x 5 degrees x year). This layout optimizes the number of partitions (n = 3337) and individual file size (~300 MB). The independence of raw data files and a predefined partitioning scheme enable parallel processing and periodic updates as new data become available.
During the reconstruction, we selected essential attributes and applied quality filtering based on the scientific literature. We excluded GEDI shots with a Quality Flag equal to 0, a Degradation Flag larger than 0, or a Sensitivity smaller than 0.95. For ICESat-2 ATL08, we first excluded segments where the terrain and/or canopy height is NaN. We then reconstructed individual photons from ATL03 by ph_segment_id, and excluded photons classified as noise, as well as segments containing more than 28 photons, following previous research [1].
The data are finally converted to GeoParquet and published on a cloud server under a CC-BY 4.0 license; GEDI Level 2 totals 1.4 TB and ICESat-2 ATL08 totals 3.8 TB. GeoParquet supports two levels of predicate push-down: first at the partition level, and second at the file level. The partitioning of the global LiDAR datasets enables coarse spatial (5 x 5 degree) and temporal (year) filtering. The footer of each GeoParquet file enables spatial filtering via bounding boxes or geometry features, and temporal filtering using the datetime columns. Further attribute filtering is also possible.
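The 5-degree x 5-degree x year layout amounts to assigning each shot a partition key before it is streamed into its block. A sketch of such a key function (the key naming here is illustrative, not the published layout):

```python
import math

def partition_key(lat, lon, year, block=5):
    """Map a LiDAR shot to its spatio-temporal partition
    (5 deg x 5 deg x year blocks, as described above).
    The key format is illustrative, not the published scheme."""
    lat0 = math.floor(lat / block) * block   # south-west corner of the block
    lon0 = math.floor(lon / block) * block
    return f"lat{lat0:+03d}_lon{lon0:+04d}/year={year}"

print(partition_key(47.3, 8.5, 2021))   # -> lat+45_lon+005/year=2021
print(partition_key(-0.1, -0.1, 2020))  # -> lat-05_lon-005/year=2020
```

Because the key depends only on a shot's own coordinates and timestamp, raw granules can be processed in parallel and appended to the correct partition as new data arrive.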
The concept of Analysis-Ready Cloud Optimized (ARCO) data has been defined and implemented for raster data, using technologies such as Zarr or Cloud Optimized GeoTiff (COG) [2]. However, corresponding implementations for vector data are scarce. This work delivers two instances of global ARCO vector datasets. It not only adheres to the concept of 4C (complete, consistent, current, and correct), but also tackles the challenge of organizing terabyte-scale geospatial vector data.
References
[1] Milenković, M., Reiche, J., Armston, J., Neuenschwander, A., De Keersmaecker, W., Herold, M., & Verbesselt, J. (2022). Assessing Amazon rainforest regrowth with GEDI and ICESat-2 data. Science of Remote Sensing, 5, 100051.
[2] Stern, C., Abernathey, R., Hamman, J., Wegener, R., Lepore, C., Harkins, S., & Merose, A. (2022). Pangeo Forge: crowdsourcing analysis-ready, cloud optimized data production. Frontiers in Climate, 3, 782909.
“Building an Intelligent Geocoder with OpenStreetMap Data and Machine Learning”
Aadesh Baral, Kshitij Raj Sharma;
Workshop Proposals
In this workshop, participants will learn how to build an intelligent geocoder using Natural Language Processing (NLP) techniques. The geocoder will be capable of accurately interpreting user input and returning precise geographic information.
This workshop aims to provide hands-on experience in:
- Extracting and preparing geospatial data from OpenStreetMap.
- Setting up and configuring a search engine for indexing and searching geospatial data.
- Training and applying a basic Named Entity Recognition (NER) model to understand user queries.
- Integrating these components to create a fully functional intelligent geocoder.
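As a rough stand-in for the trained NER model, query understanding can be prototyped with a rule that splits the "what" from the "where". A toy sketch (the workshop trains a real model instead, and these heuristics are only for illustration):

```python
import re

# Split a free-text query on common locative prepositions:
# everything before is the "what", everything after is the "where".
LOCATIVE = re.compile(r"\s+(?:in|near|at|around)\s+", re.IGNORECASE)

def parse_query(query):
    parts = LOCATIVE.split(query, maxsplit=1)
    if len(parts) == 2:
        return {"what": parts[0].strip(), "where": parts[1].strip()}
    # No preposition found: treat the whole query as a place name
    return {"what": "", "where": query.strip()}

print(parse_query("coffee shops near Thamel, Kathmandu"))
# -> {'what': 'coffee shops', 'where': 'Thamel, Kathmandu'}
```

The "where" part would then be matched against the OpenStreetMap index, while the "what" part filters feature categories; an NER model replaces the brittle regex with learned entity spans.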
Requirements:
- Laptop with Python installed.
Target Audience:
This workshop is designed for GIS professionals, developers, data scientists, and anyone interested in geospatial technologies and natural language processing. Prior experience with Python and Machine Learning will be helpful, but not mandatory.
“Camera-LiDAR Fusion for multimodal 3D Object detection in Autonomous Vehicles”
Badri Raj Lamichhane;
General Track
The rapid development of autonomous vehicles (AVs) demands strong perception systems capable of reliably recognizing and classifying objects in complicated urban environments. Combining camera and LiDAR sensors has emerged as a promising way to improve the reliability and precision of 3D object detection. This research describes a multimodal fusion framework that uses camera images and LiDAR point clouds to accomplish high-performance 3D object recognition in urban settings. Camera sensors provide the color and texture information necessary for identifying traffic signs, pedestrians, and cars, whereas LiDAR provides the precise depth measurements required for interpreting object geometry and spatial relationships. The fusion technique exploits the complementary strengths of these sensors to improve detection accuracy, especially in difficult settings such as occlusions and fluctuating illumination, using the open KITTI dataset. Here, the open libraries OpenPCDet and MMDetection are used for 3D and 2D object detection, respectively. Facebook AI Research's Detectron2, a flexible framework for 2D and 3D object detection tasks, is also popular.
Fusion is accomplished via a carefully designed architecture that aligns and combines data from both modalities at various stages of the detection pipeline, such as feature extraction, region proposal, and classification. Advanced deep learning techniques, such as convolutional neural networks, are used to process and integrate the multimodal input. Experimental results show that the fused 3D object detection outperforms single-modality techniques in terms of robustness and precision, particularly when recognizing small and partially occluded objects.
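The geometric core of camera-LiDAR alignment is projecting 3D points into the image plane with a calibrated 3x4 camera projection matrix (KITTI ships such matrices in its calibration files). The sketch below uses a toy matrix, not real calibration data.

```python
import numpy as np

# Project 3D LiDAR points (already in camera coordinates) into 2D pixels
# using a 3x4 projection matrix P, as in KITTI-style calibration.
def project_to_image(points_xyz, P):
    """points_xyz: (N, 3) points in camera coordinates -> (N, 2) pixel coords."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])  # (N, 4) homogeneous coords
    uvw = homo @ P.T                                 # (N, 3) projected
    return uvw[:, :2] / uvw[:, 2:3]                  # perspective divide by depth

# Toy projection matrix: focal length 1, no principal-point offset.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
pts = np.array([[2.0, 4.0, 2.0]])   # one point 2 m in front of the camera
print(project_to_image(pts, P))     # -> [[1. 2.]]
```

Once LiDAR points carry pixel coordinates, per-point image features (color, texture) can be concatenated with geometric features for the fused detector.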
“Campus Layers - Using GIS and AR to enhance campus experience”
Santosh Gaikwad, Jesal Zala;
General Track
Many institutional campuses in India face a range of challenges that can affect the overall experience for students, faculty, staff, and visitors, such as navigational difficulties, inefficient resource management, lack of integrated technology, safety and security concerns, accessibility issues, communication barriers, and sustainability and maintenance problems. To improve operations, increase efficiency, and enhance the overall quality of the campus, educational institutions are turning to smart campus solutions. The GIS-Enabled Smart Campus solution combines technologies like 3D GIS and Augmented Reality (AR) with IoT (Internet of Things) integration.
Considering the rapidly growing need for a smart campus, Nascent Info Technologies has developed a mobile application called “Campus Layers” that offers a comprehensive solution using open-source software technologies. At the core of this application is GIS technology, which aids in the visualization of the infrastructure facilities, GIS-based analysis and monitoring tools. For staff, this streamlines asset management, space planning, and campus operations, offering comprehensive mapping and data access for critical infrastructure like CCTV, fire extinguishers, and evacuation routes, ensuring campus safety. For visitors, the app facilitates easy navigation using GIS and AR features, helping them locate resources and facilities efficiently. Students benefit by staying informed about campus events, receiving notifications, and accessing grievance reporting features directly from their phones.
"Campus Layers" revolutionizes the campus experience with a user-centric, data-driven approach that prioritizes users' needs and preferences, fostering a safer, more efficient, and connected campus environment, enhancing both the educational experience and operational efficiency.
Keywords: Campus Management, GIS and AR, Mobile Application, IoT integration, User-centric approach, Open Source
“Catch them young! Geospatial Capacity building for School Children and Young adults”
Natraj Vaddadi;
General Track
Geospatial technologies are widely used to address societal needs like land use, demographics, and natural resource management, and spatial data analysis plays a crucial role in how we understand and interact with our environment. These technologies involve using maps, GPS, and satellite images to collect, analyse, and display data about the world. Teaching these skills to school children has become increasingly important.
Geospatial techniques help students develop a better understanding of the world around them. Maps, for instance, are not just tools for finding directions; they tell stories about our environment, culture, and history. By learning to read and create maps, children begin to see the connections between different places and the events that shape them. Understanding these concepts early helps them develop a broader view of the world and how places are interconnected. This kind of knowledge fosters a global perspective, encouraging students to think beyond their immediate surroundings and consider how their actions can affect the world.
As part of its mission to build awareness of the importance of Earth Science in daily life, the team at the Centre for Education and Research in Geosciences (CERG), India, conducts various activities aimed at laypeople and schoolchildren. These events are conducted throughout the year. One such program is a workshop titled “Maps & Me”, which focuses on giving school and college children a basic understanding of the geospatial world and open-source mapping tools. In the ‘Maps & Me’ workshop we explore the basics of maps, satellite images and digital maps, and how to navigate using these tools.
The workshop is hands-on and interactive, covering key map elements like latitude, longitude, and scale, along with a session on using QGIS, a popular open-source mapping software. Participants are introduced to the fundamentals of remote sensing, satellite imagery, photo recognition, and digital mapping. After that, they get to create their own maps using QGIS.
At CERG we believe that such skills are important because they help students make sense of real-world issues, like climate change, urban planning, and natural resource management. By learning how to read and interpret maps, for example, young students can see how their local environment fits into the larger world. It also encourages them to think critically and solve problems creatively, skills that are valuable in all areas of life.
“Celebrating four decades of innovation: The GRASS GIS Project”
Markus Neteler;
Keynote Talk
The GRASS GIS project, a pioneering open source geographic information
system, celebrated its 40th anniversary in 2023. As one of the
long-standing contributors, I am honoured to reflect on the remarkable
journey of this leading open source geospatial software and community.
Over the past four decades, GRASS GIS has grown from a modest project
initiated by the U.S. Army Corps of Engineers to a robust, globally
recognised platform for geospatial analysis and modelling. This
evolution is a testament to the dedication and collaborative spirit of
the GRASS community, which has continually driven innovation and
excellence.
My personal relationship with GRASS GIS began over thirty years ago
when I was a student and first encountered its powerful capabilities.
Even then, I was fascinated by its potential to revolutionise spatial
analysis and environmental modelling. With the advent of the Internet,
we were able to build a passionate community behind the project.
Through collaborative efforts, we have significantly expanded the
functionality of GRASS GIS, improved its user interface through
multiple iterations, and ensured its adaptability to the ever-changing
technological landscape. In this keynote, I will reflect on the
milestones that have shaped GRASS GIS from its inception at the U.S.
Army Corps of Engineers' Construction Engineering Research Laboratory
(USA/CERL) to its current status as a cornerstone of the open source
geospatial ecosystem. The latest releases of GRASS GIS include
thousands of changes, including the new single-window GUI layout and
enhanced parallelization capabilities. These enhancements underscore
our commitment to improving the user experience and computational
efficiency. The past decade has also been marked by vibrant community
engagement through the OSGeo Foundation. I will highlight key
contributions from the global community, showcase ground-breaking
research and applications, touch on FOSS business models and explore
the challenges we have overcome along the way.
The future of GRASS GIS is bright as we anticipate further innovation
and expanded applications, driven by the same collaborative ethos that
has defined our past. Together we will continue to push the boundaries
of what is possible in geospatial analysis, ensuring that GRASS GIS
remains at the forefront of this dynamic field.
“Comparative Evaluation of Machine Learning Models for Zoning Slope Failure Susceptibility: A Case Study of Yen Bai Province, Vietnam”
Tran Tung Lam, Tatsuya Nemoto, Venkatesh Raghavan, Xuan Quang Truong;
General Track
Yen Bai Province in northern Vietnam, especially its Mu Cang Chai (MCC) and Van Yen (VY) districts, is highly susceptible to slope failure due to rugged terrain, high rainfall and anthropogenic activities. In this research, MCC was used as the area for training and testing the machine learning models, while VY serves for model validation due to its similar topographic and geological conditions.
The methodology treats slope failure prediction as a binary classification task (landslide/no-landslide). A balanced dataset of 286 landslide and 286 non-landslide points in MCC was compiled, along with 16 contributing factors covering topographic, geologic, hydrologic, anthropogenic and vegetation conditions, calculated from open data sources, existing databases, and previous research on the area. Principal Component Analysis (PCA) and Pearson correlation coefficients refine the dataset by evaluating correlated factors and removing the least important ones; this reduces the size of the training dataset while preserving the performance of the ML models. Four ML models, Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), and Extreme Gradient Boosting (XGBoost), are trained and evaluated to select the best hyperparameter tuning for each model. Model accuracy is assessed via confusion matrices, accuracy scores, ROC (receiver operating characteristic) curves and AUC (area under the ROC curve).
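The correlation-based factor screening described above can be sketched as follows: for any pair of contributing factors whose absolute Pearson correlation exceeds a threshold, one factor of the pair is dropped. The factor names and the 0.9 threshold below are illustrative, not the study's values.

```python
import numpy as np

# Drop one factor of any pair whose absolute Pearson correlation exceeds
# a threshold, keeping the first-listed factor of each redundant pair.
def drop_correlated(X, names, threshold=0.9):
    corr = np.corrcoef(X, rowvar=False)   # factor-by-factor correlation matrix
    keep = list(range(X.shape[1]))
    for i in range(corr.shape[0]):
        for j in range(i + 1, corr.shape[1]):
            if i in keep and j in keep and abs(corr[i, j]) > threshold:
                keep.remove(j)
    return [names[k] for k in keep]

# Synthetic example: 'slope_copy' is a near-duplicate of 'slope'.
rng = np.random.default_rng(0)
slope = rng.normal(size=100)
X = np.column_stack([
    slope,
    slope * 2 + 0.001 * rng.normal(size=100),
    rng.normal(size=100),
])
print(drop_correlated(X, ["slope", "slope_copy", "rainfall"]))  # -> ['slope', 'rainfall']
```

In the study this screening is combined with PCA before the four classifiers are trained.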
Results show the models perform effectively in MCC, with an average accuracy across all models of 0.74. The trained ML models, with hyperparameters tuned on MCC data, were then validated on the VY dataset, which also consists of the 16 factors and 308 landslide/non-landslide points. RF and XGBoost have the highest accuracy for both the training and testing area (MCC) and the validation area (VY), with XGBoost showing a slightly higher accuracy score of 0.83 while RF scores 0.80.
The XGBoost model produces good results and could be further optimized to achieve even better zonation in future studies. The machine learning workflow can be applied on other areas that are prone to slope failures. Other geologic and weathering factors could be included in the analysis to further improve the model.
“Current Status and Challenges of the Internationalization and Localization of Technical Documentation for Geospatial Open Source Software”
Yoichi Kayama;
General Track
In recent years, the number of open-source software tools that handle spatial information has rapidly increased. The technical documentation for these software tools is often written in English. However, in non-English-speaking regions, the localization of technical information is a crucial task to expand local system usage and disseminate technical knowledge. By using the gettext library, it is possible to implement the internationalization of documents and programs. Sphinx, a documentation creation system, offers internationalization features using gettext, enabling the creation and management of documentation in multiple languages.
Japanese localization efforts have been made for software such as QGIS, PostGIS, and OSGeoLive, but continuous translation work has not been maintained for other software. Additionally, when performing technical translation, it is necessary to prepare translation rules and glossaries for the local language, which presents challenges.
In this presentation, I will report on the current status of the internationalization of technical documentation for spatial open-source software, Japanese localization efforts, and the challenges and issues faced.
“Design and Deploy Microservice for GIS Application apply OGC Standard”
Worrathep Somboonrungrod;
General Track
In the past, the installation of GIS applications often encountered challenges regarding the flexibility of services, which could not be scaled to accommodate a growing number of users. The interconnection and exchange of data across services were constrained, and service separation was not feasible. These issues had a significant impact on overall usability.
The design and deployment of applications in the form of microservices are gaining popularity and widespread adoption. This approach aims to provide flexibility to the installation process, allowing services to be added or reduced as needed to align with usage requirements. It can subdivide services into smaller units to facilitate installation, following the principles outlined in The Twelve-Factor App (https://12factor.net/).
Nowadays, GIS application development follows OGC Standards: standardized guidelines that define how geospatial data are stored and served. These standards encompass many aspects of geospatial information interoperability. Therefore, the principles of The Twelve-Factor App can be adapted to the design and deployment of GIS applications while ensuring compliance with OGC Standards.
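Factor III of The Twelve-Factor App ("store config in the environment") translates directly to GIS services: the sketch below shows a hypothetical OGC API service reading its configuration from environment variables, so the same container image runs unchanged across environments. All variable names and defaults are illustrative.

```python
import os

# Twelve-Factor-style configuration for a hypothetical OGC API microservice:
# every deployment-specific value comes from the environment, with safe defaults.
def load_config(env=os.environ):
    return {
        "ogc_api_root": env.get("OGC_API_ROOT", "/ogcapi"),
        "postgis_url": env.get("POSTGIS_URL", "postgresql://localhost/gis"),
        "max_features": int(env.get("MAX_FEATURES", "1000")),
    }

print(load_config({}))  # all defaults when nothing is set in the environment
```

Passing the mapping explicitly also makes the configuration trivially testable, another property the Twelve-Factor methodology encourages.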
This session will elaborate on how the principles of The Twelve-Factor App can be harmonized with OGC Standards, as well as the various technologies selected for the design and deployment of applications.
“Designing user experience and user interface for effective map applications.”
Jirayut Narksin, Nichaphat Hongkeaw, Mayurachat Saechan;
General Track
Geographic information technology has become an important part of our daily lives, leading to the development of various map applications. The design of these applications requires special attention, particularly in terms of User Experience (UX) and User Interface (UI) design. Effective UX/UI design significantly impacts user satisfaction and ease of use, contributing to the overall efficiency and modernity of map applications. By understanding user needs through User Research and Usability Testing, we can establish principles and guidelines for creative design that enhance usability and ensure consistency between data presentation and user interaction.
Creating a map application that offers a good user experience involves careful design of map elements, such as the layout of basic map tools, the selection of symbols, the arrangement of data, and the design of interactions. The design must be suitable for different types of usage, such as travel applications, survey applications, or applications for specific uses. Additionally, it must support various devices, including mobile phones, computers, tablets, and other devices with different screen sizes.
Furthermore, techniques that can be applied in the design process are presented in this session to achieve the best results. This includes creating an immersive user experience by strategically using colors, fonts, and layout. Developing engaging and interesting ways to display information will help users feel more connected to the application. It is also important to stay updated with current trends in map application design to ensure that the developed applications are modern and responsive to global changes.
This presentation is suitable for designers, developers, and anyone interested in geographic information technology, especially those involved in developing map applications. It will provide design guidelines that can be applied to various projects, effectively meeting the diverse needs of future users.
“Developing a Web-Based Spatial Decision Support System (SDSS) Using Geoserver”
CHANDAN M C;
Workshop Proposals
This hands-on workshop delves into the creation of a web-based Spatial Decision Support System (SDSS) from the ground up, utilizing Geoserver as a key tool. SDSS development involves the integration of conventional and spatially referenced data, decision logic, and a web-based interface for spatial data analysis. The SDSS architecture comprises components such as Web Processing Service (WPS), Web Feature Service (WFS), Web Mapping Service (WMS), Geoserver/Map-server, and Geo-processing.
Participants will learn how to retrieve map features from a database, encode raw data into defined layers, and assess these layers within the core DSS. Sensitivity analysis aids in selecting the optimal alternative through a decision-making process. The resulting outputs are visualized through styled layers and a user-friendly graphical interface.
The workshop also explores the role of web servers in serving web content, processing HTTP requests, and delivering web pages, including HTML documents, images, style sheets, and scripts. Geoserver, an open-source Java-based software, is employed to view, share, and store spatial data on the web. It supports various spatial data formats and provides interoperability to publish data from diverse sources using open standards.
By the end of this workshop, participants will possess the skills to construct a robust web-based SDSS, empowering them to make informed spatial decisions using Geoserver and other essential web development tools.
“Development of 3D Mapping Library to Facilitate Photo Alignment with 3D Models”
Daisuke Yoshida;
Poster Presentations
In our laboratory, we are conducting research in collaboration with several municipalities to promote digital transformation (DX) in infrastructure maintenance by leveraging new technologies such as drones and deep learning. At the same time, we are broadly applying the research results to fields such as cultural heritage preservation. One of our past initiatives involved measuring the exterior and interior of Kishiwada Castle with multiple 3D laser scanners and making the resulting 3D data available as open data.
In our research on infrastructure maintenance, we are developing a web-based system that allows for the 3D management of infrastructure defects by mapping aerial photographs in 3D onto 3D point cloud data and 3D models obtained from drone surveys. Accurately aligning drone aerial images within a 3D space of real-world coordinates, in both position and angle, requires advanced technology and a significant amount of labor. In this research, to automate and simplify this process to some extent, we have developed a 3D mapping library based on CesiumJS and introduce an example of mapping aerial photographs in 3D onto a 3D model of Kishiwada Castle with real-world coordinates.
By making the process of "photo alignment," which previously required extensive know-how and labor, more user-friendly, we believe that we can significantly reduce the burden on content creators not only in the field of infrastructure maintenance but also in various 3D content fields such as education (creating 3D teaching materials for geography, regional studies, and disaster prevention education) and regional revitalization (creating 3D content for virtual tours).
In the future, in addition to improving performance by revising the source code, we plan to make design improvements so the library is more intuitive to use, and to release it as open-source software.
“Development of Large-scale Trip Analysis toolkits for Vehicle-based GPS trajectories using Apache Spark and Open Data: a case study of taxis in Bangkok, Thailand”
Apichon;
Academic Track (Oral)
Urban planning and mobility analysis have traditionally been studied through observation or questionnaires, which can be time-consuming and costly. However, with the rapid advancement of technology, tracking devices are now being installed on individual vehicles to measure various values, particularly GPS signals. The location data collected is accurate and regularly updated. It can offer valuable insights into people's movements and behavior. However, the amount of trajectory data is substantial and continues to increase over time. Therefore, specialized platforms and skills are needed to analyze it. In this study, we develop large-scale analysis toolkits to extract insights from vehicle-based GPS trajectories. These insights include trip statistics, origin-destination analysis, and identification of hotspots. The toolkits are specifically designed to handle large-scale datasets using Apache Spark, which is an analytics engine for processing large volumes of data. It is capable of distributing tasks across a Hadoop cluster for efficient processing. In our analytics model, we created algorithms to reconstruct trips based on their type of mobility, and we also mapped trip locations using open data such as administrative boundaries and points of interest (POI). Finally, we showcased our approach using real-world taxi data from Bangkok, Thailand. We presented taxi travel patterns, service availability, POI hotspots, and processing performance.
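The trip-reconstruction step can be illustrated with a simple time-gap rule: a vehicle's GPS stream is split into trips wherever consecutive fixes are separated by more than a threshold. In the actual toolkit this kind of per-vehicle logic runs distributed inside Apache Spark; the plain-Python sketch below, with made-up fixes and a 15-minute gap, only shows the idea.

```python
from datetime import datetime, timedelta

# Split one vehicle's time-ordered GPS fixes into trips: a new trip starts
# whenever the gap between consecutive fixes exceeds max_gap.
def split_trips(fixes, max_gap=timedelta(minutes=15)):
    """fixes: list of (timestamp, lat, lon) sorted by time -> list of trips."""
    trips, current = [], []
    for fix in fixes:
        if current and fix[0] - current[-1][0] > max_gap:
            trips.append(current)   # gap too large: close the current trip
            current = []
        current.append(fix)
    if current:
        trips.append(current)
    return trips

t0 = datetime(2024, 1, 1, 8, 0)
fixes = [(t0, 13.75, 100.50),
         (t0 + timedelta(minutes=5), 13.76, 100.51),
         (t0 + timedelta(hours=2), 13.70, 100.49)]   # long gap -> second trip
print(len(split_trips(fixes)))  # -> 2
```

Trip origins and destinations can then be joined against open administrative boundaries and POI data, as described above.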
“Drought Monitoring using Geospatial data”
Chaithra Chandran;
Poster Presentations
Drought is a natural phenomenon that occurs when water availability is significantly below normal levels over a long period and the supply cannot meet the existing demand. Monitoring drought is a very difficult task because of the intrinsic nature of the phenomenon: its spatial and temporal limits, multi-scalar variability and delayed impact. Modern technology has brought advances like remote sensing that can now be used to monitor drought. This is vital in an agricultural country like India, which suffers from at least two droughts in a decade.
In this study, the upper Krishna river basin, lying in the state of Maharashtra, has been chosen as the study area. After an extensive literature survey, many popular drought indices were identified for different types of drought, using both remotely sensed and field data. Two such indices, SPI and NDVI, representing meteorological and agricultural droughts respectively, have been calculated and analyzed to understand the drought scenario in the study area. The data used for these calculations were precipitation data and the MODIS NDVI product (MOD13Q1) obtained for the study area over a study period comprising the Rabi seasons from 2000 to 2012. The results include GIS maps of the two types of droughts, representing their spatial extent, as well as graphical representations of their temporal variation. Further analysis of the indices yielded relationships between them, indicating how each varies with respect to the other. This can be a very useful prelude to establishing a drought prediction model for the region. It also shows how freely available remotely sensed data can be used to monitor drought in the region using these relationships.
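NDVI, the agricultural-drought index used here, is computed per pixel from red and near-infrared reflectance as (NIR - Red) / (NIR + Red). The sketch below uses made-up reflectance values, not MOD13Q1 data.

```python
import numpy as np

# Per-pixel NDVI from near-infrared and red reflectance bands.
# Healthy vegetation reflects strongly in NIR, so NDVI approaches 1;
# stressed or sparse vegetation yields values near 0.
def ndvi(nir, red):
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

print(ndvi([0.6, 0.3], [0.2, 0.25]))  # healthy pixel -> 0.5; stressed pixel -> ~0.09
```

A time series of such values over the Rabi seasons, paired with SPI from precipitation records, is what the study compares.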
“EIA (Environmental Impact Assessment) combined with 3D open-source geospatial technology”
Hakjoon Kim;
General Track
This talk presents a research case of an open source implementation of a task management system based on 3D spatial information web service to efficiently conduct and manage tasks in the field of environmental impact assessment (EIA), which combines very diverse specialties.
“Empowering Citizen Scientists for Safer and Resilient Communities: A Story of Creating a Metro Manila Climate and Disaster Risk Atlas through QGIS and OSM”
Janica Kylle De Guzman;
General Track
Disasters like typhoons, earthquakes, and flooding are inevitable, especially as climate change intensifies. This heightens the need for effective information sharing, which some governments have addressed by sending out timely digital alerts. While large organizations work to prepare communities for disasters, local knowledge often gets overlooked despite its critical role in understanding a hazard’s impact. Residents possess intimate knowledge of their surroundings, which becomes invaluable during emergencies for identifying evacuation routes and understanding the landscape. In creating a disaster risk assessment, selecting data sources is crucial, as demonstrated by the Metro Manila Climate and Disaster Risk Atlas. This atlas, created using QGIS and OSM, assesses hazards in Metro Manila—a region prone to a potential 7.2 magnitude earthquake due to the West Valley Fault System. Leveraging these tools provides a comprehensive view of risks, empowering communities with vital information about the vulnerabilities and resilience of their locales. The project exemplifies how integrating local and digital knowledge fosters a safer, more prepared society.
“Enhancing Temporal Data Visualization: Integrating THREDDS Data Server with 'ol-plus' for Web-GIS Applications”
Suman Sanjel;
Poster Presentations
Various scientific datasets are available worldwide, including remote sensing data from satellites, computational models, reanalysis products, and tabular datasets. With the current widespread use of APIs and web applications, accessing and displaying these datasets online has been facilitated by open-source projects such as the THREDDS Data Server. This web server provides access to datasets through methods like OPeNDAP, CDMRemote, HTTPServer, WMS, and WCS, enabling easier visualization of scientific data via WMS. However, handling temporal datasets on the web remains a challenge.
To address this gap in web-GIS application development, I have developed a JavaScript package named 'ol-plus.' This plugin, designed for use with the open-source mapping library OpenLayers, includes a range of features for managing temporal data. It automatically generates WMS layers for specified datasets, allows toggling between local and UTC time on the time slider, and includes an animation tool for visualizing data changes interactively on the map. Additionally, 'ol-plus' offers functionality to download animations, configure frame rates through an animation panel, and create interactive legends. This tool simplifies the management of layers from THREDDS Data Server versions 4 and 5.
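Behind each temporal WMS layer that 'ol-plus' manages sits a GetMap request with a TIME dimension, as served by THREDDS. The sketch below assembles such a URL; the host, dataset path, and layer name are placeholders, not a real endpoint.

```python
from urllib.parse import urlencode

# Build a WMS 1.3.0 GetMap URL with a TIME dimension parameter, the kind of
# request a time slider issues once per animation frame.
def wms_getmap_url(base, layer, time_iso, bbox, size=(512, 512)):
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": "EPSG:4326", "FORMAT": "image/png",
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": size[0], "HEIGHT": size[1],
        "TIME": time_iso,                       # the temporal dimension
    }
    return base + "?" + urlencode(params)

# Placeholder endpoint and layer, for illustration only.
url = wms_getmap_url("https://example.org/thredds/wms/dataset", "sst",
                     "2024-06-01T00:00:00Z", (-10, 90, 10, 110))
print("TIME=2024-06-01" in url)  # -> True
```

Stepping `time_iso` through the dataset's available timestamps and swapping layer sources is essentially what the plugin's animation tool automates.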
“Enjoying Delicious Meals Using MapLibre: A Journey into Developing a MapLibre Module”
Shinsuke Nakamori;
General Track
In this session, I will introduce a module I developed for clustering icon image markers using MapLibre, with the theme of "finding nearby restaurants that look delicious." This is the first module I’ve ever created, and it’s designed with a very simple structure. Through this session, I hope participants will take away two key messages: that creating what you want to make is enjoyable, and that even if you’re not highly skilled, sharing your work can lead to valuable learning experiences.
“ESTIMATING PADDY YIELD IN SMALLHOLDER FARM SETTINGS USING A SPATIAL HIERARCHICAL APPROACH”
Uma Shankar Panday;
Academic Track (Oral)
Food insecurity ranks among the world's most critical issues. Achieving zero hunger and eliminating all types of malnutrition by 2030 remains a major hurdle. To keep up with the rising food demand by 2050, food production will need to grow by 50% from 2010 levels (Chakraborty and Newton 2011) and by 60% from 2005/07 levels (Alexandratos and Bruinsma 2012). Most of the increased demand is anticipated to be fulfilled by improved yields (Alexandratos and Bruinsma 2012). Cereal crops are essential for the human diet. These grains supply approximately half of the calories and 42% of the protein consumed in low-income countries (CIMMYT 2020). Cereals are crucial food staples, especially in Asia and Africa. Hence, food security is essentially a reflection of cereal crop security within these continents (Goff and Salmeron 2004). However, cereal crop production varies globally, resulting in many countries depending heavily on imports to meet their minimum calorie requirements. The resilience of small-scale farmers is vital for the supply of food to both rural and urban communities. Smallholder farmers control the majority of the food production system globally. In 83 countries throughout Latin America, sub-Saharan Africa, and South and East Asia, they produce approximately three-quarters of food calories (Samberg et al. 2016).
Crop yield estimation using high-resolution satellite images is widely used. However, the combination of smaller agricultural farm parcels and a variety of crops makes it difficult to accurately estimate yields using publicly available satellite remote sensing data. In addition, smallholder farmers cannot afford very high-resolution satellite images for temporal monitoring of their crops and yield estimation. In addressing these challenges and associated knowledge gaps, this study developed a data fusion method for estimating the yield of paddy. Data fusion involves integrating ground-based data gathered by youths and farmers through the Volunteered Geographic Information (VGI) approach, along with ultra-high-resolution images captured by Unmanned Aerial Vehicles (UAVs), to predict crop yield on a farm scale. In addition, a hierarchical upscaling model is illustrated for predicting paddy yield at larger spatial and administrative levels (like a municipality, a district, or a province) by utilizing satellite remote sensing data and soil characteristics.
This study's foundational architecture characterizes real-world features in the form of object hierarchy with a notion of ‘super object’ – ‘object’ – ‘sub-object’. The study estimated the paddy yield at three different hierarchical levels. i) Yield estimation at sampling locations inside plots using crop cutting. This represents the bottom of the hierarchy that is symbolized as a point. ii) Yield estimation by developing a relationship among yields at sampled locations (from the first case), farm management factors, and parameters derived from UAV images. This provides the yield at the farm level where UAV has been flown and representative crop yields have been measured at the sampled locations. It is symbolized as a line in the hierarchy. iii) Finally, the satellite remote sensing data is used together with soil characteristics and the yield data obtained from the UAV to estimate the yield for the entire administrative unit. This is referred to as a polygon in the spatial hierarchy yield estimation model. Thus, this study dealt with scale issues (Diane et al. 2004) at the three levels: sub-plot, farm, and administrative unit (such as a municipality, a district, or a province) levels.
Farmers and volunteers used Open Data Kit (ODK) Collect to collect data on crop characteristics, farm management practices, and yield measurements. These data were used to build relationships with UAV-derived data for crop yield estimation. RGB and Multispectral cameras mounted on UAVs were used to acquire ultra-high-resolution images which are used to derive plant height and several vegetation indices (VIs). The paddy yield was estimated at the farm level using UAV-derived data such as plant height, VIs, and farm management practices. At the municipality level, the yield was estimated using a spatial hierarchical method utilizing satellite-derived VIs, soil characteristics, and the UAV-derived yield as an intermediate-level input.
An R2 of 0.90 was obtained between ground-measured and UAV-derived plant height. Likewise, an R2 of 0.72 was attained between ground-measured and UAV-derived Normalized Difference Vegetation Index (NDVI). The yield at the farm level using UAV-derived VIs and farm management data was estimated with R2 of up to 0.82 and a standard error of up to 0.47 tons/ha. At the municipal level, the yield was estimated with an R2 of up to 0.78 and a standard error of up to 0.29 tons/ha. The mean and standard deviation of the yield in the municipality were found to be 5.23 tons/ha and 0.70 tons/ha respectively.
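The R2 values reported above measure the fraction of variance in ground-measured yield explained by the model predictions. A minimal sketch with made-up yield values (tons/ha):

```python
import numpy as np

# Coefficient of determination: 1 - (residual sum of squares / total sum of squares).
def r_squared(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)          # model residuals
    ss_tot = np.sum((observed - observed.mean()) ** 2)    # variance around the mean
    return 1 - ss_res / ss_tot

# Illustrative ground-measured vs. model-predicted paddy yields, tons/ha.
obs = np.array([4.8, 5.1, 5.6, 5.0])
pred = np.array([4.9, 5.0, 5.5, 5.2])
print(round(r_squared(obs, pred), 2))  # -> 0.8
```

The same statistic, computed at each level of the hierarchy, is what links the sub-plot, farm, and municipality estimates above.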
The spatial hierarchical approach is useful where yield estimation with the direct use of publicly available satellite remote sensing data is impractical. This is mainly due to the larger sampling area requirement, which is inappropriate for smallholder farming systems with tiny plot sizes. Likewise, the approach does not demand expensive instruments for threshing collected crop samples, which become mandatory when samples are collected from larger areas, adding additional costs. The study relies on three pillars: i) free and open-source software and hardware platforms for in-situ data collection, ii) earth observation data including in-situ observations collected using the VGI approach, and iii) the integration of data using the spatial hierarchy approach. The research utilized the benefits of low-cost communication devices such as smartphones, low-cost sensors, and cloud-based data storage platforms. It leveraged the potential of humans as sensors in gathering crop characteristics, farm management practices, and crop yield samples, among several other data sets, through volunteers and farmers using the VGI approach. The input from volunteers and farmers would be instrumental not only in data gathering; more importantly, farmers would take ownership of the method, making it sustainable and more reliable for the growers. Moreover, the study utilized the potential of sensors on UAVs and satellite remote sensing for estimating the yield at the farm and administrative unit levels respectively.
This study would assist in understanding food production and security situations at the selected administrative levels and in making data-driven mitigation plans. The utilization of low-cost instruments, free and open-source software, publicly available satellite remote sensing images, and the active participation of farmers in farm data collection makes it a sustainable solution for low-income countries. The developed methods could be replicated, with the adaptation of local variables, in other parts of the country and the world, or applied to other cereal cropping systems.
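As a rough illustration of the regression step linking UAV-derived vegetation indices to yield, the sketch below fits an ordinary least-squares line and reports R2; the NDVI and yield values are invented for the example and are not the study's data.

```python
def fit_linear(x, y):
    """Ordinary least-squares fit y = a*x + b, plus the R^2 of the fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    return a, b, r2

# Hypothetical farm-level samples: UAV-derived NDVI vs measured yield (t/ha)
ndvi_samples = [0.55, 0.60, 0.68, 0.72, 0.80]
yield_t_ha = [4.1, 4.6, 5.2, 5.5, 6.2]
a, b, r2 = fit_linear(ndvi_samples, yield_t_ha)
```

In the hierarchical approach described above, a farm-level fit of this kind would then be scaled up to the administrative level using satellite-derived indices.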
“Expectation Testing for Web Map Applications”
Parichat Namwichian;
Poster Presentations
As part of web map application development, ensuring functionality and accuracy is paramount to delivering a reliable user experience. This abstract outlines a systematic approach to expectation testing for web maps, focusing on validating that these applications meet predefined criteria and user expectations.
Expectation testing involves setting specific criteria that web map applications must meet to ensure they perform as intended. This process includes validating map data accuracy, user interface responsiveness, and the effectiveness of interactive features such as zooming, panning, and layer management. The goal is to ensure that the web map not only displays information correctly but also responds appropriately to user interactions and integrates seamlessly with other system components.
By implementing a robust expectation testing framework, developers can identify and address potential issues before deployment, ensuring that web map applications deliver accurate and efficient performance. This process not only enhances the quality of the application but also builds user trust by meeting or exceeding their expectations.
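A minimal sketch of what such an expectation check might look like, assuming the test harness can capture the map state (center, zoom, visible layers) after a scripted interaction; the state dictionaries and key names here are hypothetical:

```python
def check_expectations(state, expectations):
    """Compare an observed web-map state against predefined expectations.

    Returns a list of human-readable failures (empty list = all passed).
    `state` and `expectations` are plain dicts, e.g. captured from the
    map client after a scripted zoom/pan interaction.
    """
    failures = []
    for key, expected in expectations.items():
        actual = state.get(key)
        if key == "center" and isinstance(expected, tuple):
            # allow a small tolerance on coordinates
            if max(abs(a - e) for a, e in zip(actual, expected)) > 1e-6:
                failures.append(f"center {actual} != {expected}")
        elif actual != expected:
            failures.append(f"{key}: {actual!r} != {expected!r}")
    return failures

# Hypothetical state captured after the test script pans to Bangkok
observed = {"center": (100.5018, 13.7563), "zoom": 12,
            "layers": ["basemap", "flood_extent"]}
expected = {"center": (100.5018, 13.7563), "zoom": 12,
            "layers": ["basemap", "flood_extent"]}
```

Running checks like this in a pre-deployment pipeline is one way to turn the predefined criteria above into automated, repeatable tests.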
“Experience Digital Twin applications enhanced by AI prompts”
Hanjin Lee, Hyeeun Ahn, Jaeseon Kim, Heejin Ha;
General Track
The range of AI applications in the geospatial information field is diverse, spanning object detection, area extraction, change detection, and super-resolution. In our industry, we collectively refer to these technologies as GeoAI.
However, these technologies remain primarily confined to specialized groups, making it difficult to consider them as universal technologies ready for everyday use.
In light of this, we sought to explore areas that could be easily accessible to the general public. Consequently, we developed 'magoGPT', an application enabling users to manipulate maps and interact with 3D objects using natural language in a digital twin environment.
Built on a FOSS 3D architecture centered on CesiumJS, it uses 3D Tiles buildings, terrain derived from high-resolution DEMs, and multi-layer visualization. It also incorporates artificial intelligence techniques such as Large Language Models (LLMs), Speech-to-Text (STT), and Natural Language Processing (NLP).
In this presentation, we'd like to introduce magoGPT, the technology behind it, and the development process.
“Exploring Segment Anything Model's Potential in Geospatial Data: Case Studies for Landslide and Forest Canopy Detection”
Nobusuke Iwasaki, Ayaka Onohara;
General Track
In recent years, the availability of geospatial data has significantly increased, with aerial photographs and Digital Elevation Models (DEMs) becoming widely accessible as open data. Additionally, the acquisition of high-resolution image data through drones has become more feasible and commonplace. However, extracting meaningful information from these vast datasets remains a labor-intensive process, often requiring significant time and resources.
While various deep learning techniques have been employed to address this challenge, they typically demand extensive effort in collecting and preparing training data. In light of these constraints, the Segment Anything Model (SAM) has emerged as a promising solution. SAM, a recent development in the field of foundation models, offers the advantage of zero-shot segmentation without the need for task-specific training. Moreover, its Apache-2.0 license ensures accessibility for a wide range of applications.
This presentation aims to demonstrate the potential of SAM in the realm of geospatial information processing. We will showcase practical applications of SAM in analyzing geospatial data, with a focus on two critical areas: landslide detection and forest canopy mapping. These case studies will illustrate how SAM can efficiently process and extract valuable insights from complex geospatial datasets, potentially enhancing the efficiency and effectiveness of our approach to environmental monitoring and disaster risk assessment.
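To give a concrete flavour of working with SAM output: its automatic mask generator returns a list of mask records with fields such as 'area' and 'stability_score', which can be filtered before any geospatial post-processing. The sketch below uses mock records in place of real model output (running SAM itself requires the model weights), and the filter thresholds are illustrative:

```python
def filter_masks(masks, min_area=500, min_stability=0.9):
    """Keep only masks large and stable enough to be, e.g., a canopy or
    landslide candidate. Each mask is a dict shaped like the output of
    SAM's SamAutomaticMaskGenerator ('area' in pixels,
    'stability_score' in [0, 1])."""
    return [m for m in masks
            if m["area"] >= min_area and m["stability_score"] >= min_stability]

# Mock records standing in for SamAutomaticMaskGenerator.generate(image)
masks = [
    {"area": 1200, "stability_score": 0.97},  # plausible canopy segment
    {"area": 80,   "stability_score": 0.99},  # too small -> likely noise
    {"area": 3000, "stability_score": 0.55},  # large but unstable segment
]
kept = filter_masks(masks)
```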
“Flood Risk Susceptibility Mapping Using Google Earth Engine and GIS in the Koshi River Basin”
Nishan Bhattarai;
Academic Track (Oral)
Open-source software is crucial in flood analysis, providing affordable tools for detailed assessments. This study uses Google Earth Engine (GEE) and QGIS to conduct time series analysis and prepare flood inventories for the Saptakoshi River Basin with Sentinel-1 Ground Range Detected (GRD) data. Flood extents were derived by analyzing sequential satellite images from 2019 to 2023. Sentinel-1 data, filtered by date and processed using the Interferometric Wide (IW) mode and VH polarization, provided high-resolution imagery suitable for precise flood mapping. Focal median filtering reduced speckle noise, enhancing flood delineation. Additionally, DEM processing involved calculating slopes and masking out areas with high slopes to focus on low-lying flood-prone regions. GEE enables the collection and processing of historical flood data, which is crucial for preparing the flood inventory. Further, QGIS was used to refine the analysis through raster reclassification and weighted sum tools to generate flood risk susceptibility maps. Flood patterns in the Saptakoshi River were also visualized, highlighting the spatial and temporal dynamics of floods.
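The core per-pixel logic (low VH backscatter indicating smooth open water, restricted to low-slope terrain) can be sketched in plain Python; the threshold values below are illustrative, not the study's calibrated parameters:

```python
def flood_mask(vh_db, slope_deg, vh_threshold=-20.0, max_slope=5.0):
    """Flag likely flooded pixels: low VH backscatter (smooth open water)
    on low-slope terrain. vh_db and slope_deg are row-major 2D lists of
    backscatter (dB) and slope (degrees); thresholds are illustrative."""
    return [
        [(v < vh_threshold and s <= max_slope) for v, s in zip(vrow, srow)]
        for vrow, srow in zip(vh_db, slope_deg)
    ]

# Toy 2x3 scene: VH backscatter in dB and slope in degrees
vh    = [[-24.0, -15.0, -22.0],
         [-25.5, -21.0, -10.0]]
slope = [[ 2.0,   1.0,  12.0],
         [ 0.5,   3.0,   1.0]]
mask = flood_mask(vh, slope)
```

In the actual workflow this thresholding runs server-side in GEE on speckle-filtered imagery, but the decision rule per pixel is the same.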
“Flood Susceptibility Mapping in Nepal’s High Mountains Using Band Ratios and Machine Learning on Google Earth Engine”
Narayan Thapa;
Poster Presentations
This study presents a comprehensive approach to flood susceptibility analysis in Nepal's high mountain regions, specifically focusing on the Melamchi River using Google Earth Engine. Utilizing Sentinel-2 data, three different band ratios - Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), and Water Ratio Index (WRI) - were employed to map flash flood occurrences. Each product was meticulously compared to ascertain the most effective band ratio for such terrain.
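For reference, the three band ratios can be computed from per-band reflectances as follows; the WRI formulation and the sample reflectance values are given here as common conventions in the literature, not as the study's exact parameters:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters formulation)."""
    return (green - nir) / (green + nir)

def wri(green, red, nir, swir):
    """Water Ratio Index; values > 1 commonly indicate open water."""
    return (green + red) / (nir + swir)

# Hypothetical Sentinel-2 surface reflectances for an open-water pixel
g, r, n, s = 0.08, 0.06, 0.03, 0.02
```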
Subsequently, this data was integrated with other parameters including slope, Digital Elevation Model (DEM), hydrological conductivity, land cover, landform, and accessibility to map the flood susceptible areas. The susceptibility mapping was conducted using the Google Earth Engine and the machine learning algorithm Random Forest.
The results of this study provide valuable insights into threshold determination and flood susceptibility, contributing to more effective flood management strategies in Nepal's high mountain regions. This research underscores the power of combining band ratios and machine learning algorithms in environmental hazard analysis.
“FOSS4G and Sustainable Development Goals in Asia and the Pacific”
Hamid Mehmood;
Keynote Talk
Large Language Models (LLMs) have revolutionized numerous aspects of modern life, demonstrating remarkable capabilities in language processing, code generation, and knowledge synthesis. Their potential extends to supporting the achievement of various Sustainable Development Goals (SDGs), offering innovative approaches to tackling complex global challenges. One promising application area is the mapping and monitoring of phenomena measurable through Earth Observation (EO) data. It is estimated that around 40 of the 169 SDG targets and 30 of the 232 SDG indicators could benefit from the insights provided by EO data analysis. The use of artificial intelligence (AI) for EO data analysis can further increase the number of SDG indicators that can be monitored with higher accuracy and frequency.
In this context, research is underway to develop multimodal LLMs capable of directly processing EO data. However, these models are often computationally expensive to train, develop, and maintain, making them less feasible for low-capacity, high-risk countries that urgently need technological solutions for disaster mitigation. To address this challenge, we introduce SATGPT (accessible at satgpt.net), an innovative solution that leverages the current capabilities of LLMs and integrates them with cloud computing platforms and EO data. SATGPT represents a fully functional, innovative spatial decision support system designed for rapid deployment, particularly in resource-limited contexts.
This talk presents an instance of SATGPT configured for flood mapping. It simplifies the process with a user-friendly interface requiring only a prompt specifying flood duration and location. SATGPT leverages LLMs to generate GEE code dynamically, access historical databases, or perform unsupervised classification to detect flooded areas. This innovative integration of LLMs with GEE enhances the speed, accessibility, and real-time capabilities of flood mapping, making it more accessible to non-specialists and supporting resilient disaster management practices.
Furthermore, to build the capacity to use these technologies effectively, the talk discusses the development of an online, free, and self-paced course titled "Introduction to Geospatial Data Analysis with ChatGPT and Google Earth Engine." This course introduces participants to the fundamentals of ChatGPT and the Earth Engine Code Editor platform, empowering them to process and interpret geospatial data effectively outside of SATGPT. The innovative aspect of the course is its development of the Geo-prompt engineering (GPE) concept, which focuses on using spatial, temporal, and satellite sensor-specific information in the prompt engineering process. The course aims to foster broader adoption of SATGPT and similar tools, equipping users with the knowledge and skills needed to leverage advanced technologies in disaster management. This talk is structured to provide a comprehensive overview of SATGPT and its contribution to enhancing flood mapping and disaster management in the Asia-Pacific region.
“From Complexity to Clarity: An Intuitive 3D Map Application Development Experience with Cesium and Svelte”
SUDA;
General Track
This session will explore how combining the powerful 3D mapping capabilities of Cesium with the intuitive and efficient web framework Svelte can simplify the development of 3D map applications. We will demonstrate practical examples, such as data binding and custom stores, to show how these tools can make working with Cesium more straightforward and intuitive. Participants will gain new insights into using Svelte and Cesium together, making complex 3D geospatial projects more accessible and manageable. This session is ideal for frontend engineers looking to enhance their development experience in the growing field of 3D mapping.
“From Data to Insights: The Impact of Generative AI on IoT-Based Environmental Monitoring”
Dongpo Deng;
Academic Track (Oral)
The integration of Internet of Things (IoT) technology with environmental monitoring systems has significantly enhanced the ability to collect real-time data from diverse and remote locations. However, the challenge lies in efficiently analyzing and interpreting this vast amount of data to make informed decisions. This paper explores the application of Generative AI in IoT data analysis for environmental monitoring. Generative AI models, with their advanced natural language processing capabilities, offer a novel approach to processing and understanding complex data patterns. By leveraging Generative AI, it is possible to automate the identification of critical environmental changes, predict trends, and provide actionable insights with unprecedented accuracy. This study demonstrates how Generative AI can enhance data analytics in environmental monitoring through case studies that highlight improvements in air quality assessment (e.g. PM 2.5). The findings suggest that Generative AI not only streamlines the data analysis process but also enhances the reliability and responsiveness of environmental monitoring systems. Consequently, this research underscores the potential of Generative AI to transform IoT-based environmental monitoring, promoting more proactive and effective environmental management practices.
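As a toy example of the kind of automated change detection such a pipeline could build on, the sketch below flags PM2.5 readings that jump well above a short rolling baseline; the window size, factor, and readings are all hypothetical:

```python
def pm25_anomalies(readings, window=3, factor=1.5):
    """Flag readings that exceed `factor` times the mean of the previous
    `window` readings -- a minimal stand-in for the automated change
    detection an AI-assisted monitoring pipeline might alert on."""
    flags = []
    for i, value in enumerate(readings):
        if i < window:
            flags.append(False)  # not enough history for a baseline yet
            continue
        baseline = sum(readings[i - window:i]) / window
        flags.append(value > factor * baseline)
    return flags

# Hypothetical hourly PM2.5 readings (micrograms per cubic meter)
hourly_pm25 = [12.0, 14.0, 13.0, 13.5, 40.0, 15.0]
flags = pm25_anomalies(hourly_pm25)
```

In a full system, flagged readings would feed a language model that turns them into the plain-language insights the abstract describes.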
“GeoCambodia: A web application to visualize Cambodia Then and Now through aerial photographs and satellite images.”
Chamroeun YORNGSOK;
General Track
The French National Geographic Institute (IGN) came to Cambodia between 1952 and 1954 to carry out a large-scale photographic project, taking around 11,000 aerial images over a large part of the country.
Today, this collection represents an exceptional archive that takes us back 70 years to the urban and rural landscapes of the time. In addition to these early aerial photographs, there is a higher-resolution shot of Phnom Penh in 1993, with incredible details of life in the streets of the capital. This archive is of great interest in many fields, including history, geography, archaeology, urban planning, and ecology. The purpose of the project is to digitize this archive and make it accessible and usable free of charge in Cambodia.
All the images were digitized and supplied to the KHmer Earth OBServation (KHEOBS) laboratory to create orthophotographs. Processing has been completed for the municipality of Phnom Penh, for the years 1953 and 1993. In order to make these images viewable by anyone, a web application, GeoCambodia, was developed to visualize Cambodia then (past) and now (present). The user-friendly interface includes an interactive slider to navigate and compare the old 1953 and 1993 aerial orthophotographs with the recent Google Earth images. Also, vector outlines of buildings from 1993, produced by the Atelier Parisien d’Urbanisme (APUR), have been integrated to enable visitors to click on a building and view the APUR’s descriptive architectural sheets. Other functions and the extension of the aerial images to the whole of Cambodia are still to come in this interface.
GeoCambodia.org targets anyone who is interested in aerial and satellite imagery and how Cambodia evolves through time and space, especially geography enthusiasts.
“Geographic object-based image analysis with Orfeo toolbox for detecting illegal cultivation on public land”
Yong Huh;
Poster Presentations
The illegal use of public lands poses significant problems for environmental conservation and land management. This study employs publicly available spatial data provided by government agencies, together with QGIS and the Orfeo Toolbox (OTB), to detect and monitor such illegal activities. By integrating high-resolution aerial imagery with the Geographic Object-Based Image Analysis (GEOBIA) capabilities provided by the OTB, land use activities such as the construction of buildings or the cultivation of crops can be extracted from the imagery and then compared against land management spatial data in the QGIS environment. The method comprises segmenting the imagery into meaningful spatial objects with the GEOBIA technique, accessing public administration data, geo-referencing these data into spatial data, and comparing the objects against the spatial data with several spatial queries using QGIS functions to identify illegal activities on public lands. The proposed method was applied to the Gangwon region in South Korea, and the accuracy evaluation confirmed the possibility of automating public land management with public spatial data provided by the government.
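The final comparison step (testing whether a detected object falls on public land) reduces to point-in-polygon queries, which QGIS performs internally; a bare-bones ray-casting version, with a hypothetical parcel and object centroids, might look like this:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside `polygon` (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does the edge straddle the horizontal ray at height y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical public parcel (a unit square) and centroids of segmented objects
parcel = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
centroids = [(0.4, 0.6), (1.5, 0.2)]
on_public_land = [point_in_polygon(x, y, parcel) for x, y in centroids]
```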
“Geospatial climate and environmental monitoring for health surveillance”
Vincent HERBRETEAU;
General Track
Introduction/Background:
The consequences of climate and environmental changes on health are now obvious to communities, institutions and researchers alike. The impact of these changes must now be considered in an operational way in health management, in order to anticipate their effects, prevent them or mitigate them where possible. In practice, there is very little routine real-time use of space observation data in the public health sector, despite the increasing availability of space data. Indeed, space observation technologies have been constantly evolving since the 1970s, and now offer a wide range of data at different spatial, temporal and radiometric resolutions. More recently, access to data acquired by satellite has greatly improved, with free, massive data and easier processing. This offers the possibility of supporting health surveillance at various scales, which will be explored in this presentation using different examples from South-East Asia.
Main Aim/Purpose:
This presentation aims at raising the issue of integrating environmental and climate change indicators into health monitoring, by presenting practical tools and case studies that are operational in Southeast Asia. It will provide an update on the needs for further operational implementation that will benefit health monitoring.
Methodology and Findings:
The presentation will focus firstly on the development of a web platform that aims at modeling suitable climate and environmental conditions for leptospirosis through Earth observation, over the agglomeration of Yangon, Myanmar. Leptospirosis is a bacterial zoonosis that remains rarely diagnosed in Southeast Asia despite high morbidity, as shown in several active investigations. It is strongly associated with water and seasons, with epidemics following heavy rainfall and flooding episodes. Within the framework of the ECOMORE 2 Project (coordinated by Institut Pasteur and funded by the French Agency for Development - AFD), the locations of confirmed leptospirosis cases (vs non-leptospirosis controls) enrolled in 2019 and 2020 were analyzed retrospectively. Time series of vegetation, water, and moisture indices from Sentinel-2 satellite imagery (available at 10 meters spatial resolution, every 5 days, from the European Space Agency's Copernicus Program) were produced to describe the dynamics of the environment around the locations of residence. This process relies on the Sen2Chain processing chain, developed in Python and openly available (https://framagit.org/espace-dev/sen2chain). The most relevant indices were used to build a spatiotemporal prediction model of positive vs negative locations. The model was spatialized over landscape units that are homogeneous in terms of land use and that cover the whole study area. The acquisition of Sentinel-2 images, their processing, and the modelling were then automated to run as soon as a new image becomes available (every 5 days). An online platform, named LeptoYangon (https://leptoyangon.geohealthresearch.org/), was developed with R and R-Shiny to display this dynamic mapping of suitable environments and inform the epidemiologists and physicians of the study, within the framework of the ClimHealth project (funded by CNES and accredited by the Space Climate Observatory International Initiative).
This fully automated tool allows retrospective consultation at any date since the first Sentinel-2 image became available in March 2016 (over 7 years). By clicking on the map, the user can select a landscape unit and view the temporal dynamics of the risk for that unit (i.e. whether the risk is increasing or decreasing). The user can also view the vegetation, water and moisture indicators to examine the underlying environmental data more specifically. This platform was designed to be used by epidemiologists and physicians to visualize the most at-risk areas and those where the risk is increasing, in order to raise physicians' awareness of leptospirosis (often confused with other fevers).
Discussion:
Implementing this tool in other territories faces 1) methodological challenges regarding the volume of satellite data to be processed and 2) the need for detailed knowledge of the ecology of leptospirosis and exposure factors to adapt the models to different contexts. However, this already operational tool opens the way to the development of climate and environmental monitoring systems that increase the vigilance of healthcare workers and populations regarding the risk of leptospirosis. It also shows the relevance of developing specific tools for other diseases associated with climate and environment. At a country or regional scale, it is mainly meteorological variations and climatic anomalies that are relevant to the surveillance of certain diseases, such as dengue fever. The presentation will finally review the development of a national early warning system in Cambodia based on the acquisition of such climatic data.
“Hazard Map Game: Learn and Play with Open Data for a New Approach to Disaster Preparedness for Kids”
SUDA;
Poster Presentations
This presentation introduces an innovative educational game designed to teach children how to assess disaster risks based on geographical features and hazard maps. Utilizing open data and interactive digital signage, the "Hazard Map Game" transforms traditional paper-based hazard map education into an engaging, digital learning experience. Through this game, children can intuitively learn about various disaster risks such as tsunamis, floods, and landslides, while competing in quizzes and earning points. The game aims to deepen children's understanding of disaster preparedness and risk assessment, fostering a generation better prepared to manage natural hazards.
“Historical Analysis of Post-Monsoon Rice Fields in Myanmar with Optical and Radar Data”
Hafsah Fatihul Ilmy, Sarah Kanee, Daniel Marc dela Torre;
Academic Track (Oral)
Mapping and tracking rice cultivation is crucial for agricultural planning and food security, particularly in Myanmar, where rice is one of the major crops. Myanmar has been in conflict in recent years, making a large number of its population vulnerable to food insecurity. However, the political situation has made it difficult to acquire reliable estimates of agricultural production in several areas of the country. Alternative sources of information, such as satellite imagery and remote sensing, are needed to supply accurate data for crop management and to aid humanitarian agencies in prioritizing the distribution of food aid and better support for affected communities.
This study leverages Google Earth Engine’s open data and tools to map post-monsoon rice fields in Myanmar using optical and radar data from Sentinel-1 and Sentinel-2 from 2018 to 2021. The primary objective is to generate comprehensive maps of rice fields, revealing patterns and changes in rice production over the years during the post-monsoon season. This analysis provides insights into the impacts of climate variability and agricultural policies, aiming to support sustainable practices and enhance food security.
The study focused on eight main rice-growing states and regions in Myanmar, analyzing rice phenology and cultivation practices. The methodology involved combining radar data from Sentinel-1, which penetrates clouds and aids in detecting rice phenology, with optical data from Sentinel-2, which offers spectral information for identifying vegetation and understanding growth stages. The satellite imagery underwent preprocessing to align spatially, reduce cloud cover, and correct atmospheric effects.
Using local knowledge of rice, training datasets were prepared and interpreted in Google Earth Engine and supplemented with limited field campaigns. To help capture growth stages and distinguish rice fields from other crops, spectral indices — such as the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI) — were included as well as topographical data such as elevation and slope.
A random forest classifier was then trained to create rice probability maps, and a probability threshold of greater than 60 percent was used to determine rice growth. The model demonstrated an average overall accuracy of approximately 87 percent. The estimated rice area was 1,265,003 hectares in 2018, 1,340,308 hectares in 2019, 1,197,913 hectares in 2020, and 1,175,431 hectares in 2021. According to our estimates, Ayeyarwady consistently produces around 400,000 hectares of rice a year, making it the region with the highest rice production. The model results are comparable to published government figures, providing additional validation for this method as a reliable and efficient way to monitor rice production. The study revealed decreasing post-monsoon rice areas, with notable exceptions that may be attributed to climate variability, transplanting timing, and market shifts. Understanding these trends is essential for developing adaptive strategies that can mitigate the impacts of these factors on rice production and ensure food security. This work was implemented under the SERVIR Southeast Asia program, a joint USAID and NASA initiative. To promote transparency and accessibility, seasonal rice area estimates are published and available at SERVIR ADPC Publications (https://servir.adpc.net/publications). Additionally, rice maps can be accessed through the Myanmar Landscape Monitoring Dashboard (https://myanmar-me-servir.adpc.net), a public portal designed to disseminate this crucial information. The integration of optical and radar imagery in Google Earth Engine provides an effective approach for detecting post-monsoon rice and underscores the benefits of open-access data for advancing geospatial analysis and promoting sustainable agricultural practices, most importantly in data-scarce or conflict-affected regions. This approach could also offer a scalable and replicable model for other regions facing similar challenges.
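The thresholding and area-accounting step described above can be sketched as follows; the probability values are toy inputs, and the 10 m pixel size reflects Sentinel-resolution mapping rather than the study's exact grid:

```python
def rice_area_ha(prob_map, pixel_size_m=10.0, threshold=0.60):
    """Convert a per-pixel rice probability map into total rice area in
    hectares, counting pixels whose probability exceeds `threshold`
    (the >60% rule). A 10 m x 10 m pixel is 0.01 ha."""
    pixel_area_ha = (pixel_size_m * pixel_size_m) / 10_000.0
    rice_pixels = sum(1 for row in prob_map for p in row if p > threshold)
    return rice_pixels * pixel_area_ha

# Toy 2x2 probability map from a random forest classifier
probs = [[0.95, 0.40],
         [0.70, 0.61]]
area = rice_area_ha(probs)
```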
The use of advanced remote sensing technologies and machine learning algorithms represents a significant step forward in agricultural monitoring and planning, paving the way for more resilient and sustainable food systems.
“How a FOSS4G-Born Business Grew in Japan: The Journey of MIERUNE”
Yasuto FURUKAWA;
General Track
When considering the sustainability of open source as a future digital public good, the cycle of success and contribution in the business sector becomes crucial.
MIERUNE was born from the FOSS4G community in 2016 and has spent the past eight years solving client challenges as a location-based systems integrator, growing steadily in the Japanese market.
Through these business activities, MIERUNE not only actively gives back to the FOSS4G community—both technically and financially—but also creates local employment for GIS engineers, helping to build a sustainable society.
In this presentation, we will share specific examples of the challenges we have faced to support the growth and development of FOSS4G companies in the Asian region, thereby contributing to the sustainability of the whole community.
“How Representative is Open Map Data? Can I Trust It?”
Ashmeera Dahal;
Poster Presentations
Many open datasets have been released so far: Microsoft's open building footprints, Google Open Buildings, existing OpenStreetMap buildings, and Overture buildings. The question is: how representative are these data? My case study is in Nepal, where I compared how well buildings are represented against population and household counts from government data, and also compared the spatial representation across the available datasets, to figure out which one is the closest approximation of reality. People say that globally released datasets are more accurate in the US and Europe, but are they also relevant in developing countries? Join our talk to find out!
“Hydrogeophysical Analysis of Vertical Electrical Soundings for Groundwater Potential and Aquifer Vulnerability Evaluation in the Federal Capital Territory, Abuja, Nigeria”
DANLAMI IBRAHIM;
Academic Track (Oral)
According to the United Nations World Water Development Report, groundwater accounts for 26% of the world's renewable freshwater, with around 2.5 billion people relying primarily on it for basic water needs. The most realistic and cost-effective strategy to increase universal access to clean water, meet the 2030 Sustainable Development Goals (SDGs), and minimize climate change impacts is the broad exploitation and management of groundwater. The study area is Nigeria's capital, Abuja, generally characterized by moderate precipitation and few surface water sources. The water treatment plant, with a capacity of 10,000 cubic meters per hour of treated water, was designed 34 years ago to support a population of 500,000 people. However, due to population growth and urbanization, the water supply no longer meets demand. Groundwater demand and consumption in Abuja have increased significantly over the last decade due to rapid population expansion, urbanization, and industrialization. Understanding groundwater potential and aquifer vulnerability is critical for sustainable resource management.
Geologically, Abuja is underlain by Precambrian rocks of the Nigerian Basement Complex, which cover approximately 85% of the land surface, and sedimentary rocks, which cover approximately 15%. In the study area, four significant lithologic units are visible: the Older Granites, the Metasediments/Metavolcanics, the Migmatite-Gneiss Complex, and the Nupe sandstones of the Bida Basin, which occupy the southwestern region of the territory.
This study aims to map groundwater potential and aquifer vulnerability zones using a hydrogeophysical approach that combines geoelectrical resistivity, through vertical electrical sounding (VES), with geographic information system (GIS) techniques. With a maximum current electrode separation (AB/2) of 100 m, the Schlumberger electrode configuration was used to acquire field resistivity data at 823 locations across the study area using a DC resistivity meter (Campus Ohmega).
The resistivity method works by passing an electric current into the ground through two electrodes and measuring the consequent potential difference across two other electrodes. The electrode spacing gradually increases while the electrode array's center point remains fixed. As the current electrode spacing grows, the current penetrates deeper into the ground, and the apparent resistivity reflects the resistivity of the deeper layers as well. The resistance is estimated as the ratio of potential difference to current, in ohms (Ω). Using a global positioning system (GPS), the absolute coordinates of the survey points (VES) were determined.
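The apparent-resistivity computation for a Schlumberger array can be written out explicitly using the standard geometric factor; the measurement values below are illustrative, not field data from this survey:

```python
import math

def schlumberger_apparent_resistivity(delta_v, current, half_ab, half_mn):
    """Apparent resistivity (ohm-m) for a Schlumberger array:
    rho_a = K * (dV / I), with geometric factor
    K = pi * (L^2 - l^2) / (2 * l), where L = AB/2 and l = MN/2."""
    k = math.pi * (half_ab**2 - half_mn**2) / (2.0 * half_mn)
    return k * (delta_v / current)

# Illustrative sounding point: AB/2 = 100 m (the survey maximum), MN/2 = 10 m
rho_a = schlumberger_apparent_resistivity(
    delta_v=0.05, current=0.2, half_ab=100.0, half_mn=10.0)
```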
Three to five subsurface geoelectrical layers were identified in the research area with the aid of the IPI2Win software. Vertical electrical sounding (VES) data are often interpreted using IPI2Win, a user-friendly geophysical package designed to process resistivity data and generate one-dimensional models of subsurface layers. Layer resistivity and thickness were estimated by iterating the model against the observed field data acquired with the Schlumberger array. The H-type sounding curve is the most dominant among the identified curve types.
The interpreted data were used to determine parameters including Depth to Bedrock, Transverse Resistance, Longitudinal Conductance, Reflection Coefficient, and Layer resistivity. Using scaling criteria, the longitudinal conductance was used to determine the aquifer protective Capacity (Vulnerability), and the result revealed the dominance of moderate vulnerability across the study area.
The groundwater potential zones in the research area were characterized based on criteria established by previous authors in this field: areas with overburden thickness ≥ 30 m and reflection coefficient < 0.8 were classified as very high groundwater potential; areas with overburden thickness ≥ 13 m and reflection coefficient < 0.8 as high; areas with overburden thickness ≥ 13 m and reflection coefficient ≥ 0.8 as moderate; areas with overburden thickness < 13 m and reflection coefficient ≥ 0.8 as low; and, finally, areas with overburden thickness < 13 m and reflection coefficient < 0.8 as very low potential. These criteria were written as Python code that classifies the area into five groundwater potential zones. The area covered by each zone was calculated after the geospatial analysis: the very high GPZ occupies about 19.70% of the study area, high 20.30%, moderate 20.0%, low 19.74%, and very low 20.31%.
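A plausible reconstruction of such a classification rule in Python (a sketch following the thickness and reflection-coefficient criteria above, not the authors' actual code) is:

```python
def gw_potential(thickness_m, reflection_coeff):
    """Classify a VES point into one of five groundwater potential zones,
    checking the criteria from most to least favourable."""
    if thickness_m >= 30 and reflection_coeff < 0.8:
        return "very high"
    if thickness_m >= 13 and reflection_coeff < 0.8:
        return "high"
    if thickness_m >= 13 and reflection_coeff >= 0.8:
        return "moderate"
    if reflection_coeff >= 0.8:  # thickness < 13 m
        return "low"
    return "very low"  # thickness < 13 m and reflection coefficient < 0.8
```

Applied to every interpreted VES point, a function like this yields the per-zone coverage percentages reported above.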
The Ordinary Kriging (OK) interpolation algorithm was used to generate the layer resistivity, layer thickness, depth-to-bedrock, aquifer vulnerability, and groundwater potential zone maps using Smart-Map, a QGIS plugin that allows interpolated maps to be generated within the QGIS environment. Kriging is an unbiased linear interpolation technique that uses a weighted average of nearby samples to estimate unknown values at unsampled locations, and it is widely regarded as the best interpolation method for spatially varying data. For this study, the resistivity (VES) data were randomly distributed over a large area, and the sampling distance between VES stations ranged from 0.5 km to 10 km.
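Outside QGIS, the same estimator can be sketched directly. The following is a minimal ordinary-kriging solver under a simple linear variogram assumption (an illustrative sketch, not the Smart-Map implementation): the kriging weights are obtained by solving the OK system with the unbiasedness constraint that the weights sum to one.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, variogram=lambda h: h):
    """Ordinary-kriging estimate at point xy0 from samples (xy, z),
    using a linear variogram by default."""
    n = len(z)
    # Pairwise distances between all sample points
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, :n] = A[:n, n] = 1.0   # unbiasedness constraint (weights sum to 1)
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - xy0, axis=1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)[:n]   # kriging weights
    return float(w @ z)
```

Because kriging is an exact interpolator, the estimate at a sampled location reproduces the sampled value, which is a convenient sanity check for any implementation.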
This study evaluated groundwater parameters in the study area based on the geo-electric properties of the earth material. The results reveal that weathered/fractured basement and sandstone formations in the study area are substantial aquifer systems that host potable water. Data from some drilled boreholes across the study area were used to cross-validate the VES results against borehole log records. This knowledge aided in a better understanding of aquifer disposition, vulnerability, and potential consequences. The study's findings will provide a geo-database for groundwater potential zones in the Federal Capital Territory (Abuja), with significant implications for sustainable groundwater resource design and management.
“Hyperspectral Remote Sensing Data Analysis for Oil Palm and Nipa Palm Plantation Using EnMAP-Box open-source plugin on QGIS”
Jirawat Daraneesrisuk;
Academic Track (Oral)
Spaceborne hyperspectral data can assist in estimating crop yields, predicting crop outcomes, and monitoring crops, which ultimately contributes to loss prevention and food security. EnMAP hyperspectral imagery has recently become available, starting from 2022. This study aims to analyze and classify oil palm and nipa palm plantations using hyperspectral images combined with machine learning algorithms. The Random Forest, CatBoost, and LightGBM classifiers were used to automatically map the oil palm and nipa palm areas. The full hyperspectral processing workflow was performed in the EnMAP-Box plugin on QGIS. All three ML classifiers achieved overall accuracies greater than 90%, especially for the oil palm and nipa palm plantations, showing that machine learning can uncover hidden information in spectral characteristics.
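As an illustrative sketch (not the study's actual EnMAP-Box workflow), a Random Forest classifier applied to per-pixel spectra might look like the following; synthetic spectra stand in for EnMAP bands, and the three classes play the role of oil palm, nipa palm, and other cover:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
# Synthetic stand-in for hyperspectral pixels: 600 pixels x 20 bands,
# three classes with shifted band means so they are separable.
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(200, 20)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 200)   # 0 = oil palm, 1 = nipa palm, 2 = other

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

In a real workflow the rows of `X` would be the EnMAP band values of labeled pixels exported from the EnMAP-Box plugin, and the fitted model would then be applied to the whole image.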
“Implementation and visualization of a digital twin system for urban noise prediction”
Haneul Yoo;
General Track
As digital twin technology matures, there is growing demand to move beyond three-dimensional visualization of buildings and terrain alone toward services (especially decision-support services) that make otherwise invisible phenomena usable by visualizing sensor data and the results of expert analysis and prediction together.
In Korea in particular, where city-scale housing developments aim to provide a high-quality living environment, noise is one of the most frequent subjects of complaints, and there is strong demand for noise prediction analysis to inform planning and design decisions in urban planning and design work.
In this presentation, we will introduce a digital twin system that combines 3D spatial information and noise prediction modeling, using the OGC CityGML standard together with the open-source mago 3DTiler and mago 3DTerrainer developed by Gaia3D, to support urban noise analysis and decision-making.
“Implementing ETL Processes with NDJSON for Spatial Data Integration”
Athitaya Phankhan, Chanakan Pangsapa;
General Track
Effective data management is essential for maximizing the value of spatial data in today’s data-driven landscape. This presentation provides an overview of implementing ETL (Extract, Transform, Load) processes using NDJSON (Newline Delimited JSON) for efficient spatial data integration. We will discuss the importance of robust data management, the benefits of using NDJSON for handling large and complex spatial datasets, and the practical applications of this approach.
Key topics include the steps of the ETL process with NDJSON, from extracting spatial data from various sources, transforming it into usable formats, to loading it into databases such as MongoDB and Elasticsearch. We will highlight the efficiency gains and flexibility provided by NDJSON in streaming and processing spatial data. Additionally, we will cover real-world use cases and best practices for optimizing spatial data integration with NDJSON.
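A minimal sketch of the Extract and Transform steps described above (the feature contents and target document shape are hypothetical; the Load step would bulk-insert the resulting documents into MongoDB or Elasticsearch):

```python
import json
from io import StringIO

# Each NDJSON line is one self-contained GeoJSON feature; streaming
# line-by-line keeps memory use flat regardless of dataset size.
ndjson = StringIO(
    '{"type": "Feature", "geometry": {"type": "Point", "coordinates": [100.5, 13.7]}, "properties": {"name": "Bangkok"}}\n'
    '{"type": "Feature", "geometry": {"type": "Point", "coordinates": [98.98, 18.79]}, "properties": {"name": "Chiang Mai"}}\n'
)

def etl(lines):
    for line in lines:
        if not line.strip():
            continue
        feat = json.loads(line)                      # Extract: parse one record
        lon, lat = feat["geometry"]["coordinates"]   # Transform: reshape for a search index
        yield {"name": feat["properties"]["name"],
               "location": {"lon": lon, "lat": lat}}

docs = list(etl(ndjson))   # Load: bulk-insert `docs` into the target database
```

Because each line is independent, the same generator can feed a bulk-indexing API in fixed-size batches without ever holding the full dataset in memory.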
Attendees will gain practical insights into the strategic and technical aspects of utilizing NDJSON in ETL processes, enabling them to implement effective spatial data integration within their organizations.
“Innovative Urban Cooling: Leveraging DART to Mitigate the Heat Island Effect in Trichy, Tamil Nadu”
Salghuna N N, Jyothish Jayan;
General Track
The Urban Heat Island (UHI) effect, which results in elevated temperatures in urban areas compared to their rural counterparts, presents significant challenges to the livability and sustainability of cities, particularly in tropical regions. This study focuses on the city of Tiruchirappalli (Trichy), Tamil Nadu, leveraging the advanced capabilities of the Discrete Anisotropic Radiative Transfer (DART) model to conduct a detailed analysis of the UHI phenomenon and explore potential mitigation strategies. The DART model, as an open-source tool, offers significant advantages for urban climate research and planning.
By utilizing DART's sophisticated 3D radiative transfer simulation, we model the complex interactions between solar radiation and urban surfaces, including buildings, vegetation, and water bodies within Trichy. The model integrates multi-spectral and multi-angular remote sensing data, allowing for precise calibration and validation against observed temperature data from ground-based stations and satellite imagery.
Our investigation reveals significant spatial and temporal variations in surface and air temperatures across different urban morphologies and land cover types within Trichy. The study identified hotspots where UHI effects are most pronounced, with temperature differences of up to 6°C between urban and rural areas. High-density built-up areas with low vegetation cover and extensive use of heat-absorbing materials exhibited temperature increases of up to 4°C compared to less dense areas.
To address the UHI effect, we evaluate the efficacy of several mitigation strategies using DART simulations. These strategies include the implementation of green roofs, cool roofs with reflective materials, increased urban greenery through parks and street trees, and the use of permeable pavements. Our results demonstrate that these measures can significantly reduce surface temperatures and improve thermal comfort in urban areas. For example, green roofs and urban green spaces led to a reduction in peak temperatures by up to 3°C, while reflective materials on roofs and pavements contributed to a decrease in heat absorption by up to 25%.
Building on these results, we propose an optimized urban planning framework for Trichy that integrates these mitigation strategies to enhance urban resilience and sustainability.
This comprehensive study underscores the potential of the DART model as a robust tool for urban climate research, providing detailed insights into UHI dynamics and effective mitigation strategies. The findings offer valuable guidance for policymakers and urban planners in Trichy and similar tropical cities, aiming to create cooler, more sustainable urban environments. Through the integration of advanced radiative transfer modeling and practical urban design interventions, this research contributes to the broader goal of mitigating the adverse impacts of urbanization and climate change.
“Interactive Simulation for Visualizing Bus Locations Using GTFS Data”
Kei Yamazaki;
General Track
This proposal utilizes the FOSS4G toolset to encourage bus ridership and enhance the sustainability of urban transportation systems. It specifically showcases an interactive simulation that employs General Transit Feed Specification (GTFS) data to dynamically represent bus positions on a map for a given date and time. The system is built with FOSS4G tools to enable straightforward tracking of buses' current and planned routes.
The primary objective is to assist users in leveraging bus services more efficiently and effectively through the visualization of transit data. The system facilitates easy travel planning and diminishes the anxiety associated with bus usage, thereby fostering a shift towards more sustainable modes of transport. Additionally, it aims to ameliorate urban traffic flow and contribute to reductions in CO2 emissions, demonstrating how FOSS4G technologies can address social challenges and support the creation of sustainable and resilient cities.
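The core of such a simulation is interpolating a bus's position between scheduled stops at a given clock time. A simplified sketch follows (GTFS stop_times.txt provides scheduled times per stop; here the stops are assumed to be already joined to coordinates, and times are seconds since midnight):

```python
from bisect import bisect_right

def bus_position(stop_times, t):
    """Linearly interpolate a bus position at time t (seconds since midnight)
    from GTFS-style stop times: a list of (time_s, lat, lon) sorted by time."""
    times = [st[0] for st in stop_times]
    if t <= times[0]:
        return stop_times[0][1:]       # not yet departed: sit at first stop
    if t >= times[-1]:
        return stop_times[-1][1:]      # trip finished: sit at last stop
    i = bisect_right(times, t)
    (t0, lat0, lon0), (t1, lat1, lon1) = stop_times[i - 1], stop_times[i]
    f = (t - t0) / (t1 - t0)           # fraction of the segment completed
    return (lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0))
```

A production system would interpolate along the trip's shapes.txt geometry rather than the straight line between stops, but the time-based lookup is the same.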
“Introduction to istSOS4 and SensorThings API”
Massimiliano Cannata;
Workshop Proposals
istSOS (http://istsos.org) is software designed to support sensor data management, from collection and quality assessment to dissemination, using OGC and ISO standard formats. Following the evolution of software libraries and hardware technologies and the wide adoption of the IoT, istSOS has been reimplemented as version 4, named "Things". Continuing its tradition as an OGC-compliant Python implementation, it takes advantage of the latest solutions to support the SensorThings API (STA) specification.
At the end of the workshop, participants will understand the principles of istSOS4 and of the STA standard, will be able to set up an istSOS4 STA service, and will learn how to interact with the service both as a consumer and as a producer, using supplementary interfaces or pure Python code.
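To give a flavor of STA interaction as a consumer: the standard addresses entity sets such as /Things, /Datastreams, and /Observations, and filters them with OData-style query options. The sketch below only builds such a request URL (the endpoint is hypothetical, and issuing the request, e.g. with the `requests` library, is left out):

```python
from urllib.parse import urlencode

# Hypothetical istSOS4 SensorThings API endpoint
base = "https://example.org/istsos4/v1.1"

# OData-style query options defined by the STA standard
params = {
    "$filter": "Datastreams/ObservedProperty/name eq 'air_temperature'",
    "$expand": "Locations",
    "$top": "10",
}
url = f"{base}/Things?{urlencode(params)}"
```

The same pattern, with POST instead of GET, is used by producers to create Things, Datastreams, and Observations on the service.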
“Iteration-free methods for Earth observation data time-series reconstruction”
Davide Consoli;
General Track
Clouds, atmospheric disturbances, and sensor failures degrade the quality of Earth observation (EO) data, and of satellite images in particular. Many modeling techniques and statistical analyses applied to EO data require the detection and removal of such aberrations. However, the data gaps created after removing the affected pixels need to be imputed with numerical values that resemble the expected uncorrupted ones. Several imputation, or gap-filling, methods in the literature are based on time-series reconstruction, working only on the temporal dimension of each pixel to impute the missing values. Compared to alternatives that also consider spatially neighboring pixels or data fusion with other sensors, such methods have the advantage of maintaining the same spatial resolution and spectral consistency in the imputed data.
In contrast with methods that only work within a local temporal window, some of these methods take advantage of the whole time series of each pixel to reconstruct each missing value, allowing the full reconstruction of each gappy time series. Nevertheless, such methods, like most-recent-image propagation or linear interpolation, often require an iterative search for available values along the time series. When the time series contains many samples and/or the number of pixels involved is large, applying such methods leads to prohibitive computational costs.
We present in this work a computational framework based on discrete convolution that numerically approximates such methods and does not require iterating over the time series [1]. In addition, the framework's flexibility allows different time-series reconstruction methods to be applied by only adapting the convolution kernel. The framework has been used to reconstruct the petabyte-scale Landsat Analysis Ready Data (ARD) collection provided by the Global Land Analysis and Discovery (GLAD) team [2]. New research fronts include extending the method to data-fusion approaches that combine time series from multiple sensors to maintain the highest spatial resolution while also using the temporal information provided by all the sensors. The code, developed in Python with a C++ backend to guarantee usability and high computational efficiency, is openly available at https://github.com/openlandmap/scikit-map.
[1] Consoli, Davide & Parente, Leandro & Simoes, Rolf & Murat, & Tian, Xuemeng & Witjes, Martijn & Sloat, Lindsey & Hengl, Tomislav. (2024). A computational framework for processing time-series of Earth Observation data based on discrete convolution: global-scale historical Landsat cloud-free aggregates at 30 m spatial resolution. 10.21203/rs.3.rs-4465582/v1.
[2] Potapov, Peter & Hansen, Matthew & Kommareddy, Anil & Kommareddy, Anil & Turubanova, Svetlana & Pickens, Amy & Adusei, Bernard & Tyukavina, Alexandra & Ying, Qing. (2020). Landsat Analysis Ready Data for Global Land Cover and Land Cover Change Mapping. Remote Sensing. 12. 426. 10.3390/rs12030426.
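As a toy illustration of the idea (not the scikit-map implementation), a gap in a pixel's time series can be filled by normalized convolution: two convolutions and a division replace any iterative search for valid neighboring samples, and changing the kernel changes the reconstruction method.

```python
import numpy as np

def conv_gapfill(series, kernel):
    """Fill NaNs in a 1-D time series by normalized convolution:
    a kernel-weighted average of valid neighbors, computed without
    iterating along the series to search for valid values."""
    valid = ~np.isnan(series)
    filled = np.where(valid, series, 0.0)
    num = np.convolve(filled, kernel, mode="same")               # weighted sum of valid values
    den = np.convolve(valid.astype(float), kernel, mode="same")  # sum of weights actually used
    out = series.copy()
    gaps = ~valid & (den > 0)
    out[gaps] = num[gaps] / den[gaps]
    return out
```

Because `np.convolve` (or an FFT-based equivalent) has predictable cost independent of where the gaps lie, the same call scales from a single pixel to a full image stack.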
“JSON Style Map: Enhancing Flexibility and Efficiency in Map Data Visualization”
Peeranat Prasongsuk, Sattawat Arab, Arissara Sompita;
General Track
In today's rapidly evolving field of map data visualization, the use of vector tiles is increasingly prevalent. Vector tiles offer flexible and efficient data display, but they require JSON Style data to define their visual representation. JSON Style plays a crucial role in formatting data, ensuring that presentations are both diverse and user-friendly.
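As an illustration, a minimal style document in the Mapbox/MapLibre style-specification dialect commonly used with vector tiles might look like the following (the tile URL and layer names are hypothetical):

```json
{
  "version": 8,
  "sources": {
    "demo": {
      "type": "vector",
      "tiles": ["https://example.com/tiles/{z}/{x}/{y}.pbf"]
    }
  },
  "layers": [
    {
      "id": "roads",
      "type": "line",
      "source": "demo",
      "source-layer": "roads",
      "paint": { "line-color": "#888888", "line-width": 1.5 }
    }
  ]
}
```

Each entry in `layers` binds one source layer of the vector tiles to a drawing type and paint properties, which is what makes the same tiles renderable in many different visual styles.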
“kari-sdm: Advanced Species Distribution Modeling using PyTorch and scikit-learn”
Lee, Jeongho, Byeong-Hyeok Yu, Chunghyeon Oh, Soodong Lee, Cho Bonggyo;
General Track
Species Distribution Modeling (SDM) is a statistical methodology used to predict the spatial and temporal distribution of species based on environmental conditions that are conducive to their survival and reproduction. This modeling approach leverages spatially explicit species occurrence records alongside various environmental covariates, including climate, terrain, and land cover, as input variables, with the aim of quantifying and mapping species-environment interactions. SDM has become a critical tool in ecological research and conservation biology for understanding and predicting species distribution patterns. A range of machine learning and deep learning techniques can be employed in SDM, such as Logistic Regression (LR), Random Forest (RF), Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), and Generative Adversarial Network-CNN (GAN-CNN). Despite the availability of these techniques, there is a lack of a comprehensive application that integrates these algorithms for species distribution modeling. To address this gap, this paper introduces a new tool, kari-sdm, which enables users to perform SDM utilizing a variety of techniques. Kari-sdm supports LR, RF, MLP, CNN, and GAN-CNN algorithms, all based on open-source frameworks PyTorch and scikit-learn. Additionally, it facilitates all necessary preprocessing steps, from data collection, cleaning, transformation, spatial preprocessing, and environmental variable selection, to data splitting. The tool also provides functions for model evaluation, result visualization, and cross-validation. The primary goal of kari-sdm is to assist ecologists in modeling species distributions, interpreting results, and developing informed conservation and management strategies.
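To illustrate the general idea (not kari-sdm's actual API), a minimal presence/background SDM with scikit-learn's logistic regression might look like this; the two covariates (temperature and elevation) and all values are synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Presence points cluster where the covariates are favorable
# (temperature ~20 degC, elevation ~300 m); background (pseudo-absence)
# points are sampled across the whole environmental range.
presence = rng.normal(loc=[20.0, 300.0], scale=[1.0, 50.0], size=(150, 2))
background = np.column_stack([rng.uniform(5, 35, 300),
                              rng.uniform(0, 2000, 300)])
X = np.vstack([presence, background])
y = np.r_[np.ones(150), np.zeros(300)]   # 1 = presence, 0 = background

model = LogisticRegression(max_iter=1000).fit(X, y)
# Habitat-suitability score in [0, 1] for a candidate location
suitability = model.predict_proba([[20.0, 300.0]])[0, 1]
```

In kari-sdm the same pattern is wrapped with the preprocessing, evaluation, and visualization steps described above, and LR can be swapped for RF, MLP, CNN, or GAN-CNN models.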
“Land Surface Temperature (LST) variation for the hill city of Ranchi, India”
Bijay Kumar Das;
Poster Presentations
Urbanization has led to ample variation in land use/land cover, resulting in differential absorption of solar radiation and hence micro-variation of temperature over the land mass within the city. Some areas of the city are cooler while others are relatively warmer. This variation in temperature is studied for the hill city of Ranchi over a period of time using Landsat data. For this purpose, two areas of Ranchi are chosen: the core city of Ranchi and the institutional campus of BIT Mesra, which are separated by a distance of 25 km. Multiple readings are taken to find the variation in temperature, and it is plotted against the Normalized Difference Vegetation Index (NDVI). A correlation between NDVI and Land Surface Temperature (LST) is established.
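The NDVI-LST analysis can be sketched as follows (the band and temperature values below are synthetic placeholders, not the study's data; greener pixels tend to be cooler, giving a negative correlation):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Synthetic per-pixel stand-ins: NIR/red reflectance and LST in Kelvin
nir = np.array([0.40, 0.35, 0.30, 0.25, 0.20])
red = np.array([0.10, 0.12, 0.15, 0.18, 0.19])
lst = np.array([301.0, 303.5, 305.0, 307.2, 309.0])

v = ndvi(nir, red)
r = np.corrcoef(v, lst)[0, 1]   # Pearson correlation of NDVI vs LST
```

With real Landsat data, `nir` and `red` would come from the surface-reflectance bands and `lst` from the thermal band, sampled over the two study areas.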
“Land Use Detection Using Artificial Intelligence”
Amritesh Hiras, Anuj Sharad Mankumare, Akshith Mynampati, D ARUNA PRIYA;
Academic Track (Oral)
Automating land use surveys in rural areas using advanced AI techniques can significantly enhance the efficiency and accuracy of identifying various land features. This project focuses on utilizing the YOLOv8 framework for land use detection through image segmentation and object detection.
Traditional land use surveys in rural areas are time-consuming and often prone to inaccuracies due to manual methods. Leveraging artificial intelligence, particularly deep learning models, presents a promising solution to streamline this process and improve data reliability. The project addresses the challenge of automating land feature identification, which includes detecting houses, rivers, roads, and vegetation from visual data captured by satellites or drones. Accurate identification of these features is crucial for effective rural planning and development, as it helps in resource allocation and infrastructure development. The limitations of conventional methods, such as the need for extensive human labor and susceptibility to human error, further highlight the necessity for innovative solutions like AI-driven land use surveys.
The primary aim of this study is to develop and train an AI model capable of accurately detecting and segmenting various land features in rural landscapes. By doing so, the project seeks to demonstrate the applicability of AI in enhancing rural development planning and management. The specific objectives include creating a reliable dataset of annotated aerial images, optimizing a deep learning model for high accuracy, and evaluating the model's performance across different types of land features. Ultimately, the project aims to provide a scalable and efficient tool that can assist policymakers, researchers, and rural development planners in making informed decisions.
The methodology involved several key steps to ensure the robustness and accuracy of the AI model. First, a diverse dataset of aerial images was collected, encompassing various rural landscapes with distinct features such as houses, rivers, roads, and farms/vegetation. These images were meticulously annotated using specialized tools to create ground truth data for training and validation. Data augmentation techniques, including rotations, flips, and color adjustments, were employed to expand the dataset and improve the model's generalization capabilities.
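The geometric augmentations mentioned above can be sketched in a few lines of NumPy (rotations and flips only; color adjustments are omitted here):

```python
import numpy as np

def augment(image):
    """Yield simple geometric augmentations of an (H, W, C) image array:
    the four 90-degree rotations and a horizontal flip of each."""
    for k in range(4):              # 0, 90, 180, 270 degree rotations
        rot = np.rot90(image, k)
        yield rot
        yield np.fliplr(rot)        # horizontal flip of each rotation
```

Each annotated image thus yields eight training samples; in practice the corresponding segmentation masks must be transformed with exactly the same operations to keep labels aligned.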
The YOLOv8 model was selected for its state-of-the-art performance in object detection and segmentation tasks. YOLOv8's architecture is well-suited for real-time applications due to its balance between accuracy and speed. The model was trained using the annotated dataset, with hyperparameters optimized to enhance its detection and segmentation performance. Training was conducted on a high-performance computing setup, leveraging GPU acceleration to expedite the process.
The results demonstrated the model's high precision in detecting and segmenting land features. The YOLOv8 model achieved notable accuracy metrics across various classes. The segmentation masks generated by the model closely matched the ground truth annotations, indicating its effectiveness in distinguishing different land features.
The findings of this study underscore the potential of AI in transforming rural development practices. The successful application of the YOLOv8 model for land use detection highlights its capability to deliver precise and actionable insights. The practical implications of this project are significant, offering a scalable solution for land survey automation, which can greatly assist policymakers and rural planners. The integration of such AI-driven methodologies can lead to more informed decision-making, efficient resource allocation, and ultimately, the betterment of rural communities.
The study also highlights several challenges and limitations encountered during the project. Data collection in rural areas can be logistically challenging, often requiring collaboration with local authorities and stakeholders. Ensuring the diversity and quality of the dataset is crucial, as biased or insufficient data can affect the model's performance. Additionally, the model's accuracy is dependent on the quality of annotations, which requires meticulous effort and expertise.
Despite these challenges, the project demonstrates that AI can significantly enhance the accuracy and efficiency of land use surveys. The use of deep learning models like YOLOv8 can reduce the reliance on manual methods, providing a more reliable and scalable solution. However, continuous efforts are needed to improve the dataset, address potential biases, and refine the model to handle more complex scenarios.
In conclusion, this project not only advances the field of AI in rural development but also sets a precedent for future studies aiming to leverage AI for similar applications. The integration of AI in land use surveys can revolutionize the way rural areas are planned and developed, leading to more sustainable and efficient outcomes. The success of this project inspires further research and development in AI-driven solutions for rural development, with the potential to make a lasting positive impact on rural communities worldwide.
“Land Use Land Cover Classification Automation Development using Free and Open-Source Software”
Thantham Khamyai;
General Track
Automating land use and land cover (LULC) classification is crucial for enhancing the efficiency and accessibility of LULC monitoring, supporting informed decision-making in urban planning, environmental management, and sustainable development across diverse geographical contexts. This work aims to streamline the process of satellite data acquisition, preprocessing, and seasonal LULC classification through the integration of artificial intelligence (AI) models.
The proposed system will consist of two main components: (1) an automated satellite data fetching and preprocessing module, and (2) an AI-driven LULC classification module. The first component will leverage open-source tools to access and prepare satellite imagery from freely available sources, such as Landsat and Sentinel missions. This module will handle tasks including data download, atmospheric correction, cloud masking, and image compositing.
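As an illustrative sketch of the masking and compositing steps in such a preprocessing module (assuming Sentinel-2 imagery with its Scene Classification Layer available; the class codes follow ESA's SCL definition):

```python
import numpy as np

# Sentinel-2 Scene Classification Layer (SCL) classes treated as invalid:
# 3 = cloud shadow, 8 = cloud medium prob., 9 = cloud high prob., 10 = thin cirrus
CLOUD_CLASSES = (3, 8, 9, 10)

def cloud_mask(scl):
    """Boolean mask of clear pixels from an SCL band array."""
    return ~np.isin(scl, CLOUD_CLASSES)

def composite(stack, masks):
    """Median composite over time, using only clear observations per pixel.
    stack: (T, H, W) band values; masks: (T, H, W) boolean clear-pixel masks."""
    stack = np.where(masks, stack, np.nan)
    return np.nanmedian(stack, axis=0)
```

The composite images produced this way would then feed the AI-driven classification module as cloud-free seasonal inputs.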
The second component will employ state-of-the-art machine learning algorithms, particularly deep learning models, to perform seasonal LULC classification. The system will be trained on diverse datasets to recognize and categorize various land cover types across different seasons, accounting for temporal variations in vegetation, urban expansion, and other dynamic landscape features.
By automating these processes, the proposed system aims to significantly reduce the time and expertise required for LULC analysis, making it more accessible to researchers, urban planners, and environmental managers. The use of free and open-source software ensures that the developed tools will be widely available and customizable for different geographical contexts and research needs.
This work contributes to the advancement of remote sensing applications and supports informed decision-making in land management, urban planning, and environmental conservation efforts.
“Leveraging Geospatial Data for Tracking Water from Space Using Python Programming”
J. Indu;
Keynote Talk
Water does not flow according to geographical boundaries; it follows elevation. Inland waters such as rivers and lakes are crucial natural resources that play an indispensable role in the global hydrological cycle. Still, their conventional monitoring is constrained by poor spatial coverage. Though satellites help improve coverage, the hydraulic properties of rivers often change at a rate faster than the temporal sampling of satellites. This talk introduces two novel web applications for rivers and lakes built using geospatial datasets and Python. The first seamlessly extracts time series of water surface area for rivers, lakes, and reservoirs from Sentinel-1 VV-polarized SAR data. The second integrates dynamic lake water extents to improve lake water surface temperature estimates, thereby challenging conventional norms.
“Leveraging spatial autocorrelation information of remotely sensed evapotranspiration for mitigating the impact of data uncertainty on hydrological modeling”
Yan He;
Academic Track (Oral)
Global remotely sensed evapotranspiration (RS-ET) products are increasingly pivotal in enhancing the accuracy and scope of hydrological modeling, particularly in regions where traditional ground-based streamflow data are sparse or non-existent. These products play a pivotal role in understanding the dynamics of the climate-soil-vegetation system, where evapotranspiration constitutes a substantial portion of water loss following precipitation events. Their extensive spatial coverage and accessibility have significantly expanded the capability to predict hydrological dynamics in ungauged basins, offering insights that were previously inaccessible through in-situ observations alone.
Despite their benefits, RS-ET products are tempered by inherent uncertainties, primarily stemming from biases that vary across datasets and geographical regions. These biases manifest as either overestimation or underestimation compared to ground truth measurements, posing challenges for the accurate calibration of hydrological models. Traditional approaches in hydrological modeling commonly use absolute ET values directly derived from RS-ET products for model calibration, without accounting for potential biases. However, the reliability of such direct calibrations is contingent upon the quality and accuracy of the RS-ET data, which remain uncertain in many cases.
To address these challenges, this study shifts the focus from absolute ET values to the spatial structural information embedded within RS-ET data, with particular emphasis on spatial autocorrelation, the tendency of ET values at nearby locations to exhibit similarities. Employing the local Moran's I index, a spatially weighted autocorrelation statistic that is insensitive to biases, we capture the spatial structure of ET data across sub-basins. Additionally, a composite Kling-Gupta Efficiency (KGE) metric, integrating absolute ET values and spatial autocorrelation information in a weighted manner, is employed for calibrating hydrological models. Three calibration schemes are thus designed to analyze the effectiveness of spatial autocorrelation in hydrological modeling: one focusing solely on absolute ET values, another solely on spatial autocorrelation, and a combined approach. Testing these schemes with four RS-ET products in the Meichuan basin (MOD16, GLASS, and SSEBop with large biases, and PMLV2 with minimal bias), the study demonstrates varying effectiveness across the schemes.
For RS-ET products with substantial biases, hydrological modeling using spatial autocorrelation proved to be the optimal solution. It achieved a higher KGE and lower Percent Bias (PBIAS) on simulated streamflow compared to using solely the absolute ET values or the combined approach. Conversely, for RS-ET products with minimal biases, hydrological models calibrated using the combined approach were the preferred solution. This approach can yield a high KGE, similar to that obtained from spatial autocorrelation information alone, while maintaining a reasonable PBIAS. Therefore, we recommend calibrating hydrological models using both absolute ET values and spatial autocorrelation information in regions where ground ET observations are available. This approach enhances the robustness and reliability of hydrological predictions, mitigating the influence of biases inherent in RS-ET products. In contrast, in scenarios where the quality of RS-ET products is unknown, we suggest calibrating using only spatial autocorrelation information, thereby circumventing potential biases and improving model accuracy under such circumstances.
Moreover, methodologically, the study contributes by demonstrating the efficacy of the local Moran's I index in capturing the spatial structure of ET data within hydrological sub-basins. This geostatistical measure not only quantifies spatial autocorrelation but also identifies clusters and patterns of ET values, thereby enriching our understanding of spatial variability in hydrological processes. Furthermore, the comprehensive analysis of the composite KGE index underscores the significant contribution of spatial autocorrelation information to hydrological modeling, surpassing the influence of absolute ET values in enhancing model performance.
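For illustration, the two statistics at the core of the calibration schemes can be sketched in a few lines (a simplified sketch, not the study's code). Note that adding a constant bias to the ET values leaves the local Moran's I values unchanged, which is exactly the bias-insensitivity the study exploits.

```python
import numpy as np

def local_morans_i(x, W):
    """Local Moran's I for values x with row-standardized spatial weights W:
    I_i = (x_i - mean) / m2 * sum_j W_ij (x_j - mean)."""
    z = x - x.mean()
    m2 = (z ** 2).mean()           # second moment in the denominator
    return (z / m2) * (W @ z)

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()  # variability ratio
    beta = sim.mean() / obs.mean() # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

A composite objective in the spirit of the study would then be a weighted sum of a KGE on absolute ET values and a KGE on the local Moran's I fields of simulated versus remotely sensed ET.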
In conclusion, the spatial autocorrelation-based approach presented in this study represents a significant advancement in the application of global RS-ET products for hydrological modeling. By leveraging spatial structural information and mitigating biases inherent in RS-ET data, this approach not only improves the accuracy of hydrological predictions but also enhances the practical utility of RS-ET products in diverse hydrological contexts. Future research directions may explore additional spatial statistical techniques and incorporate a broader array of RS-ET datasets to further refine and validate these findings across different geographical settings and hydrological conditions.
“LLM (Large Language Model) geospatial python for geospatial analysis in GDAL native environment”
Lawrence Xiao;
Workshop Proposals
We want to advance geospatial data science by first providing an optimal DevOps layer for anyone to build geospatial models and code in a GDAL-native environment, supported by a Copilot- or GPT-like large language model that is trained or fine-tuned on GDAL and geospatial Python.
With our proprietary technology built on an entirely serverless architecture, we can significantly reduce costs and increase accessibility to powerful GIS DevOps infrastructure.
“Localization of FOSS4G Tools and Building an Open Knowledge Platform in Japanese University Education”
Shiori Uehara, Aki Sato;
Poster Presentations
Furuhashi Lab has been working on OSM mapping and mapathons as YouthMappers AGU under the theme of "Participatory Mapping and Social Contribution". Here is a look back at our specific activities in 2024. From March to June, we participated in the OSM validation training of UN Maps, and all 12 students in our lab were promoted to intermediate OSM mappers. Based on the knowledge learned, we also created a graphic recording about JOSM validation and published it on GitHub. In April, we participated in the "International Humanitarian Mapathon 2024" and competed with universities and organizations from more than five countries, including USC and UCLA. In June, we held a Wheelmap mapathon to learn how we can use maps to contribute to society. As part of our year-round activities to promote YouthMappers, we are also working on the translation of "Open Mapping towards Sustainable Development Goals". We are also planning to participate in other mapathons and hackathons in the future.
Throughout our year-long activities, we have faced the challenge that there is a large gap in understanding depending on individuals' background knowledge and language level. As newcomers to the geospatial information industry, we had little prerequisite knowledge and were unfamiliar with tools and resources such as the QGIS manual, GDAL, and JOSM, which are commonplace for advanced mappers. The most difficult thing for us as Japanese speakers was that the manuals for these tools are mostly in English; because of the many technical terms, we often could not understand them even when we read them. It was not easy to keep the manuals close at hand while looking at the actual screens and operating the software at the same time.
For this reason, this presentation introduces the usefulness of translation and visualization for problems such as unfamiliarity with computer operation, inability to understand manuals due to lack of domain knowledge, and reluctance to learn in a language other than one's native tongue. In particular, we recognize that overcoming language barriers is of paramount importance. As examples, we discuss the translation of the QGIS manual and GDAL documentation, and the creation of a graphic recording for the JOSM Validation Training. We publish these deliverables on GitHub to create an open knowledge platform.
First of all, in the rapidly evolving field of geospatial technology, access to comprehensive and understandable documentation is crucial for both new and experienced users. However, language barriers often limit access to valuable resources. To bridge this gap, students from the Furuhashi Lab at Aoyama Gakuin University's "Applied Spatial Information Science III" course are working to localize technical documents for FOSS4G (Free and Open Source Software for Geospatial) tools such as QGIS and GDAL. These tools are widely used for geospatial data manipulation, analysis, and visualization, but much of their documentation is predominantly available in English. By translating these documents into Japanese, we aim to increase accessibility for Japanese-speaking users and contribute to a deeper understanding of geospatial technologies.
Our approach in the course begins with understanding the functionalities of QGIS and GDAL, followed by practical exercises to familiarize participants with basic operations. This practical experience forms the foundation for translating technical documents, helping participants effectively understand the content. We use tools such as Transifex for collaborative translation efforts, ensuring consistency and accuracy across documents. However, the current complexity of registering an account on Transifex poses a challenge. To address this, we have created a Markdown-based "QGIS Documentation Japanese Translation Manual" within a GitHub repository, where students document the steps and share insights, including potential pitfalls. This helps in facilitating collaborative information sharing.
The content of the guide follows the format outlined by the Japan Translation Federation (JTF)’s “Translation Guidelines,” which is essential for the success of translation projects involving open data. By building an open knowledge platform using GitHub, both users and instructors can better understand the tendencies that beginners may encounter with these tools. The FAQ and other resources on this platform allow participants to easily create, edit, and publish markdown documents, helping them mentally simulate the actual working environment. Furthermore, gaining this experience helps foster a culture of open knowledge sharing within the academic community, where students can exchange the skills needed to effectively manage digital documentation.
Regarding GDAL, we focus on translating .po files within GitHub.
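For context, GDAL's documentation is translated through gettext .po files, in which each source string (msgid) is paired with its translation (msgstr). A hypothetical entry (the file path and strings here are illustrative):

```po
#: ../source/programs/gdalinfo.rst:10
msgid "Lists information about a raster dataset."
msgstr "ラスターデータセットに関する情報を一覧表示します。"
```

In a GitHub-based workflow, translators fill in the empty msgstr fields and submit the updated .po file as a pull request.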
This project demonstrates that localization and open knowledge platforms can bridge the gap between technology and language, serving as a gateway to fostering geospatial literacy. We aim to share this project at the FOSS4G International Conference, contributing to the geospatial community and promoting more accessible geospatial information literacy.
Second, Furuhashi Lab continues to input data into OpenStreetMap for emergency rescue efforts and as a contribution to areas without maps.
Creating and providing accurate maps requires not only proper instruction but also mastery of the editing tools used. In addition, using JOSM is an efficient way to input and validate huge amounts of data in OSM without errors.
JOSM (Java OpenStreetMap Editor) is an advanced OSM desktop editor written in Java that runs on any platform with a Java runtime, including Windows, macOS, and Linux. Its printed manual, however, is difficult for beginners to understand, and they often have trouble even getting the tool running in the first place.
Visual information, on the other hand, has the advantage of overcoming barriers of language and prerequisite knowledge and can convey information intuitively. Furuhashi Lab uses the graphic recording method as a means to achieve this.
Twelve students from Furuhashi Lab participated in the "OSM Data Validation Training Proposal" sponsored by UN Mappers over a three-month period from March to June. Students who were not used to working with computers had a hard time just installing the software, and most of the participants actually ran into problems. The first graphic recording, created under these circumstances, did not capture the essence of the lecture, so we had to redo it: the lecture video was reviewed and the recording newly redrawn, and it is now available to the whole world on GitHub.
Using the example of the graphic recording at JOSM Validation, I will introduce the usefulness of visualization in Japanese university education.
“Mapping land suitability for sugarcane crop with fuzzy AHP and multi-criteria evaluation”
Piyanan Pipatsitee;
Academic Track (Oral)
Mapping land suitability is a critical approach for identifying appropriate land use for site selection and land-use planning. However, climate change tends to exacerbate water shortages and droughts, significantly affecting land suitability and reducing crop yields, especially for sugarcane. Although land suitability is typically evaluated against multiple criteria such as soil properties, topography, climate, and socioeconomic factors, it is important to incorporate drought conditions into land suitability mapping to mitigate the influence of climate change on crop yields. Therefore, this study aimed to map sugarcane land suitability with fuzzy AHP and multi-criteria evaluation approaches in the Northeast region of Thailand. Six significant criteria were selected for sugarcane land suitability mapping: the Evapotranspiration Deficit Index (ETDI) as an agricultural drought index, slope, soil texture, distance from the river, distance from the road, and distance from the sugar mill. The ETDI was assessed by calculating the difference between spatial potential evapotranspiration (PET) and actual evapotranspiration (AET). Spatial PET was derived from a PET estimation model based on Global Navigation Satellite System-derived Precipitable Water Vapor (GNSS-PWV), processed using the goGPS open-source software, together with the satellite-based MODIS land surface temperature product (MODIS LST). Concurrently, spatial AET was derived from the Surface Energy Balance Algorithm for Land (SEBAL) model, implemented in the GRASS GIS open-source software. Land suitability for sugarcane cultivation was then evaluated by integrating the fuzzy analytic hierarchy process (fuzzy AHP) with multi-criteria evaluation. The results indicated that two factors affected sugarcane cultivation: the ETDI and distance from the river. The ETDI was the most significant factor, with an average weight of 0.66.
Additionally, the distance from the river was identified as the second most important factor, with an average weight of 0.34. The remaining factors (slope, soil texture, distance from the road, and distance from the sugar mill) had no influence on land suitability for sugarcane, as their spatial distribution remained consistent throughout the study area. Suitable areas for sugarcane mostly fell in the moderately suitable class (S2; 49.6%), followed by the marginally suitable class (S3; 36.0%) and the highly suitable class (S1; 11.2%). Actual sugarcane cultivation areas were mainly distributed in the S3 class (49.0%), followed by the S2 class (43.2%) and the S1 class (6.7%). The S3 areas were mostly concentrated in Wang Sam Mo district, Udon Thani province (129 km2), with a sugarcane yield of approximately 60.6 tons/ha. The S2 areas were mostly cultivated in Phu Khiao district, Chaiyaphum province (178 km2), with a yield of approximately 62.5 tons/ha. The S1 areas were mostly found in Phimai district, Nakhon Ratchasima province (30 km2), with a higher yield than the S2 and S3 classes, reaching 63.6 tons/ha. The S2 areas could potentially be upgraded by implementing irrigation systems and establishing small ponds to reduce drought risk, with the distance from the river limited to within 2 km. This approach could increase sugarcane yield and promote these areas into the S1 class, expanding the S1 areas 2.7-fold and raising yields by approximately 1.1 tons/ha (1.8% of the S2 class yield).
Furthermore, the present findings indicated that areas classified as S1 hold significant potential for further expansion of sugarcane cultivation, given their high yield potential and current underutilization. Potential areas within the S1 class were analyzed, totaling 6,519 km2. Nakhon Ratchasima province has the greatest potential area (2,272 km2, 35%), followed by Khon Kaen (725 km2, 11%), Chaiyaphum (592 km2, 9%), Udon Thani (519 km2, 8%), and Surin (441 km2, 7%). Such areas could be encouraged to shift from currently cultivated crops (rice, corn, and cassava) to sugarcane for optimal resource utilization. However, farmers persist in rice cultivation because it is a major national crop intertwined with their traditional way of life; rice also has a shorter growth period than sugarcane and promptly generates income to cover household expenses. Government policies should support participatory knowledge transfer programs on sugarcane cultivation, ensure sugarcane price guarantees, and facilitate access to credit. Additionally, the relatively high price of sugarcane may incentivize farmers to grow the crop, which would lead to a substantial expansion of sugarcane areas and increased yields to meet the growing demand for sugar, both for domestic consumption and for export. Further research at a larger scale, covering the entire country, is necessary to improve the accuracy of the land suitability map in addressing the challenges posed by global climate change.
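The final overlay reduces to a weighted sum of normalized criterion scores. A minimal Python sketch of that step, using the average fuzzy-AHP weights reported in this study (0.66 for the ETDI, 0.34 for distance from the river); the per-cell scores and class thresholds below are hypothetical illustrations, not the study's values:

```python
# Weighted multi-criteria overlay: suitability = sum(w_i * score_i).
# Weights are the average fuzzy-AHP weights reported in this study;
# the per-cell scores and class thresholds are hypothetical.
WEIGHTS = {"etdi": 0.66, "dist_river": 0.34}

def suitability(scores: dict) -> float:
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

def classify(s: float) -> str:
    if s >= 0.75:          # illustrative class break
        return "S1 (highly suitable)"
    if s >= 0.50:          # illustrative class break
        return "S2 (moderately suitable)"
    return "S3 (marginally suitable)"

cell = {"etdi": 0.8, "dist_river": 0.6}   # normalized scores for one cell
s = suitability(cell)
print(round(s, 3), classify(s))           # 0.732 -> S2
```

In a full implementation the same weighted sum would be evaluated per raster cell over the normalized criterion layers.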
“Mapping Urban Dynamics: The Role of Data Analysis in Shaping Sustainable Cities”
Sarawut ninsawat;
General Track
The integration of vast data analysis and AI technologies in urban planning represents a significant advancement in managing the complexities of modern cities. This presentation shows the vast potential of these technologies to enhance decision-making, optimize resource allocation, and improve urban sustainability. By understanding and applying them, future urban planners can develop smarter, more efficient, and more environmentally friendly cities. The practical applications discussed, including traffic injury risk assessment, human mobility analysis, and carbon emission estimation, demonstrate the tangible benefits of leveraging Big Earth Data and AI in urban planning.
“MEASURING COMPACTNESS IN ELECTORAL DELIMITATION: AN OPEN-SOURCE GIS APPLICATION”
Shailesh Chaure;
Poster Presentations
Electoral delimitation is around the corner in India. Statutory provisions of the Delimitation Act prescribe geographical compactness as the foremost criterion for delimitation. However, the delimitation Guidelines and Methodology define no methodology for ensuring, evaluating, and measuring compactness, or for effectively implementing the criterion during delimitation.
Compactness ensures better connectivity, communication, public convenience, accessibility and easy movement for the stakeholder population. Delimitation authorities across the world employ varied measures of compactness for evaluation of alternative plans. These are mathematical functions which quantify the irregularities in the shapes and population distribution in the constituencies. These have been acknowledged as a significant check on arbitrariness in the process of redistricting.
An open-source geospatial tool has been developed in QGIS 3.16 for computing and evaluating the compactness of selected representative pre- and post-delimitation assembly constituencies (ACs) of Rajasthan. Four indices (Gibbs, Polsby-Popper (Cpp), relative moment of inertia, and normalized mass moment of inertia (NMMI)) have been identified which model the dispersion, boundary-irregularity, and population-distribution aspects of compactness; their performance has been compared and an appropriate combination of measures proposed.
The input spatial data includes multi-level administrative maps of the ACs joined with population attributes, along with pre- and post-delimitation AC boundary vector files. A QGIS Python script calculates the point and polygon features required for the selected indices and returns their numerical values in ASCII text files.
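Of the four indices, the Polsby-Popper score is the most compact to state: Cpp = 4πA / P², where A is the constituency's area and P its perimeter; it equals 1 for a circle and approaches 0 for highly irregular shapes. A minimal sketch of the computation (the shapes below are hypothetical, not actual AC geometries):

```python
import math

def polsby_popper(area: float, perimeter: float) -> float:
    """Cpp = 4*pi*A / P**2: 1.0 for a circle, near 0 for irregular shapes."""
    return 4 * math.pi * area / perimeter ** 2

# A circle of radius 5 is perfectly compact:
print(polsby_popper(math.pi * 5**2, 2 * math.pi * 5))   # 1.0 (up to rounding)

# A long, thin 10-by-1 rectangle is far less compact:
print(round(polsby_popper(10 * 1, 2 * (10 + 1)), 3))    # 0.26
```

In the QGIS tool, area and perimeter would come from the AC polygon geometry rather than hand-supplied values.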
The results closely correspond to the visual impression of compactness of the ACs. The open-source tool can be employed for delineating geographically compact constituencies. Alternative electoral boundary plans can be evaluated for compliance with the prescribed guidelines, effectively reducing arbitrariness in the final plan and enhancing the transparency and objectivity of the process in India.
“Mode Choice for Urban Poor: Every saved Rupee counts”
Anjali Pathak;
Academic Track (Oral)
Sensible allocation of limited income to maintain the household's livelihood is a financial-management skill inherited by the marginal sections of developing countries. Although food and housing are the largest expenditure heads, the cost of transportation is significant; it is optimized by the choice of workplace and residence, and minimized further by the choice of transport mode. This paper examines the choice of transport mode among the urban poor of Kanpur (India) when commuting to work. The methodology is a structured questionnaire survey among urban poor residents, correlating responses with mode choice. The average monthly fare incurred across means of transportation is expressed as a proportion of average monthly household income. This research will give insight for planning a cost-effective public transportation system alongside individually operated paratransit. It will also indicate the relative importance of walkability, a bicycle-friendly city, and mode-sharing facilities.
“MSpace.E: Advanced Urban Environment Simulation Platform”
NGUYEN VAN THIEN, Hirofumi Hayashi, iizukatoshiaki, Hirosawa Kunihiko;
General Track
“toeng.net”, which we announced at FOSS4G-ASIA 2023 Seoul, will be launched as “MSpace.E”.
MSpace.E is a comprehensive urban environment analysis platform using 3D city models. This platform integrates simulations of shadows, building surface shadows, noise, and wind, providing an approach to urban planning and environmental assessment.
Key features include 3D visualization using the Re:earth platform, environmental analysis integrated with user-provided construction data in IFC, FBX, GLB, and 3D Tiles formats, and newly added functionality for group management and sharing of analysis results.
Users can analyze the environmental factors within a selected area, and by utilizing the PLATEAU 3D city model, more accurate analysis is possible by taking existing building structures into account. For shadow analysis, users can select analysis options, specify the range, set parameters, and receive the results in the CMZL file format. In addition, users can upload their own construction data and visualize it in 3D on Re:earth. Group management and result-sharing functions make team collaboration easier. We also discuss a comparison of the implementation and performance of MSpace.E with the toeng.net prototype published last year.
Application areas include urban planning for optimal building placement and public space design, environmental impact assessment of architectural projects, and energy-saving strategy planning at the urban scale.
MSpace.E is a powerful platform for multi-faceted analysis and visualization of complex urban environments. It enables comprehensive urban environment simulation, strongly supporting decision-making for sustainable urban development.
“Multi-Class Oil Palm Tree Detection Using YOLOv8, YOLOv9, and YOLOv10: A Comparative Analysis”
Aakash Thapa, Teerayut Horanont;
General Track
The Southeast Asia (SEA) region leads the world in palm oil production, with Indonesia, Malaysia, and Thailand collectively contributing over 88% of global output. However, the region's tropical climate leaves oil palm trees vulnerable to various diseases such as Fatal Yellowing (FY) and Ganoderma boninense. To keep track of productivity, it is crucial to monitor the varying conditions of oil palm trees (healthy, dead, yellow, and small) and apply effective pruning techniques to treat affected trees. The manual approach to oil palm tree detection is expensive, tedious, and prone to inaccuracies. Our study therefore focuses on automating the detection of oil palm trees and their states using a deep learning (DL) algorithm on unmanned aerial vehicle (UAV) imagery. We employ the publicly available UAV dataset MOPAD, containing training and validation sets, to evaluate the performance of the latest open-source models: YOLOv8, YOLOv9, and YOLOv10. This research contributes to the FOSS4G community by demonstrating open-source models that enhance the accuracy and scalability of geospatial applications in precision agriculture, addressing real-world challenges.
“Mysuru 2034: An Integrated Geoinformatics Approach for Real Estate Valuation and Urban Growth”
CHANDAN M C, Shreyanka M, Nikitha K, Tejashvi Swamy, Pramath Rathithara HP;
Academic Track (Oral)
Over recent years, Mysore, a district in Karnataka, India, has seen remarkable urban growth and infrastructural development, transforming its landscape significantly. This study examines how this urban expansion influences property values, using data from 2014 and 2024 to forecast property values for 2034 with a Random Forest regression model. We focus on 110 key locations, looking at factors such as closeness to the central business district, railway station, bus stand, and local amenities like schools and hospitals. By finding the strongest correlations between these elements, we establish a relationship between property values and these factors to predict future values. Our findings highlight Mysore's vibrant economic growth and its potential for sustained progress. These insights are crucial for the real estate market, providing valuable information to make informed decisions about future property values amid ongoing urban development. By analyzing how urban growth impacts property values through sophisticated statistical models, this study sheds light on how infrastructural improvements and strategic locations drive real estate trends. The expected significant rise in property values by 2034 underscores Mysore's economic dynamism and its appeal as an emerging urban hub. We conducted a thorough analysis of various factors affecting property values, focusing on proximity to essential services and transportation hubs. These elements significantly influence property desirability and accessibility. Our use of the Random Forest regression model enables accurate predictions of future property values by understanding complex relationships between these variables. The strong correlation between guideline values and market values provides a reliable basis for predicting future real estate trends. This correlation is essential for stakeholders, including developers, investors, and policymakers, as it supports strategic decision-making based on market projections. 
The expected significant rise in property values indicates that Mysore is poised for considerable growth, driven by strategic developments and improved infrastructure. By understanding these trends, stakeholders can make informed decisions to capitalize on Mysore’s ongoing urban expansion, ensuring that investments and development strategies align with the city's projected economic vitality and growth potential. Our analysis highlights that proximity to the central business district, bus stops, and railway stations are key determinants of property values, greatly influencing market prices. We project a significant increase in property values, estimating a 118% rise by 2034. To visualize these future values, we employ Voronoi polygons, which offer a clear spatial representation of the predicted property value distribution. This approach provides stakeholders, including developers, investors, and policymakers, with valuable insights into future market trends. By understanding the impact of these location factors, they can make informed decisions regarding investments and development strategies. The anticipated rise in property values underscores the ongoing urban development and economic growth in Mysore, highlighting its potential as a thriving urban center. Our findings underscore the importance of strategic infrastructure development in driving property market dynamics and guiding future growth in the region.
In summary, this study provides an in-depth analysis of the relationship between urban growth and property values in Mysore. By employing advanced regression models and detailed location-based data, we have developed a robust forecast for property values in 2034. Our findings indicate a significant projected increase in property values, highlighting Mysore's continuous development and potential for future growth. These insights are essential for the real estate market, offering valuable guidance for future investments and development strategies in Mysore. The study emphasizes the impact of key factors such as proximity to the central business district, bus stops, and railway stations on property values. By understanding these dynamics, stakeholders, including developers, investors, and policymakers, can make informed decisions to navigate the evolving real estate landscape. The anticipated rise in property values underscores Mysore’s economic vitality and its promise as a thriving urban center, driven by strategic infrastructure development.
Keywords: Mysore, Urban growth, Property values, Regression model, Infrastructure, Stakeholders
“N-ViewAR : Visualize what’s beneath and manage beyond”
Santosh Gaikwad;
General Track
The effective management of underground utilities is crucial for urban infrastructure development and maintenance. The visualization and management of underground utilities have always been a significant concern, especially in countries like India, due to inaccurate and insufficient information. Traditional 2D mapping methods often fail to capture the complexity and spatial relationships of subterranean systems, leading to challenges in planning, maintenance, and damage prevention. Appropriate visualization is therefore crucial to avoid haphazard digging, accidents, and unnecessary damage. Visualizing underground utilities is challenging due to the complexity and invisibility of subterranean infrastructure. However, recent advancements in 3D visualization technology offer the potential to create immersive and realistic visual representations of data both above and below the ground.
Nascent Info Technologies has developed a mobile application named N-ViewAR, centered around 3D visualization using open-source technologies. This augmented reality mobile GIS app serves municipal corporations, field officers, private utility companies, and others by eliminating the need to carry physical maps on-site and preventing haphazard digging. Users can seamlessly visualize the physical environment while superimposing 3D models of underground utilities as 3D tiles generated using the pg2b3dm utility, allowing them to see what lies beneath the surface without physical excavation. This feature aids in making informed decisions while ensuring the safety and integrity of assets. The application also provides relevant ancillary details such as depth, diameter, and material, reducing the risk of accidental damage. The intuitive 3D interface requires minimal training for field officers and other users, ensuring a smooth transition and quick adoption.
N-ViewAR represents a pioneering solution to underground utility challenges in India, powered by advanced GIS and AR technology.
Keywords: 3D visualization, Underground utilities, Urban infrastructure, mobile application, GIS and AR
“New Way Using H3 to Manage GIS Data”
Tanaporn Songprayad;
General Track
Managing Geographic Information Systems (GIS) data with H3 (H3Geo) is an efficient, modern method for handling and analyzing geographic data. H3 uses a hierarchical hexagonal grid system in which data can be indexed at resolution levels from 0 to 15, which helps in partitioning and storing data effectively.
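The hierarchy is what makes H3 convenient for aggregation: each cell at a given resolution subdivides into seven children at the next finer resolution, so fine-grained data can be rolled up to coarser cells (in the h3-py package this is done with cell_to_parent). A toy Python stand-in for that roll-up, using hypothetical integer cell IDs in which a child's parent is simply child // 7 (real H3 indexes are 64-bit and are not computed this way):

```python
from collections import defaultdict

def parent(cell_id: int) -> int:
    """Toy parent lookup: each cell has 7 children, so divide by 7.
    (Real H3 uses 64-bit indexes; use h3-py's cell_to_parent instead.)"""
    return cell_id // 7

def roll_up(fine_counts: dict) -> dict:
    """Aggregate per-cell counts to the next coarser resolution."""
    coarse = defaultdict(int)
    for cell, n in fine_counts.items():
        coarse[parent(cell)] += n
    return dict(coarse)

fine = {700: 3, 701: 2, 705: 1, 714: 4}   # hypothetical fine-resolution counts
print(roll_up(fine))                      # {100: 6, 102: 4}
```

The same pattern, with real H3 cell IDs as dictionary keys, supports multi-resolution storage and analysis of point data.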
“On the performance of distributed rendering system for 3DWebGIS application on ultra-high-resolution display”
Tomohiro KAWANABE;
Academic Track (Oral)
Introduction
With the spread of IoT and the increasing resolution of observation sensors, the total amount of geospatial data is growing exponentially. Meanwhile, the resolution of the display devices used to analyze and visualize these data is reaching its limit due to various physical constraints. The maximum resolution of commercially available displays is 8K; 4K or 5K is considered the upper limit for desktop use.
Using the OS's multi-display function or a tiled-display driver provided by a GPU manufacturer, it is possible to create a display environment with an even larger area and higher resolution. However, such middleware is currently limited to 16K [1], which is therefore the maximum resolution achievable on a single PC.
Even if these mechanisms are used to create an ultra-high-resolution display environment, a WebGIS application can only render as much data as fits within the web browser's heap memory limit. For example, the 3DWebGIS viewer provided by the Tokyo Digital Twin Project [2] cannot render 3DTiles [3] building data for all 23 wards of Tokyo at once (textured building data is used for areas where textures are provided).
In this paper, we introduce ChOWDER, a web-based tiled display driver that enables distributed rendering of 3DWebGIS content across multiple web browsers, as a solution to the above problems and report the results of memory load balancing experiments using ChOWDER for distributed rendering.
Proposal of a distributed rendering method for 3DWebGIS
One possible solution to these problems is to distribute the display of a single WebGIS content across multiple PCs (multiple web browsers). This makes it possible to display a WebGIS at a resolution exceeding the limit of a single PC and distributes the memory load required to display the content across the PCs.
The scalable display system ChOWDER[4][5], jointly developed by RIKEN Center for Computational Science and Kyushu University, is an open-source tiled display driver that can create an ultra-high-resolution pixel space by arranging multiple displays that display a web browser in full-screen mode in tiles. It also supports distributed rendering of 3DWebGIS.
This function uses iTowns[6], an open-source 3DWebGIS, as middleware. iTowns uses Three.js as a WebGL rendering library, and Three.js has an API that can offset the view frustum[7].
The view frustum must be split appropriately to split and display 3D content on multiple display devices. ChOWDER uses the view frustum offset API of Three.js to split a single iTowns content into multiple view frustums, enabling multiple web browsers to split and render 3DWebGIS content [8].
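The split amounts to computing, for each browser, the sub-rectangle of the full virtual frame it is responsible for. A sketch of that arithmetic (written in Python for illustration; in ChOWDER the resulting values are passed to Three.js's PerspectiveCamera.setViewOffset, whose parameters are fullWidth, fullHeight, x, y, width, height):

```python
def view_offset(full_w: int, full_h: int, col: int, row: int,
                tiles_x: int, tiles_y: int):
    """Arguments for one tile of a tiles_x-by-tiles_y display wall, in the
    order expected by Three.js's PerspectiveCamera.setViewOffset."""
    w, h = full_w // tiles_x, full_h // tiles_y
    return (full_w, full_h, col * w, row * h, w, h)

# A 2x2 wall of 4K (3840x2160) displays renders one virtual 7680x4320 frame;
# each browser draws one quadrant.
for row in range(2):
    for col in range(2):
        print(view_offset(7680, 4320, col, row, 2, 2))
```

Each browser applies its own offset, so the four sub-frustums tile the original view exactly.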
However, at the time of the previous report [8], when iTowns executed a 3DTiles load command, it loaded all the data without judging whether it was inside or outside the view frustum range, so distributed rendering did not improve memory utilization efficiency. Since then, the 3DTiles load process was improved in iTowns Release 2.42.0; in this paper, we measured the amount of heap memory consumed by each browser when iTowns content was distributed and rendered using ChOWDER on multiple web browsers and confirmed the memory load distribution achieved by this method.
Experimental procedures and results
The experimental data used was the textured building data for Chiyoda, Minato, and Chuo wards in Tokyo, from the 3DTiles data distributed by the PLATEAU project [9] of the Ministry of Land, Infrastructure, Transport and Tourism of Japan.
The experiment first displayed the 3DTiles building data for the above three wards in full screen on a single 4K resolution display using iTowns on ChOWDER. The heap memory size of the web browser at this time was 268MB.
Next, the same content was displayed on a ChOWDER distributed display consisting of four 4K displays arranged in two horizontal and two vertical rows. Each display had a full-screen web browser. The heap memory sizes of each web browser were 133MB, 188MB, 68.3MB, and 37.7MB.
Finally, we conducted an experiment using nine 4K displays arranged in three rows and three columns. The heap memory sizes of each web browser were 66.8MB, 122MB, 140MB, 84.3MB, 87.2MB, 56.9MB, 41.2MB, 38.4MB, and 33.6MB.
From these experimental results, it can be said that distributed rendering of 3DWebGIS using ChOWDER achieves memory load balancing.
During distributed rendering, the heap memory size of each web browser is different because the amount of 3DTiles data contained in each responsible drawing area is different. Also, the total heap memory size of all browsers is larger than when rendering in a single browser because iTowns loads 3DTiles data that is wider than its view frustum, and data loading in overlapping areas occurs during distributed rendering.
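To make the load balancing concrete, the reported heap sizes can be summarized: the peak per-browser heap falls from 268 MB (single browser) to 188 MB (four browsers) to 140 MB (nine browsers), while the totals grow because of the overlap loading described above. A quick check of that arithmetic:

```python
# Reported per-browser heap sizes (MB) from the experiments above.
single = [268]
four = [133, 188, 68.3, 37.7]
nine = [66.8, 122, 140, 84.3, 87.2, 56.9, 41.2, 38.4, 33.6]

for name, heaps in (("1 browser", single), ("4 browsers", four), ("9 browsers", nine)):
    print(f"{name}: peak {max(heaps)} MB, total {round(sum(heaps), 1)} MB")
```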
Future work and conclusion
In this experiment, we measured the web browser's heap memory, but did not measure GPU memory consumption. However, because 3DWebGIS uses WebGL for rendering, we believe that a more precise evaluation can be made by measuring GPU memory consumption as well.
In addition, since distributing rendering across more web browsers is expected to further distribute memory load, we plan to conduct experiments by increasing the number of distributed displays.
In this paper, we have shown the limitations of current 3DWebGIS as the volume of data to be displayed grows, and proposed distributed rendering as a solution. As the means to realize this method, we introduced the view frustum offset API of Three.js, iTowns, a 3DWebGIS to which it can be applied, and ChOWDER, a web-based tiled display driver that incorporates them. Finally, we presented experimental results showing that distributed rendering with these tools achieves memory load distribution, demonstrating that this method is one solution to the growth of data displayed in 3DWebGIS.
“Open or Perish”
Cannata Massimiliano;
Keynote Talk
Research assessment is traditionally based on criteria such as the number of peer-reviewed publications, journal impact factor, and the number or amount of grants awarded. Unfortunately, this approach has been shown to deeply influence the way research is conducted, rewarding quantity over quality. To maximize the impact of research as a practical means to address societal challenges, a new approach, named Open Science, has been endorsed worldwide by major funding agencies over the last decade as the new research pathway. Quality and impact, collaboration and sharing, diversity and equity, transparency and efficiency have become the new paradigms to be pursued.
To foster the adoption of Open Science, funding agencies are acting on two fronts: on one hand, they influence policies and require the adoption of open science practices (Open Access, Open Data, and Citizen Science) as a condition for access to funding; on the other hand, they focus on incentives and explore new methods for evaluating scientific results.
It is clear that in the near future the current “publish or perish” aphorism will shift towards “open or perish” as a description of what it takes to succeed in an academic career. But how should a modern researcher act to comply with this new paradigm? It is essential for researchers to understand the best practices that guarantee recognition of their achievements by connecting the researcher, the publications, the software, and the data. This talk introduces these best practices with the aim of sustaining Open Science adoption, with particular reference to open software. Finally, possible new approaches envisioned for the evaluation of project proposals, career advancement, and institutional assessment are presented and discussed.
“Open spatial data in Thailand Higher Education Context - Classroom to Daily Life”
Chomchanok Arunplod;
General Track
Open spatial data plays a transformative role in the landscape of higher education in Thailand, bridging the gap between theoretical learning and practical application in daily life. This keynote address will explore how open spatial data is being integrated into the higher education curriculum, emphasizing its significance in enhancing students' understanding of geography, urban planning, environmental management, and related disciplines.
The session will illustrate how open spatial data is increasingly influencing daily life in Thailand, empowering communities through community-driven mapping projects, public health initiatives, and sustainable development planning. It will also address the challenges and opportunities in the widespread adoption of open spatial data within higher education, including issues of data quality and accessibility and the need for ongoing support and collaboration between academic institutions, government agencies, and the private sector. Ultimately, this session aims to inspire educators, students, and professionals to harness the power of open spatial data, transforming education and society at large in Thailand.
“Optimizing Photovoltaic Energy Potential Analysis through Economic Modeling and Open Source GIS Data Integration”
Changyeol Yun;
Poster Presentations
We define the terminology and calculation methods for the potential volume of photovoltaic (PV) energy across South Korea and derive the calculation and mapping results using various open GIS data and software. To estimate the theoretical potential of PV energy in South Korea, we divided the entire country into 100 m × 100 m grids and performed calculations for each grid. The solar irradiance for each grid was determined using GK-2A (GEO-KOMPSAT-2A) satellite imagery. To assess the feasibility of PV installation, spatial data from various GIS layers were applied to identify suitable areas. We then calculated the possible capacities and annual electricity production by applying the capacity factor of PV systems for each grid, resulting in the technical potential. We evaluated the economic viability by incorporating sociocultural regulations and Renewable Portfolio Standard (RPS) subsidy policies. The Levelized Cost of Energy (LCOE) was calculated for each grid and compared with the combined value of the System Marginal Price (SMP) and Renewable Energy Certificate (REC) to identify economically feasible areas, which were classified as market potential. The analysis utilized over 40 GIS layers, primarily sourced from national open data. Evaluating data suitability and extracting key parameters were the most challenging aspects of this process. This comprehensive approach, which integrates current governmental and municipal ordinances, technical performance indicators, and land-use factors, provides essential metrics for establishing future energy plans in South Korea.
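The per-grid screening chain described above can be sketched as follows; all identifiers, prices, and capacity-factor values here are hypothetical placeholders, not the study's data.

```python
# Sketch of the grid-cell screening chain: technical potential from capacity
# and capacity factor, then market potential by comparing LCOE with SMP + REC.
# All numbers and field names are illustrative placeholders.

HOURS_PER_YEAR = 8760

def annual_generation_mwh(capacity_mw, capacity_factor):
    """Technical potential of one grid cell: capacity x capacity factor x hours."""
    return capacity_mw * capacity_factor * HOURS_PER_YEAR

def market_potential(cells, smp_krw_per_kwh, rec_krw_per_kwh):
    """A cell is economically feasible when its LCOE <= SMP + REC."""
    threshold = smp_krw_per_kwh + rec_krw_per_kwh
    return [c for c in cells if c["lcoe_krw_per_kwh"] <= threshold]

cells = [
    {"id": "g001", "capacity_mw": 0.4, "cf": 0.15, "lcoe_krw_per_kwh": 130.0},
    {"id": "g002", "capacity_mw": 0.4, "cf": 0.12, "lcoe_krw_per_kwh": 175.0},
    {"id": "g003", "capacity_mw": 0.4, "cf": 0.14, "lcoe_krw_per_kwh": 150.0},
]
feasible = market_potential(cells, smp_krw_per_kwh=110.0, rec_krw_per_kwh=45.0)
```

In the study this comparison is made cell by cell over the national 100 m grid, with LCOE, SMP, and REC values derived from the GIS layers and policy data.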
“Predictive Analysis of LULC Dynamics for Area Under Submergence and its Environmental Impacts for the Mekedatu Reservoir Project”
CHANDAN M C, Pooja K, Pratham Goudageri, Vickey Rajendra Hegade, Prithvi Raj Gowda S;
Academic Track (Oral)
Reservoirs play a crucial role in global water resource management, hydroelectric power generation, and flood control. However, their construction often entails significant ecological and socio-economic impacts, necessitating thorough environmental assessments. The Mekedatu Reservoir Project, situated on the Cauvery River in the Ramanagar district of Karnataka, India, holds paramount significance. Aimed at supplying the Bengaluru Metropolitan Region and its surroundings with drinking water, the project also endeavors to generate 400 MW of renewable energy annually. Despite its benefits, the project comes with ecological costs, as approximately 5252.40 hectares of revenue, forest, and wildlife land will be submerged. This necessitates a detailed evaluation of its potential environmental consequences. This study identifies a knowledge gap in the existing literature regarding the ecological implications of the Mekedatu Reservoir Project. It seeks to fill this void by forecasting land use and land cover (LULC) changes for the years 2000, 2010, and 2020 using the Random Forest method, and assessing the submergence area for different levels of the proposed reservoir. Catchment delineation is performed using the Soil and Water Assessment Tool (SWAT). Additionally, the Cellular Automaton-Markov Chain technique is employed to predict land use and land cover changes for the year 2030. Integrating these methodologies, the research provides a holistic understanding of the project's environmental footprint. The land use and land cover analysis revealed significant shifts from 2000 to 2020, with forest cover decreasing from 71.54% to 60.71% and barren land increasing from 19.55% to 29.56%. The projected land use and land cover for 2030 shows further forest reduction to 58.28% and barren land increasing to 31.11%. These changes highlight a trend towards deforestation and land degradation, posing severe ecological threats. 
The submergence area at the proposed reservoir Full Reservoir Level is estimated to be 5252.4 hectares, distributed as 6.62% water, 19.55% barren land, 71.54% forest area, and 2.29% built-up area for the year 2000. The inundation of these areas will lead to significant biodiversity loss, affecting numerous plant and animal species. In line with the Sustainable Development Goals, which advocate for sustainable water management, this study emphasizes the importance of informed decision-making and sustainable development practices. The findings underscore the need to designate new ecologically sensitive areas and to establish wildlife corridors, conservation zones, and afforestation programs to mitigate the adverse impacts. Continuous environmental monitoring and research are essential to track biodiversity impacts and adjust conservation strategies accordingly. Policy implications of this study suggest that due process of law, linked with the principle of natural justice, must be adhered to in ensuring environmental balance. Recommendations from the World Commission on Dams (WCD) highlight the need to reduce the negative impacts of dams by increasing the efficiency of existing assets and minimizing ecosystem impacts. Policymakers must understand the long-term ecological consequences of such mega projects and explore alternatives. Sustainable development models must be based on equality and natural justice. Future research should focus on the socio-economic impacts of the Mekedatu Reservoir Project, particularly the displacement of local communities. This includes conducting detailed socio-economic assessments, inclusive resettlement planning, livelihood restoration programs, and initiatives to preserve cultural heritage. Continuous monitoring and long-term studies are crucial to ensure the well-being of resettled populations and to balance development with environmental and social sustainability.
In summary, this study advances the understanding of environmental impact assessment in reservoir projects, providing valuable insights for stakeholders and policymakers. It highlights the critical need for sustainable development practices that ensure equitable access to water resources while preserving environmental integrity.
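The Markov component of the CA-Markov projection amounts to a first-order update of class shares (the cellular automaton then allocates those shares spatially). A minimal sketch follows; the transition matrix is purely illustrative, and only the 2020 forest and barren shares come from the study, with the water and built-up shares assumed for completeness.

```python
# First-order Markov projection of land-cover class shares. P[i][j] is the
# (hypothetical) probability that class i converts to class j per decade.
classes = ["forest", "barren", "water", "built-up"]
P = [
    [0.96, 0.03, 0.00, 0.01],  # forest
    [0.01, 0.97, 0.00, 0.02],  # barren
    [0.00, 0.00, 1.00, 0.00],  # water
    [0.00, 0.00, 0.00, 1.00],  # built-up
]
# 2020 shares: forest and barren from the abstract; water/built-up assumed.
x2020 = [0.6071, 0.2956, 0.0662, 0.0311]

# One decadal step: x2030_j = sum_i x2020_i * P[i][j]
x2030 = [sum(x2020[i] * P[i][j] for i in range(4)) for j in range(4)]
```

Because every row of P sums to one, the projected shares still sum to one, and with this matrix forest declines while barren land grows, matching the direction of change the study reports.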
Keywords: Environmental Impact Assessment, Reservoir Project, Machine Learning, Random Forest, Markov Chain, Cellular Automaton, Land Use Changes, Submergence Area, Sustainable Development Goals, Water Resource Management.
“PWAGIS QGIS Plugins Development: Lessons Learned in Moving to Free and Open Source Solutions for Geospatial”
Prasong Patheepphoemphong, Pongsakorn Udombua;
General Track
This talk will explore the transition from proprietary software to the PWAGIS QGIS plugins, focusing on the limitations experienced with previous proprietary solutions, the challenges encountered during development, and the innovative functionalities introduced in the new plugins. Attendees will gain insights into the practical aspects of moving from a legacy system to an open-source solution, highlighting both the obstacles and the opportunities this transition presents.
“Python-Based Open-Source Tools for Enhanced and Automated Remote Sensing and GIS Applications using Deep Learning”
Bharath Haridas Aithal;
Academic Track (Oral)
Accurate, precise, and swiftly processed data of Earth's many features is vital to the field of remote sensing. When trying to glean useful information from satellite and UAV imagery, researchers encounter a number of obstacles. The extraction of features from remotely sensed images is becoming increasingly important in many domains, such as land-use planning, transportation infrastructure development, environmental monitoring, etc., and image analysis systems based on deep learning are becoming indispensable for this task. Due to the availability of source code, its adaptability, and its collaborative nature, open-source software is widely used in analysing such features. Because these tools are easy to modify, open-source software developers can automate a wide range of deep learning-based applications. For instance, they can create graphical user interfaces (GUIs) for tasks like automatically identifying and segmenting building rooftops, classifying images, processing spatial data, creating vector shapefiles, calculating rooftop solar potential, etc. The transparency and repeatability of open-source software make it a favourite among scientific researchers. Because of Python's accessibility and ease of use, libraries built using the language are gaining traction in the open-source software development space. Libraries with strong geospatial data handling and processing capabilities, such as GDAL, RasterIO, Shapely, etc., contribute to the advancement of GIS and remote sensing. Popular deep-learning and machine-learning packages like Scikit-learn, TensorFlow, and PyTorch have simplified the application of numerous algorithms to tasks like feature extraction and identification from remotely sensed images. In a nutshell, numerous scientific studies in the domain of remote sensing applications are increasingly reliant on open-source geospatial data processing tools that use deep learning/machine learning techniques.
The purpose of this study is to showcase the significance of open-source Python libraries in the detection, classification, and segmentation of building rooftops from aerial imagery. Following detection, the geospatial information of the input images is automatically transferred to the detected and classified building rooftops for georeferencing, again using Python libraries. Additionally, these libraries facilitate the calculation of the solar potential of the identified rooftops. A graphical user interface will be developed to automate these processes, providing usable and efficient solutions for planners and researchers.
The methodology of this study encompasses key steps utilizing various Python libraries. Initially, PyTorch and Ultralytics were employed to leverage the YOLOv8 algorithm for accurate detection, segmentation, and classification of building rooftops from remotely sensed imagery. The SpaceNet-3 Vegas and aerial image datasets were used to train the YOLOv8 model. After completion of the detection, segmentation, and classification tasks, the results are extracted and visualized using OpenCV-Python, NumPy, and Pandas. For the extraction and transfer of geospatial information, RasterIO and GDAL were utilized to ensure precise georeferencing. The georeferenced segmentations are plotted using GDAL and OpenCV-Python. Finally, to generate the vector shapefiles of the segmented and classified rooftops, Shapely, OpenCV-Python, and GeoPandas were used. This combination of libraries provides an efficient workflow for rooftop detection and a robust framework for further geospatial studies. Calculation of rooftop solar potential requires a hillshade map, which is derived from a Digital Elevation Model (DEM) and a slope-aspect map. NumPy, RasterIO, and GDAL are used to generate the slope-aspect map, which provides information about the orientation and steepness of the rooftops. The hillshade map, which simulates the effect of sunlight on the terrain, is generated using EarthPy, RasterIO, and NumPy. To streamline and automate the entire process, a graphical user interface was also developed with the help of PyQt (Figure 1). This methodology, powered by these Python libraries, ensures an accurate and efficient workflow for rooftop detection and geospatial analysis.
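The slope-aspect and hillshade step can be sketched with NumPy alone; in the actual workflow RasterIO/GDAL read the DEM and EarthPy renders the result, and the function names and the tiny synthetic DEM below are illustrative.

```python
import numpy as np

def slope_aspect(dem, cellsize=1.0):
    """Slope and aspect (radians) from central differences on interior cells."""
    dzdx = (dem[1:-1, 2:] - dem[1:-1, :-2]) / (2 * cellsize)
    dzdy = (dem[2:, 1:-1] - dem[:-2, 1:-1]) / (2 * cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    return slope, aspect

def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Illumination (0-255) for a sun at the given azimuth/altitude."""
    slope, aspect = slope_aspect(dem, cellsize)
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0) * 255.0

# Tiny synthetic DEM: a 5x5 plane rising toward the east
dem = np.tile(np.arange(5.0), (5, 1))
hs = hillshade(dem)  # 3x3 interior of the 5x5 DEM
```

The interior-only differencing shrinks the output by one cell on each edge; production tools such as `gdaldem` pad the borders instead.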
“Raster Analysis with ease using Uber H3 Indexes and PostgreSQL”
Aadesh Baral, Kshitij Raj Sharma;
General Track
In this talk, we will explore how we can use Uber H3 indexes for raster analysis. We'll discuss performance considerations, limitations, advantages, and how H3 tiles with raster cell values can be presented on maps. We'll also cover Raster-to-H3 conversion strategies.
Our toolkit will include rasterio, GDAL, Cloud Optimized GeoTIFFs (COGs), PostgreSQL, and their integration with the H3 Python and h3ronpy libraries. We'll demonstrate the analysis process using vector datasets from OpenStreetMap and a couple of openly available Cloud Optimized GeoTIFFs.
This talk is aimed at GIS professionals, data scientists, and developers interested in advanced geospatial analysis techniques. Let's explore how H3 indexing can be applied to solve complex spatial problems at scale.
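The core raster-to-H3 pattern can be sketched as: index each raster cell centre, then aggregate the values that share an index. In the sketch below, `cell_index` is a plain lat/lng binning stand-in for `h3.latlng_to_cell(lat, lng, res)` from the H3 v4 Python bindings, so the example runs without h3 installed; a real pipeline would also use windowed reads from a COG rather than an in-memory pixel list.

```python
from collections import defaultdict

def cell_index(lat, lng, res):
    # Stand-in for h3.latlng_to_cell(lat, lng, res): coarse lat/lng binning.
    return (round(lat * res), round(lng * res))

def raster_to_cells(pixels, res=10):
    """Aggregate (lat, lng, value) pixels to per-cell mean values."""
    sums = defaultdict(lambda: [0.0, 0])
    for lat, lng, value in pixels:
        acc = sums[cell_index(lat, lng, res)]
        acc[0] += value
        acc[1] += 1
    return {idx: total / count for idx, (total, count) in sums.items()}

pixels = [(13.700, 100.500, 2.0), (13.701, 100.501, 4.0), (14.000, 101.000, 8.0)]
cells = raster_to_cells(pixels)  # two cells: mean 3.0 and 8.0
```

Once each cell carries an aggregate value, the results can be stored in PostgreSQL and joined against vector data indexed to the same H3 resolution.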
“Real-Time Monitoring and Positioning of Agricultural Tractors Using a Low-Cost GNSS and IoT Device”
Thanwamas Phasinam;
Academic Track (Oral)
This research aims to develop a low-cost GNSS receiver device for positioning agricultural tractors, incorporating Differential GPS (DGPS) technology for enhanced accuracy using free and open source software. Integrated with IoT technology, the device was tested to receive GNSS data and other relevant information, including geographic coordinates (latitude and longitude), tractor speed, tractor direction, date, time, and the number of satellites receiving signals. The DGPS setup involves using one receiver as a base station and another on the tractor, where the base station provides correction data to improve positioning accuracy. The data collected by the receiver is transmitted to a signal processing device for mapping the coordinates, creating a route of the tractor's movement that is displayed on a real-time Web Map Application. This process includes error correction to ensure high accuracy. The IoT device was installed on the left rear wheel of the agricultural tractor. Test results show that the data from the developed device has an average accuracy of 22 centimeters, which is acceptable and sufficient for agricultural tractor positioning applications. Furthermore, this system enables real-time monitoring of the tractor's operations.
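The abstract does not specify the device's output format; assuming the GNSS receiver emits standard NMEA 0183 sentences, the position, fix quality (2 indicates a DGPS-corrected fix), and satellite count can be extracted from a GGA sentence as follows.

```python
def dm_to_deg(dm, hemisphere):
    """Convert NMEA ddmm.mmmm / dddmm.mmmm to signed decimal degrees."""
    degrees, minutes = divmod(float(dm), 100)
    value = degrees + minutes / 60.0
    return -value if hemisphere in ("S", "W") else value

def parse_gga(sentence):
    """Extract (lat, lon, fix_quality, num_satellites) from a NMEA GGA sentence.
    fix_quality is 1 for a plain GPS fix and 2 for a DGPS-corrected fix."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    lat = dm_to_deg(fields[2], fields[3])
    lon = dm_to_deg(fields[4], fields[5])
    return lat, lon, int(fields[6]), int(fields[7])

sample = "$GPGGA,123519,4807.038,N,01131.000,E,2,08,0.9,545.4,M,46.9,M,,*47"
lat, lon, fix, sats = parse_gga(sample)
```

In a pipeline like the one described, parsed fixes would be transmitted by the IoT device and plotted as the tractor's route on the real-time web map.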
“Regional Land Cover Monitoring System: A Modular Land Cover System for Environmental Monitoring and Sustainability Applications”
Akkarapon Chaiyana;
Academic Track (Oral)
The Mekong region, comprising Cambodia, Laos PDR, Myanmar, Thailand, and Vietnam, is essential for agriculture and aquaculture, producing rice, crops, cassava, maize, sugarcane, and fish, thereby contributing to global food security. Additionally, this region acts as a significant carbon sink, absorbing greenhouse gases to mitigate climate change, regulate surface temperatures, and sustain ecosystems and biodiversity. However, factors such as rapid urbanization, severe floods and droughts, economic trade-offs, and rising sea levels are altering land use and land cover (LULC) patterns.
Mitigation and adaptation strategies are crucial for informed decision-making and policy development. A sustainable approach begins with the development of LULC maps, in particular through more than 20 years of monitoring to support visual interpretation. Each country within the region has its own policies addressing priority issues. To support the Mekong region, the Regional Land Cover Monitoring System (RLCMS) covers the period from 2000 to 2023. Modern technologies such as remote sensing, artificial intelligence, cloud computing, machine learning, deep learning, and Google Earth Engine (GEE) facilitate analysis from the pixel level to the global level.
This study aims to map long-term LULC changes in the Mekong region using Landsat imagery from 2000 to 2023. Due to the region's tropical monsoon climate and prevalent cloud cover, the LandTrendr Optimization Algorithm (LTOP) was employed to minimize errors through time series interpolation, filling gaps caused by cloud obscuration. Nineteen LULC types were defined based on end-user objectives and land cover typologies from various organizations, including aquaculture, barren, cropland, crop plantation, deciduous forest, evergreen forest, flooded forest, forest plantation, grassland, mangrove, other forest, palm, rice, rubber, shrub, urban, water, wetland, and snow.
The reference data included a combination of field observations and high-resolution imagery from sources such as PlanetScope and time series data, amounting to over 300,000 data points. This reference data was collated from various collaborators, including national partners and organizations such as the Land Development Department (LDD) in Thailand, the Global Forest Resources Assessment (GFRA), the Food and Agriculture Organization (FAO), the Forest Department of Myanmar, the Space Technology Institute in Vietnam, the Wildlife Conservation Society (WCS) in Cambodia, and the Forest Inventory Planning Division of Laos PDR. These data were photo-interpreted and labeled using very high-resolution (VHR) imagery in the Collect Earth desktop application.
Machine learning (ML) and deep learning (DL) techniques were used to process land use probabilities and generate a primitive map. The study employed a random forest (RF) model to map evergreen, deciduous, and flooded forests based on criteria of large area, similar texture, and color, while the remaining primitive maps were refined using the EfficientNetV2 model. A hierarchical Decision Tree (DT) rule was then applied to the assemblage structure using Monte Carlo simulation methods, incorporating additional criteria from the Land Cover Classification System (LCCS) by adding Tree Canopy Cover (TCC) and Tree Canopy Height (TCH) from the Global Land Analysis and Discovery (GLAD) laboratory to reduce forest mapping errors. The logical transition approach was used to verify each pixel when post-processing the data, ensuring robust RLCMS class results. Validation of the RLCMS map yielded an overall accuracy of 0.72 and a kappa statistic of 0.70.
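The reported metrics follow the standard confusion-matrix definitions; a minimal sketch, using a made-up two-class matrix rather than the study's validation data:

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows: reference class, columns: mapped class)."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    row_totals = [sum(row) for row in cm]
    col_totals = [sum(col) for col in zip(*cm)]
    # Chance agreement from the marginal totals
    expected = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return observed, (observed - expected) / (1 - expected)

cm = [[50, 10],
      [5, 35]]
oa, kappa = accuracy_and_kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy for multi-class land cover maps.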
In conclusion, the RLCMS developed through this study provides a reliable tool for monitoring long-term land use and land cover changes in the Mekong region, thereby supporting informed decision-making and policy development to address environmental and socio-economic challenges. The integration of advanced technologies such as remote sensing, machine learning, and cloud computing ensures high precision and efficiency in data analysis. Additionally, this system is universally applicable, as it utilizes publicly accessible global data (Landsat) and features an adaptable architecture that allows for customizable assembly logic to map various land cover typologies according to specific landscape monitoring objectives worldwide.
“Research on Sustainable Agricultural Management Using Agricultural Water Circulation Measurement Data”
JiHyeon Lee, SuhyeonKim, Mijin Lee;
Poster Presentations
Agricultural water accounts for about 42% of water resource use in South Korea, yet quantitative data remain insufficient despite this high usage. When the supply amount is calculated from the reservoir level, uncertainty is high because the actual effective storage differs from the designed effective storage.
The purpose of this study is to develop technology that enables more efficient and sustainable agricultural management by utilizing measured data such as rainfall and reservoir discharge to present optimal water management plans through various scenario analyses based on the data. By applying digital twin technology for 3D visualization, the circulation process of agricultural water can be analyzed more effectively.
“Rust for Geospatial Data Processing: A Case Study with CityGML Converter for PLATEAU, Japan's Open Digital Twin Models”
Sorami Hisamoto;
General Track
Rust is an open-source programming language renowned for its performance, reliability, and productivity. This case study focuses on our experience developing an official CityGML converter for PLATEAU, a project led by Japan's Ministry of Land, Infrastructure, Transport, and Tourism to model and utilize digital twin open data. The tool is publicly available as open-source software: https://github.com/Project-PLATEAU/PLATEAU-GIS-Converter
With this tool, you can convert the original CityGML data to a variety of formats, such as 3D Tiles, Mapbox Vector Tiles (MVT), GeoPackage, GeoJSON, KML, CZML, and even Shapefile.
Our decision to use Rust was driven by its efficiency and robust features, making it an ideal choice for handling complex low-level data processing tasks. Additionally, we adopted Tauri, a Rust-based open-source toolkit that enables the creation of cross-platform desktop applications using web frontend technologies.
In this talk, we will explore the reasoning behind our choice of Rust, the challenges we encountered during the development process, and the benefits we gained by leveraging this technology stack.
“Scrollytelling the 53 Stations of Tōkaidō: An Interactive Journey Through Japan’s Historic Route”
Sorami Hisamoto;
General Track
The 53 Stations of the Tōkaidō (東海道五十三次, Tōkaidō Gojūsan-tsugi) are iconic rest areas along the historic coastal road that once connected modern-day Tokyo to Kyoto. This route is renowned for its Ukiyo-e (浮世絵) prints, a distinctive form of Japanese painting that flourished during the Edo period from the 17th to the 19th centuries.
To bring this historic route to life, we have developed a web-based “scrollytelling” experience (a combination of “scrolling” and “storytelling”) that invites users to interactively traverse the route via a dynamic map. You can explore it yourself at https://sorami.dev/tokaido-scrollytelling/
This project harnesses Mapbox GL JS and a variety of open-source technologies, including Turf.js, Svelte, UnoCSS, and Scrollama. These tools, combined with open data for the route, stations, and accompanying artworks, enable us to offer a rich and engaging experience. All data and code are available at https://github.com/sorami/tokaido-scrollytelling.
In this talk, we will explore the potential and challenges of scrollytelling with maps—a contemporary method of content presentation enabled by modern digital tools. We will discuss the strengths and limitations of this storytelling style, examine various technologies available for creating such experiences, and detail the development process behind our project.
“Sen2Extract: A Free Online Tool to Access Environmental Index Time Series from Sentinel 2”
George Ge;
General Track
Since the European Space Agency Copernicus’s launch of the Sentinel satellites in 2015, there has been a rising interest in adapting free and high-quality geospatial data to inform scientific research. Of particular interest are environmental indices from Sentinel 2 (e.g. NDVI, MNDWI) which can be utilized for analysis and modelling in various topics in health, agriculture, and biodiversity.
However, while Sentinel 2 imagery is freely available, the process of acquiring, deriving, and extracting meaningful data is not straightforward. Sen2Chain, a Python tool that automates the acquisition of Sentinel 2 images and the calculation of these indices, and Sen2Extract, an R tool for interacting with Sen2Chain from the web, were created to address this problem.
This talk explores how we built these tools and applied them to various projects around the world, and how you can potentially adapt them for your own projects.
“Server-Side and Client-Side Rule-Based Topology in Python at the Provincial Waterworks Authority”
PEERANAT PRASONGSUK, NATPAKAL MANEERAT;
General Track
Geospatial Topology, a fundamental concept in geographic information systems, focuses on the analysis and characterization of spatial relationships between geographic entities without alteration of their intrinsic properties. This presentation examines the implementation of Geospatial Topology rules utilizing Python programming language, facilitating execution in both client-side and server-side environments.
We present an empirical case study from the PWA GIS Department of Thailand, which employs a comprehensive set of over 30 topology rule-based validations to ensure data integrity and consistency across national cartographic operations. The research investigates two primary platforms: QGIS Desktop and Web Applications, both serving as client-side interfaces capable of executing topology scripts and generating inconsistency reports for subsequent rectification.
A critical distinction between QGIS Desktop and Web Application lies in their respective execution paradigms: QGIS Desktop operates within a local environment, while the Web Application leverages server-side processing capabilities.
This presentation will elucidate methodologies for developing custom topology rules and demonstrate techniques for accessing single-algorithm Python scripts across both Desktop and Web Application environments. By bridging the technological gap between these platforms, we aim to enhance the efficacy of geospatial data quality control processes and optimize GIS workflows.
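As an illustration of the kind of single-algorithm topology rule discussed (not PWA's actual rule set), here is a pure-Python "dangling endpoint" check for a pipe network; in production such a rule would typically operate on real geometries via a library such as Shapely.

```python
from collections import Counter

def find_dangles(lines):
    """Topology rule sketch: flag endpoints that touch no other line.
    In a water network, most pipe ends should coincide with another pipe's
    end; isolated endpoints are reported for review (legitimate service
    ends would be whitelisted in a production rule)."""
    endpoint_counts = Counter()
    for line in lines:
        endpoint_counts[line[0]] += 1
        endpoint_counts[line[-1]] += 1
    return sorted(pt for pt, n in endpoint_counts.items() if n == 1)

network = [
    ((0, 0), (1, 0)),   # connects to the next pipe at (1, 0)
    ((1, 0), (1, 1)),
    ((5, 5), (6, 6)),   # isolated pipe: both ends are dangles
]
dangles = find_dangles(network)
```

Because the rule is plain Python, the same function can run inside QGIS Desktop as a processing script or behind a web API on the server, which is exactly the cross-platform property the talk emphasizes.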
The findings of this research contribute to the broader understanding of cross-platform geospatial topology implementation and offer practical insights for GIS professionals and developers seeking to improve data validation processes in diverse computational environments.
“SERVIR Southeast Asia Air Quality Explorer: A Tool Harnessing Satellite and Modeling Data for Pollutant Monitoring in the Region”
Thannarot Kunlamai;
General Track
Air pollution in Southeast Asia has reached critical levels, significantly impacting human health across the region. Nearly the entire population lives in areas where air pollution exceeds the World Health Organization’s (WHO) safe air standards. This severe pollution is primarily due to rapid industrialization, urbanization, and deforestation, which have increased the amount of harmful pollutants in the air, particularly fine particulate matter known as PM2.5. Seasonal agricultural burning, a common practice in the region, also contributes significantly by releasing large quantities of smoke and particulate matter into the air. The rapid pace of urbanization has led to increased vehicle emissions and construction activities, further degrading air quality.
To address this issue, SERVIR Southeast Asia (SEA), a joint initiative of the U.S. Agency for International Development, the National Aeronautics and Space Administration (NASA), and the Asian Disaster Preparedness Center (ADPC) — its implementing partner, has developed the "SERVIR Southeast Asia Air Quality Explorer" to monitor air pollution and health impacts using satellite data and atmospheric modeling. The application uses advanced data visualization techniques to present complex datasets in an accessible manner. By harnessing the power of satellite data and predictive models, we hope that the SERVIR Southeast Asia Air Quality Explorer (SEA AQE) serves as a valuable resource for policymakers, researchers, and the general public, empowering them to make informed decisions to mitigate the adverse effects of air pollution.
The Air Quality Explorer features a user-friendly interface accessible on both desktop and mobile devices, allowing users to monitor real-time air pollution levels, including three-day forecasts of PM2.5 at 5 km resolution and NO2 from the Geostationary Environment Monitoring Spectrometer (GEMS). The application also features a fire hotspot map, helping users anticipate changes in air quality. It ranks cities across the Southeast Asia region based on their PM2.5 levels and integrates PM2.5 data with a health index to translate the data into actionable health recommendations. Additionally, the tool includes six-hourly forecast wind data from NOAA and ground station data from Thailand’s Pollution Control Department (PCD), offering a comprehensive view of air quality dynamics across the region.
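The abstract does not specify the tool's exact health-index mapping; as an illustration of how a PM2.5 concentration is commonly turned into an actionable index, the widely used US EPA AQI applies piecewise-linear interpolation over a breakpoint table (the pre-2024 table is shown here).

```python
# US EPA PM2.5 AQI breakpoints (ug/m3, 24-hour average; pre-2024 table):
# (conc_low, conc_high, aqi_low, aqi_high)
BREAKPOINTS = [
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for Sensitive Groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very Unhealthy
    (250.5, 500.4, 301, 500),  # Hazardous
]

def pm25_to_aqi(concentration):
    """Piecewise-linear AQI for a 24-h PM2.5 concentration in ug/m3."""
    c = round(concentration, 1)
    for c_lo, c_hi, i_lo, i_hi in BREAKPOINTS:
        if c_lo <= c <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
    raise ValueError("PM2.5 concentration out of AQI range")
```

Each AQI band then maps to a health recommendation, which is the step the Explorer surfaces to end users.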
This project highlights the potential of combining satellite technology and forecast modeling with web-based platforms to improve environmental monitoring and decision-making in Southeast Asia. SERVIR SEA and collaborators will continue to enhance this tool with new data, such as higher-resolution PM2.5 forecasts, fire risk, and fire emission inventory products, enabling users to link and analyze these with air pollution indicators. Additionally, large language models (LLMs) will be applied to this tool, allowing users to input queries in natural language. This feature will translate user input into data retrieval commands, returning the desired results and making the tool even more accessible and user-friendly. Furthermore, we will develop a “SERVIR SEA AQ API” service to provide air pollution satellite image data and JSON-formatted data for integration with other platforms.
“SERVIR Southeast Asia Biophysical M&E Dashboard: A Tool to Support Landscape Monitoring”
MD KAMAL HOSEN;
General Track
Environmental degradation in Southeast Asia, particularly in Cambodia, is an alarming issue that poses significant threats to both ecosystems and human well-being. The region is experiencing rapid deforestation driven by agricultural expansion and urban development, leading to substantial loss of forest cover. This deforestation disrupts ecological balance and biological environments, contributing to habitat fragmentation, biodiversity loss, and alterations in local microclimates. Concurrently, changes in land use and land cover exacerbate these problems, further fragmenting habitats and impacting species distribution. The increasing risk of forest fires, fueled by climate variability, land use changes, and agricultural burning, adds another layer of concern, contributing to air pollution and posing risks to both natural and human systems.
In response to these growing challenges, a comprehensive monitoring tool is essential for systematically observing and analyzing environmental parameters. Such a tool would provide critical data necessary to address these issues, inform policy decisions, and support sustainable land management practices. Recognizing the need for a robust solution, SERVIR Southeast Asia (SEA)—a collaborative initiative of the U.S. Agency for International Development (USAID), the National Aeronautics and Space Administration (NASA), and the Asian Disaster Preparedness Center (ADPC)—has developed the “Biophysical Monitoring and Evaluation (M&E) Dashboard” for Cambodia. This open-access tool is designed to offer comprehensive, near-real-time insights into critical environmental parameters, leveraging advanced technologies to support environmental protection and sustainable management.
The Biophysical M&E Dashboard utilizes a range of cutting-edge technologies to analyze and visualize environmental data. It harnesses the power of Google Earth Engine (GEE), a cloud computing platform capable of processing vast amounts of satellite imagery. GEE is employed to analyze large-scale, open satellite data to map key indicators such as the Enhanced Vegetation Index (EVI), land use and land cover, forest cover, forest fire occurrences, and crop monitoring. The tool integrates these insights with data from GeoServer, an open-source geospatial data publisher, and PostgreSQL, a powerful open-source relational database system. Modern web technologies, including React with NextJS and the Python-based Django framework, are used to develop the user interface and ensure seamless functionality.
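The Enhanced Vegetation Index mentioned above is, per band, a simple arithmetic expression; in GEE it is applied to whole images, but a minimal per-pixel sketch (with illustrative reflectance values) looks like this:

```python
def evi(nir: float, red: float, blue: float) -> float:
    """MODIS-style Enhanced Vegetation Index from surface reflectances
    in the range 0-1. Coefficients are the standard G=2.5, C1=6,
    C2=7.5, L=1 of the MODIS EVI product."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
```

Dense vegetation (high NIR, low red) yields a markedly higher EVI than bare soil, which is what makes the index useful for the dashboard's vegetation-health layers.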
The M&E Dashboard is designed to integrate multiple data sources, providing a holistic view of landscape dynamics. It visualizes and analyzes critical aspects such as forest cover changes, land use transformations, vegetation health, and fire hotspots. By offering detailed analytical information at various levels—country, province, district, and designated protected areas—the tool enables users to gain a nuanced understanding of environmental trends. The dashboard’s current capabilities include monitoring forest gain and loss, assessing rice cropping fields, and tracking deforestation and fire hotspots. Future plans include expanding its functionality to support user-defined area levels and the incorporation of socio-economic and vulnerability indicators, further enhancing its adaptability and utility.
In addition to land use and land cover monitoring, the M&E Dashboard incorporates weather and climate information to support sustainable agriculture practices. This integration provides valuable insights into the impacts of climatic conditions on crop health and agricultural productivity, and includes a drought assessment framework, aiding in the development of strategies to mitigate adverse effects and enhance resilience.
This paper presents a detailed overview of the architecture, design, and functionality of the Biophysical M&E Dashboard. It outlines how the tool addresses critical environmental challenges in Cambodia, including deforestation, habitat fragmentation, and fire risks. By offering a comprehensive suite of analytical features and visualizations, the dashboard supports informed decision-making and strategic planning for sustainable landscape management. Through its integration of advanced technologies and multi-source data, the Biophysical M&E Dashboard stands as a vital resource for protecting Cambodia’s natural resources and promoting ecological resilience in the face of ongoing environmental pressures.
“Social Media Data Analysis in a Restaurant Context: A Case Study of TikTok”
Asamaporn Sitthi;
General Track
This study explores the integration of Natural Language Processing (NLP) and Geographic Information Systems (GIS) to analyze the spatial distribution and sentiment of restaurants based on TikTok data. Data was collected from TikTok using primary and secondary hashtags related to restaurant reviews in Bangkok. The resulting database enabled a detailed analysis of restaurant locations and customer sentiment using Logistic Regression for sentiment analysis. The findings indicate that negative reviews were predicted with the highest accuracy (84%), followed by positive (78%) and neutral (76%) reviews. The spatial analysis identified a dense clustering of restaurants in the inner districts of Bangkok. This integration of NLP and GIS not only mapped the popularity of restaurants as mentioned on TikTok but also provided significant insights into consumer behavior and preferences. The study demonstrates the effectiveness of combining NLP and GIS for geospatial analysis, offering a powerful tool for understanding social media trends and their impact on local businesses. The results underscore the potential for leveraging social media data to inform urban planning and business strategies, particularly in the context of the food and hospitality industries.
“Spatio-Temporal Analysis of Land Use Changes in Dakshina Kannada using Satellite Image Processing”
Vinay S;
Poster Presentations
In the Anthropocene, human actions motivated by society’s demands have caused significant changes in land use and land cover patterns on a large scale. The biased trade-offs between resource demand and availability have led to large-scale modifications, degrading landforms from regional to global contexts. This study analyzes land cover changes in Dakshina Kannada, India, between 2003 and 2024 using geospatial tools, satellite images, and well-proven classification algorithms with an accuracy of over 80%. Images were enhanced using histogram equalisation to improve the visibility of land surface features for better interpretation. The analysis reveals significant transformations with potential environmental and socio-economic implications. Forest cover declined by 10.2 percentage points (42.0% to 31.8%), suggesting threats like deforestation and land conversion. Conversely, plantations exhibited a substantial increase of 17.3 percentage points (31.1% to 48.4%). Agricultural lands witnessed a dramatic decline of 14.4 percentage points (17.5% to 3.1%), necessitating further investigation into the driving factors. Notably, built-up areas tripled from 3.2% to 9.8%, signifying rapid urbanization. These findings suggest a prioritization of urbanization and plantation expansion, potentially at the expense of environmental sustainability and traditional agriculture. The study emphasizes the need for sustainable development strategies that balance economic growth with environmental protection and support for rural communities.
“Spatio-Temporal Drought Monitoring in the Chi River Basin from 2001–2020 Using MODIS Time Series and Google Earth Engine”
Jaturong Som-ard;
Academic Track (Oral)
Drought is a recurring issue in Southeast Asia caused by extreme climate events, posing ongoing challenges for food management, sustainable agricultural practices, and livelihoods, especially in frequently affected areas. Earth Observation (EO) data provide valuable information for long-term drought monitoring across wide regions. However, there remains a need to map and monitor spatial drought events over large regions and extended periods, particularly in the Chi River Basin. In this region, droughts have occurred with increasing frequency, leading to low water-holding capacity and adversely affecting agricultural production and productivity.
In this context, this study aimed: i) to identify spatial drought from 2001 to 2020 over the Chi River Basin, Thailand, using MODIS image time series via Google Earth Engine (GEE); ii) to analyse the correlation between the Temperature Vegetation Dryness Index (TVDI), Standardized Precipitation Index (SPI), and Streamflow Drought Index (SDI); and iii) to compare severe drought areas with land use maps provided by the Land Development Department (LDD).
In this study, we collected MODIS data using the MYD09Q1 (250m spatial resolution) and MOD11A1 products (1000m pixel size) across the h27v07 and h28v07 tiles. Image pre-processing was implemented, consisting of image resampling, data compositing, and image mosaicking through the GEE platform. Subsequently, the TVDI index was generated for both dry and wet seasons to map and monitor the spatial distribution of drought events over 21 years. The spatial drought based on TVDI and meteorological datasets (SPI and SDI) was determined to identify their relationship. Additionally, this study compared the spatiotemporal distribution of drought to land use groups.
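At the pixel level, the TVDI combines land surface temperature with a vegetation index via the dry and wet edges of the Ts–NDVI triangle (Sandholt et al., 2002). A minimal sketch of that formula follows; the edge parameters below are hypothetical, since in practice they are regressed from each scene.

```python
def tvdi(ts: float, ndvi: float, a: float, b: float, ts_min: float) -> float:
    """Temperature Vegetation Dryness Index for one pixel.

    ts     -- land surface temperature (e.g. from MOD11A1), in Kelvin
    ndvi   -- vegetation index (e.g. from MYD09Q1 red/NIR bands)
    a, b   -- dry-edge regression coefficients: Ts_max = a + b * NDVI
    ts_min -- wet-edge temperature
    Edge parameters are scene-specific; the values used below are
    assumptions for illustration only.
    """
    ts_max = a + b * ndvi
    return (ts - ts_min) / (ts_max - ts_min)
```

With an assumed dry edge `Ts_max = 320 - 20 * NDVI` and wet edge 295 K, a pixel at 310 K with NDVI 0.5 sits exactly on the dry edge (TVDI = 1, severe dryness), while cooler pixels fall between 0 and 1.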
Historical droughts were most frequent during the dry seasons of 2005 (82%), 2013 (80%), and 2004 (78%), and appeared in the wet seasons of 2019 (41%), 2017 (41%), and 2009 (38%). The TVDI drought map had a relatively low coefficient of determination (R²) against SPI and SDI, ranging from 0.12 to 0.22. Nevertheless, these findings showed similar drought trends across all study years, with drought events predominantly occurring in the central and northeast parts of the region. In comparison, the spatial drought map in 2021 showed severe droughts, mostly impacting cassava and rice fields during the dry season and urban areas during the wet season. Our proposed workflow is reliable and robust, providing spatial drought maps with confidence in the accuracy and validity of the results.
This study produced spatial drought maps using MODIS image time series datasets. The mapped results were smooth and effectively delineated drought areas across large regions. The highly severe spatial droughts in 2005 align with Thailand's extreme drought during the El Niño event, demonstrating high severity compared to other years. This confirms that the TVDI index provides excellent and efficient results for mapping the spatial distribution of droughts in the cloudy regions and complex landscapes of the Chi River Basin. The proposed workflow can generate drought maps in cloudy regions and complex landscapes over large or national extents, particularly in countries like Thailand. Our findings can be used to manage future droughts and serve as a significant tool for drought mitigation planning and management, as well as for warning systems, providing an integrated model under climate change conditions.
Keywords: Drought; earth observation; Temperature Vegetation Dryness Index (TVDI); Land use; Google Earth Engine
“STAC: Driving Innovation in Geospatial Applications”
Siriya Saenkhom-or;
General Track
The SpatioTemporal Asset Catalog (STAC) revolutionizes geospatial applications by providing a standardized framework for cataloging spatiotemporal data. Developed in 2017 through a collaborative effort among various organizations, STAC streamlines the discovery and retrieval of geospatial assets, making it easier for users to access satellite imagery and other spatial data. This open-source specification, which aligns with FAIR principles—Findable, Accessible, Interoperable, and Reusable—promotes interoperability among various data providers and applications, fostering innovation in the geospatial community.
STAC's design allows for automated data retrieval through the STAC API, making it especially useful for applications in environmental monitoring, disaster management, and urban planning. Its JSON-based structure enhances user accessibility, allowing developers to quickly integrate geospatial data into their workflows. Furthermore, STAC's extensibility ensures it can adapt to a wide range of geospatial data types, from remote sensing to 3D point clouds.
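The JSON-based structure is easy to see in a minimal STAC Item. The sketch below shows the discovery-relevant fields and a client-side bounding-box filter of the kind a STAC API search applies server-side; the item id, collection name, and asset URL are made up for illustration.

```python
# Minimal STAC Item reduced to the fields that matter for discovery.
# The id, collection, and asset href are illustrative, not a real entry.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "scene-20240101",
    "collection": "flood-extent",
    "bbox": [100.0, 13.5, 101.0, 14.5],
    "properties": {"datetime": "2024-01-01T03:30:00Z"},
    "assets": {
        "data": {
            "href": "https://example.com/scene-20240101.tif",
            "type": "image/tiff; application=geotiff",
        }
    },
}

def intersects(a, b):
    """Axis-aligned bbox overlap test ([xmin, ymin, xmax, ymax])."""
    return not (a[2] < b[0] or a[0] > b[2] or a[3] < b[1] or a[1] > b[3])

# A toy 'search': keep items whose bbox overlaps the query window.
query = [100.5, 13.0, 102.0, 14.0]
hits = [i["id"] for i in [item] if intersects(i["bbox"], query)]
```

In practice a client library such as pystac-client performs this search against a STAC API endpoint, but the data model it traverses is exactly this kind of JSON document.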
The benefits of STAC go beyond theoretical applications. In Thailand, STAC is applied to the GISTDA Decision Support System for Disaster Management Platform. On this platform, STAC catalogs vector data related to flooding areas, thermal activities, and drought indices. As a result, the implemented application can efficiently browse and retrieve data from the STAC catalog, enhancing data retrieval speed and user experience.
As the geospatial landscape continues to evolve, STAC stands out as a remarkable tool for driving innovation, enabling seamless data sharing, and empowering users to harness the full potential of geospatial technologies in addressing complex global challenges.
“State of mago3DTiler, an Open Source Based OGC 3D Tiles Creator”
Sanghee Shin;
General Track
In this session, I will introduce mago3DTiler (https://github.com/Gaia3D/mago-3d-tiler), an open-source OGC 3D Tiles creator that has gained global popularity thanks to its robust features, high performance, and user-friendly interface. Initially unveiled at FOSS4G-Asia 2023 in Seoul, mago3DTiler supports over ten different 3D data formats, including 3DS, OBJ, FBX, glTF, Collada DAE, BIM (IFC), LAS, LAZ, and SHP. One of its standout features is on-the-fly Coordinate Reference System (CRS) conversion during the 3D Tiles creation process. Additionally, it allows users to convert 2D data with height attributes into extruded 3D Tiles.
During this session, I will also demonstrate how to create a digital twin using mago3DTiler in just a few minutes. This tool makes complex geospatial tasks more manageable, especially for users looking to integrate diverse data formats seamlessly into 3D projects.
“The Application of Google Earth Engine for PM2.5 Estimation to Analyze PM2.5 Dispersion Form in Saraburi Province”
Pattara;
Academic Track (Oral)
This research aims to estimate PM2.5 concentrations from Aerosol Optical Depth (AOD) and meteorological data and to study the spatial distribution patterns of PM2.5 in Saraburi Province. PM2.5 levels are estimated from AOD combined with meteorological data through a Multiple Linear Regression (MLR) method, and the estimated values are then used to analyze the distribution patterns of PM2.5. The average monthly PM2.5 concentrations and high-value clusters (hot spots) were as follows: 2018, 0 to 74.1 μg/m³, with hot spots covering approximately 421.43 km² (12.04% of the provincial area); 2019, 0 to 41.4 μg/m³ (509.29 km², 14.55%); 2020, 0 to 50.0 μg/m³ (648.37 km², 18.53%); 2021, 0 to 55.3 μg/m³ (562.93 km², 16.09%); and 2022, 0 to 57.3 μg/m³ (615.97 km², 18%). Most of the high-value clusters were in the western part of the province, where agricultural activities are prevalent, contributing to higher PM2.5 levels. In contrast, low-value clusters (cold spots) were primarily found in the eastern part of the province, which is largely forested.
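The MLR step can be sketched with synthetic data. The predictors (AOD, temperature, relative humidity) and coefficients below are assumptions for illustration, not the study's fitted model; the point is the least-squares fit of PM2.5 against a design matrix of AOD plus meteorological variables.

```python
import numpy as np

# Sketch of the MLR form PM2.5 = b0 + b1*AOD + b2*T + b3*RH on
# synthetic data; the true coefficients are invented for illustration.
rng = np.random.default_rng(0)
n = 200
aod = rng.uniform(0.1, 1.5, n)       # aerosol optical depth
temp = rng.uniform(20.0, 38.0, n)    # air temperature, °C
rh = rng.uniform(40.0, 95.0, n)      # relative humidity, %

true_b = np.array([5.0, 30.0, 0.4, -0.1])          # b0, b1, b2, b3
X = np.column_stack([np.ones(n), aod, temp, rh])   # design matrix
y = X @ true_b + rng.normal(0.0, 1.0, n)           # synthetic PM2.5, µg/m³

beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # least-squares fit
pm25_pred = X @ beta                               # estimated PM2.5
```

With real station PM2.5 as `y` and satellite AOD plus meteorological fields as predictors, the fitted `beta` can then be applied pixel-wise to map PM2.5 across the province.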
“The Challenges of Reproducibility for Research Based on Geodata Web Services”
Massimiliano Cannata;
Academic Track (Oral)
Modern research applies the Open Science approach, which fosters the production and sharing of Open Data according to the FAIR (Findable, Accessible, Interoperable, Reusable) principles. In the geospatial context this is generally achieved through the setup of OGC Web services that implement open standards satisfying the FAIR requirements. Nevertheless, the requirement of Findability is not fully satisfied by those services, since there is no use of persistent identifiers and no guarantee that the same dataset used for a study can be immutably accessed at a later time: a fact that hinders the replicability of research. This is particularly true in recent years, when data-driven research and technological advances have encouraged frequent updates of datasets. Here, we review needs and practices, supported by some real case examples, on frequent data or metadata updates in geo-datasets of different data types. Additionally, we assess the currently available tools that support data versioning for databases, files, and log-structured tables. Finally, we discuss challenges and opportunities to enable geospatial web services that are fully FAIR: a step that would provide, given the massive use and increasing availability of geospatial data, a great push toward open science compliance, with ultimate impacts on the transparency and credibility of science.
“The Current State of Collaboration between Digital Twin and OSM in Japan: A Case Study of Project PLATEAU”
Taichi Furuhashi;
General Track
In recent years, 3D city models have become crucial for urban planning and research. Japan's Project PLATEAU has led the development of open 3D city models and point cloud data, with over 100 cities releasing Digital Twin data in CityGML format by February 2023. This talk explores the collaboration between Japan's Digital Twin initiatives and the global OpenStreetMap community. Since 2022, Japan's Digital Twin data, following the ODbL license, has been integrated with OpenStreetMap using specially developed tools. This integration aims to promote the global adoption of 3D city models, enhancing urban development through the synergy of Digital Twin technologies and OpenStreetMap.
“The MAGDA Project: Integration of GNSS, Sentinel, Meteodrone, and In-Situ Observations for Weather Warnings and Irrigation Advisories in Agriculture”
Eugenio Realini;
General Track
The Meteorological Assimilation from Galileo and Drones for Agriculture (MAGDA) project, funded by EUSPA in the framework of the Horizon Europe program, aims to develop a comprehensive toolchain for atmosphere monitoring, weather forecasting, and advisory services related to severe weather, irrigation, and crop monitoring. By integrating GNSS, Copernicus Sentinel, Meteodrone, ground-based weather radar, and in-situ weather and soil observations into open source weather and hydrological models, MAGDA seeks to provide valuable information to agricultural operators. Measured data, model results, and warnings/advisories are delivered to farmers through a dedicated dashboard or by interfacing with existing Farm Management Systems. The technical and methodological components developed within MAGDA will form the basis for services supporting agricultural operations.
The project is based on the concept that continuous monitoring, combined with advanced prediction models, is essential for effective resource management. As extreme weather events, such as droughts and heatwaves, become more frequent due to climate change, farmers need to leverage technology to mitigate disasters, conserve resources, and enhance productivity. A system that can automatically collect and process measurements of key parameters significantly reduces economic losses. When this data is presented clearly and usefully to end users, it can significantly enhance agricultural efficiency.
MAGDA unites seven partners from seven European countries (Austria, France, Italy, Romania, Spain, The Netherlands, and Switzerland) and is inherently interdisciplinary, drawing on expertise from various sectors to develop a system tailored to agricultural meteorological and hydrological forecasts.
The selected demonstrator areas in Italy (Cuneo), France (Burgundy), and Romania (Braila) target different crops/cultures and allow for the gathering of different user needs and feedback through direct interactions with farmers. Deployment includes nine low-cost, dual-frequency GNSS stations, along with fifteen low-cost in-situ sensor stations, and three Meteobases to fly meteodrones.
Severe weather cases were identified to test the open source WRF meteorological model's performance: in Italy, the focus was on rainfall events, while in France and Romania, hail events were prioritized. Water balance simulations were conducted to support an operational irrigation advisory service, using the open source SPHY hydrological model across the pilot areas in France, Italy, and Romania.
All data used in the MAGDA project are open for research applications, and the GNSS processing software utilized in this project leverages the goGPS open source software. The MAGDA dashboard for result visualization uses Leaflet as a web mapping tool and OpenStreetMap data as a background layer. The results presented here are derived from the currently ongoing MAGDA demonstrators, showcasing the project's impact on weather forecasting and water management for agricultural operations.
“The QGIS Shredder Plugin Inspired by Banksy’s Shredder: Sustainable shredder with no waste”
Naoya Nishibayashi;
General Track
In October 2018, the art world witnessed an unprecedented event when Banksy’s “Girl with Balloon” partially shredded itself immediately after being auctioned at Sotheby’s. This bold act challenged traditional perceptions of art, value, and the role of the artist. Drawing inspiration from this event, I developed a unique QGIS plugin that shreds layers.
This plugin cuts the input data into shredded strips and is implemented with the simple PyQGIS API.
It can shred either vector or raster layers, and the fineness of the shredding can be configured.
This plugin may seem useless at first glance, but it can be used when you want to mess up your data, when you have created data that you cannot show to others, when you just want to relieve stress, or when you want to feel like Banksy.
Most importantly, while shredding physical documents produces waste, shredding data creates no physical waste, making it a truly sustainable practice.
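The core of the shredding idea is just slicing a layer's extent into strips. A QGIS-free sketch is shown below; in the plugin itself the equivalent slicing is done with PyQGIS geometry clipping, and the function name here is hypothetical.

```python
# Minimal sketch of the shredding step: cut a layer's bounding box into
# vertical strips of a chosen width ('fineness'). The real plugin clips
# the layer's features against such strips using PyQGIS.
def shred_bbox(xmin, ymin, xmax, ymax, strip_width):
    """Return strip rectangles (xmin, ymin, xmax, ymax), left to right."""
    strips = []
    x = xmin
    while x < xmax:
        strips.append((x, ymin, min(x + strip_width, xmax), ymax))
        x += strip_width
    return strips
```

Shredding a 10-unit-wide extent with `strip_width=3` yields four strips, the last one narrower, and together they cover the full extent with no waste, in keeping with the plugin's sustainability claim.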
“The Relationship between PM2.5 and Solar Cell Electricity Generation Using Aerosol Optical Depth (AOD)”
Sunattha Lalaeng;
Poster Presentations
This study aims to analyze the relationship between PM2.5 concentrations, derived from Aerosol Optical Depth (AOD), and solar power generation in the study area, a solar farm owned by Tia Nguan Spinning Co., Ltd., in Samut Prakan Province, Thailand, in 2022. The research utilizes PM2.5 data from pollution monitoring stations of the Pollution Control Department, AOD data from the MCD19A2.061 product, and solar power generation data from the Electricity Generating Authority of Thailand. The results indicate a negative correlation between PM2.5 concentrations and solar power generation during the summer season (r = -0.7), meaning that as PM2.5 levels increase, solar power generation decreases. A regression equation used for power prediction achieved an accuracy of R² = 0.97. In contrast, a positive correlation (r = 0.6) is observed during the winter season, indicating that solar power generation increases with PM2.5 levels, with a prediction accuracy of R² = 0.93. No significant correlation is found during the rainy season, which may be due to other influencing factors. Predicting solar power generation in other areas should take into account the physical factors unique to each location.
Keywords: PM2.5, Aerosol Optical Depth, Solar Power, Solar Farm
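Seasonal correlations like those reported above are Pearson coefficients, which can be computed directly. The sketch below uses invented PM2.5 and generation values purely to illustrate the calculation.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: generation falling as PM2.5 rises gives r near -1.
pm25 = [10, 20, 30, 40, 50]          # µg/m³
generation = [95, 90, 82, 75, 70]    # kWh (illustrative)
r = pearson_r(pm25, generation)
```

A value of r near -1 corresponds to the summer-season pattern described in the abstract, where aerosol loading attenuates the irradiance reaching the panels.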
“Unlocking the Potential of Earth Observation Data: Simplifying Analysis with Opendatacube”
Pratik;
General Track
Unlocking the potential of Earth Observation (EO) data requires innovative approaches to data management and processing. This talk introduces a cutting-edge solution that enhances the utility of OpenDataCube, providing researchers, scientists, and developers with a streamlined, accessible platform for EO data analysis.
Leveraging the versatility of OpenDataCube, an open-source platform specifically designed for EO data management, this presentation demonstrates how users can efficiently access, process, and analyze EO datasets with minimal coding. OpenDataCube's comprehensive features allow researchers to explore vast datasets, extract valuable insights, and perform complex analyses effortlessly, significantly simplifying workflows and boosting productivity.
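A typical datacube analysis operates on a (time, y, x) array of observations. The sketch below shows the kind of few-line workflow involved once data has been loaded: mask cloudy observations, then build a per-pixel median composite. The array values and cloud threshold are made up; in OpenDataCube the array would come from a `dc.load()` call and the mask from a quality band.

```python
import numpy as np

# Toy (time, y, x) stack of reflectances; 0.8 plays a cloudy outlier.
stack = np.array([
    [[0.2, 0.8], [0.3, 0.4]],   # time 0
    [[0.2, 0.2], [0.3, 0.4]],   # time 1
    [[0.3, 0.3], [0.2, 0.5]],   # time 2
])
cloud = stack > 0.7                        # toy cloud mask (assumption)
masked = np.where(cloud, np.nan, stack)    # drop cloudy observations
composite = np.nanmedian(masked, axis=0)   # per-pixel median over time
```

The composite retains a valid value even for the pixel whose time-0 observation was masked, which is exactly why median compositing is a staple of cloud-prone EO analysis.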
Real-world use cases will be presented to showcase the practical applications of OpenDataCube across various domains, including environmental monitoring and urban planning. These examples will underscore the platform's effectiveness in addressing diverse challenges in EO data analysis. With its user-friendly interface and robust data management capabilities, OpenDataCube empowers researchers and scientists to fully harness EO data's potential, driving impactful discoveries.
This presentation is invaluable for professionals, researchers, and enthusiasts in the geospatial community, offering insights into how OpenDataCube can revolutionize EO data analysis and contribute to the advancement of open, accessible, and impactful geospatial solutions.
“Urban Climate Mastery: Harnessing DART for Sustainable Cities”
Salghuna N N, Jyothish Jayan;
Workshop Proposals
The phenomenon of Urban Heat Islands (UHI) describes areas within cities where temperatures are significantly higher than their rural surroundings. This difference arises due to the replacement of natural land covers with heat-absorbing materials like buildings and asphalt. Urban Heat Islands can exacerbate health issues related to heat, increase energy consumption for cooling, and alter urban ecosystems. Addressing UHI is crucial for creating sustainable and resilient urban environments.
The DART (Discrete Anisotropic Radiative Transfer) model is a sophisticated tool specifically designed to simulate and analyze microclimates within urban areas, focusing on UHI. It considers various factors influencing urban temperatures such as land use patterns, building characteristics, and meteorological conditions. By integrating local data on these factors, including solar radiation and air pollution levels, DART provides accurate predictions of air temperature variations at a fine spatial scale.
Key to its functionality is the high spatial resolution capability of DART, which enables detailed mapping of temperature variations across neighborhoods and even individual city blocks. The model operates on principles of surface energy balance, accounting for how different urban surfaces absorb, reflect, and emit solar energy. This comprehensive approach allows DART to simulate realistic urban conditions and generate precise temperature forecasts.
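The surface energy balance mentioned above starts from the net all-wave radiation of each surface. A minimal sketch follows; the input values in the usage note are illustrative, and a full model like DART goes on to partition this net radiation into sensible, latent, and storage heat fluxes.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiation(sw_down, lw_down, albedo, emissivity, surface_t):
    """Net all-wave radiation of a surface in W/m^2.

    sw_down    -- incoming shortwave (solar) radiation, W/m^2
    lw_down    -- incoming longwave (sky) radiation, W/m^2
    albedo     -- shortwave reflectance of the surface (0-1)
    emissivity -- longwave emissivity of the surface (0-1)
    surface_t  -- surface temperature, Kelvin
    """
    sw_net = (1.0 - albedo) * sw_down                       # absorbed solar
    lw_net = emissivity * (lw_down - SIGMA * surface_t**4)  # longwave budget
    return sw_net + lw_net
```

With illustrative midday values (800 W/m² solar, 350 W/m² sky radiation, 310 K surface), a dark asphalt surface (albedo around 0.08) absorbs far more net radiation than a cool roof (albedo around 0.6), which is the basic mechanism behind the UHI mitigation strategies the model evaluates.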
One of the significant advantages of the DART model is its tailored application to urban environments. Unlike broader climate models, DART focuses on microclimatic conditions specific to cities, making it invaluable for urban planning and design. Stakeholders can use DART for scenario analysis, evaluating the impacts of various urban development and mitigation strategies on UHI intensity. This capability supports decision-making processes aimed at optimizing urban designs to mitigate heat impacts, such as integrating green spaces, cool roofs, and reflective surfaces.
Practical applications of the DART model span diverse fields, including urban planning, public health, and energy efficiency. City planners utilize DART to inform decisions that enhance urban livability and resilience. For instance, in cities like Phoenix and Singapore, DART has been instrumental in studying temperature variations and evaluating the effectiveness of green infrastructure in reducing UHI intensity. Such research informs policies and initiatives aimed at mitigating heat stress and promoting sustainable urban development.
In conclusion, the DART model stands as a critical tool for addressing the challenges posed by Urban Heat Islands. Its ability to simulate detailed microclimatic conditions and predict temperature variations empowers stakeholders to implement informed strategies for creating more livable, energy-efficient, and resilient urban environments. As cities worldwide grapple with the impacts of climate change, the DART model remains indispensable for shaping urban landscapes that prioritize environmental sustainability and the well-being of urban residents.
Workshop Outline:
-
Introduction to Urban Heat Islands (UHI)
Definition and Causes of UHI:
Definition of Urban Heat Islands and their characteristics.
Causes including urbanization, heat retention in built environments, and reduced green spaces.
Importance of Studying and Mitigating UHI Effects:
Environmental, health, and socio-economic impacts of UHI.
Importance of mitigation strategies for urban sustainability and climate resilience. -
Overview of Environmental Modeling
Introduction to Different Types of Environmental Models:
Overview of predictive models used in environmental science and urban planning.
Role of models in simulating complex systems and predicting environmental impacts.
Role of Modeling in Understanding UHI Dynamics:
How modeling helps quantify UHI intensity, spatial distribution, and temporal variation.
Examples of modeling applications in UHI research and policy development. -
Introduction to the DART Model
Explanation of the DART Model Framework:
Overview of the District Air Temperature (DART) model structure and methodology.
Core principles behind DART’s approach to simulating urban microclimates.
Key Features and Components of the DART Model:
Components such as land use data integration, surface energy balance calculations, and meteorological data inputs.
Scalability and adaptability of DART for different urban settings and scales. -
Advantages of Using the DART Model
Accuracy and Precision in Spatial Temperature Predictions:
How DART achieves high-resolution temperature mapping compared to traditional models.
Case studies demonstrating DART’s predictive capabilities in diverse urban contexts.
Integration of Local Environmental Factors for Precise Modeling:
Utilizing local data on land use, vegetation cover, and building characteristics to enhance model accuracy.
Examples of incorporating local meteorological data for real-time simulations and long-term projections.
Flexibility in Adapting to Different Urban Contexts and Scales:
DART’s ability to scale from neighborhood-level studies to city-wide assessments.
Applications in assessing UHI impacts across different urban typologies (e.g., dense downtown areas vs. suburban neighborhoods). -
Comparison with Other Modeling Software
Contrasting DART with Traditional Climate Models (e.g., WRF, ENVI-met):
Differences in spatial resolution, computational requirements, and scope of application.
Unique advantages of DART in simulating urban microclimates and its niche in environmental modeling.
Case Studies Demonstrating Successful Applications of DART:
Examples where DART has provided insights not achievable with other models.
Comparisons showcasing DART’s effectiveness in supporting UHI mitigation strategies and urban planning decisions. -
Hands-on Session with DART
Practical Demonstration of Setting Up and Running the DART Model:
Step-by-step guidance on configuring DART software and initializing simulations.
Hands-on exercises using sample datasets or participant-provided data to run simulations.
Inputting Local Data and Interpreting Outputs:
Practical tips for preparing and inputting local data (e.g., GIS data, meteorological datasets) into the DART model.
Interpreting model outputs such as temperature maps, heat fluxes, and UHI intensity indices.
Guidance on Interpreting Model Results and Deriving Actionable Insights:
Techniques for analyzing and visualizing DART results to identify UHI hotspots and trends.
How to use model findings to inform urban planning strategies and prioritize mitigation measures.
Application of DART in UHI Mitigation Strategies
Using DART Outputs to Inform Urban Planning and Design Decisions:
Examples of incorporating DART predictions into urban design guidelines and zoning regulations.
Case studies demonstrating effective use of DART in optimizing green infrastructure and cool roof initiatives.
Examples of Effective UHI Mitigation Strategies Based on DART Modeling Results:
Showcasing successful interventions such as green spaces creation, heat-reflective surfaces, and building design modifications.
How DART contributes to evaluating the effectiveness of implemented strategies and refining future interventions.
Integrating DART into Sustainability and Climate Resilience Frameworks:
Strategies for integrating DART outputs into broader sustainability agendas and climate action plans.
Collaborative approaches involving stakeholders, policymakers, and community engagement in UHI mitigation efforts.
Duration:
The workshop will span 4 hours, including breaks and interactive sessions.
Additional Information/Specifications:
Facilitators: The workshop will be led by experienced professionals with expertise in urban climate modeling and UHI mitigation strategies.
Language: The workshop will be conducted in English.
Certification: Participants will receive a certificate of completion.
This workshop is designed to equip participants with practical skills in using the DART model for UHI analysis and mitigation planning. By the end of the session, participants will have the knowledge and confidence to apply the DART model in their own contexts, contributing to sustainable urban development and climate resilience efforts.
“USING FOSS4G TOOLS WITH RDNDVI TECHNIQUES TO ANALYZE FLOOD HAZARD IN TROPICAL SE ASIA AREA AT WANG THONG RIVER BASIN, PHITSANULOKE, THAILAND.”
Kittituch Naksri | Chaiwiwat Vansarochana;
General Track
This study evaluates a free disaster-mapping application built on Google Earth Engine, named “HazMapper”. This tool allows users to create maps and GIS products from Sentinel or Landsat datasets without the time and cost usually required for traditional analysis.
The initial design of the HazMapper program used indicators based on the Normalized Difference Vegetation Index (NDVI). Specifically, it introduced the relative difference NDVI (rdNDVI) to identify areas where vegetation was removed after natural disasters. Because these indicators rely on vegetation, HazMapper is unsuitable for desert or polar regions but well suited to tropical areas.
We apply the rdNDVI indicator to different years over the same area and compare the mean absolute error (MAE) of all results to test the effectiveness of the HazMapper model when applied to flooded areas.
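For reference, the underlying index computation is straightforward. The sketch below assumes the rdNDVI form published with HazMapper (percent change scaled by the square root of the pre-event NDVI); verify against the original formulation before operational use:

```python
import math

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def rdndvi(ndvi_pre, ndvi_post):
    """Relative difference NDVI: percent vegetation change between a
    pre-event and a post-event scene. The sqrt normalization follows
    HazMapper's published formulation as we understand it."""
    return 100.0 * (ndvi_post - ndvi_pre) / math.sqrt(abs(ndvi_pre))

# Healthy vegetation before a flood, bare soil/water after:
pre = ndvi(nir=0.80, red=0.20)   # 0.6
post = ndvi(nir=0.30, red=0.30)  # 0.0
change = rdndvi(pre, post)       # strongly negative -> vegetation removed
```

Strongly negative rdNDVI values flag pixels where vegetation was stripped, which in a river basin such as the Wang Thong is a proxy for flood-affected ground.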
“Using Opensource 3D geospatial In Large Scale Chemical Incident Assessment”
Hakjoon Kim;
Poster Presentations
We introduce a research case in which a large-scale chemical accident that cannot be staged or tested in the real world was implemented and evaluated in a three-dimensional virtual space.
“Vector tiles cartography for Asia”
Nicolas Bozon;
General Track
Vector tiles are changing the way we create maps. Client-side rendering offers endless possibilities to the cartographer and has introduced new map design tools and techniques. Let’s explore an innovative approach to modern cartography based on simplicity and a comprehensive vector tiles schema. Take a visual tour of vector tiles cartography, and learn how to adapt map design for an Asian audience.
“Visualizing and Managing Smart Grids with Geospatial Big Data: The SEMS Approach”
Venkata Satya Rama Rao Bandreddi;
General Track
Geospatial big data plays a pivotal role in the context of smart grids, revolutionizing the way modern electrical grids are monitored, managed, and optimized. Smart grids integrate advanced sensing, communication, and control technologies to enhance the efficiency, reliability, and sustainability of electricity distribution. While the locational information of smart meters is pivotal, consumption patterns combined with other information such as consumer type, land use, and local weather conditions can substantially enhance the assessment of energy requirements and usage, leading to a Spatial Energy Management System (SEMS). This significantly enriches location-aware decision-making, real-time monitoring, and predictive analysis, improving energy resource optimization and advancing the goal of SDG-7. The SEMS web application represents a critical advancement, facilitating dynamic visualization of temporal clusters across energy sources and their evolution over time. By overlaying clusters across different time periods, utility personnel gain insights into customer categorizations versus actual utilization, enabling understanding of demand fluctuations, outage and fault localization within the electric network, and demand-generation assessment of renewable energy.
The primary objective is the development of a dynamic web-based SEMS application capable of visualizing temporal changes and consumption patterns, while also providing alerts to facilitate proactive management.
The initial phase of the methodology focuses on the Jeedimetla region in Hyderabad, leveraging real-world data encompassing 6,000 households categorized into residential, commercial, and industrial segments. Quantum Geographic Information System (QGIS) software is employed to establish the electric network. To store the spatial and temporal consumption data, the data storage infrastructure employs PostgreSQL, enhanced with the PostGIS extension. This combination is selected due to its robust capabilities in accommodating and managing complex datasets. Additionally, a comprehensive and refined data model has been established to serve as the framework for storing Advanced Metering Infrastructure (AMI) data. Spatial data retrieved from the PostgreSQL database is exposed through a Web GIS Server (GeoServer) as a Web Feature Service, facilitating its integration into applications. This spatial information is utilized within the OpenLayers Software Development Kit (SDK) to visually render data within a JavaScript-based application environment. Non-spatial data is accessed through Application Programming Interfaces (APIs) integrated within a Node server, operating as REST endpoints. Concurrently, the React JavaScript library is employed to present this non-spatial information to end-users in an interactive format.
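As a concrete illustration of the retrieval step, the OpenLayers client fetches a layer from GeoServer as GeoJSON over WFS. A minimal Python sketch of building such a GetFeature request (the `sems` workspace and `smart_meters` layer names are hypothetical, not the actual SEMS configuration):

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, layer, bbox=None, srs="EPSG:4326"):
    """Build a WFS 2.0 GetFeature request URL for a GeoServer layer,
    asking for GeoJSON output (as consumed by an OpenLayers client)."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": layer,
        "outputFormat": "application/json",
        "srsName": srs,
    }
    if bbox:
        # WFS bbox: minx,miny,maxx,maxy,CRS
        params["bbox"] = ",".join(map(str, bbox)) + "," + srs
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer; a bbox roughly around Jeedimetla, Hyderabad:
url = wfs_getfeature_url("http://localhost:8080/geoserver/sems/ows",
                         "sems:smart_meters",
                         bbox=(78.40, 17.50, 78.50, 17.60))
```

The non-spatial AMI readings travel separately through the Node REST endpoints, so the map layer stays lightweight while consumption time series are joined client-side by meter identifier.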
The SEMS Geospatial Visualization Engine comprises three key components. The first component features two primary views: the Basic View, which presents hourly usage consumption data, and the Combined View, integrating map and graph interfaces. This combined view allows users to select specific time periods, customer locations, transformers, or feeders, with the graph view displaying corresponding hourly consumption data. Both views facilitate the visualization of consumption patterns for customer classes or types, which are pre-defined in the system.
The second component of the SEMS Visualization Engine aggregates data at the daily level, classifying users into low, medium, and high consumption categories. These classifications are visually represented on the map view, providing insights into consumption trends across the study area.
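The daily classification step can be sketched as a simple threshold rule (the kWh cut-offs below are illustrative placeholders; actual thresholds would be derived from the Jeedimetla consumption data):

```python
def classify_daily_consumption(daily_kwh, low_max=5.0, high_min=15.0):
    """Bucket a customer's daily consumption into low/medium/high.
    Threshold values are placeholders, not the study's calibrated cut-offs."""
    if daily_kwh <= low_max:
        return "low"
    if daily_kwh >= high_min:
        return "high"
    return "medium"

# Aggregate 24 hourly readings (kWh) to a daily total, then classify:
hourly = [0.2, 0.3, 0.1, 0.4] * 6
label = classify_daily_consumption(sum(hourly))
```

Each customer's label then drives the choropleth styling in the map view, so consumption trends across the study area are visible at a glance.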
The third component integrates weather data sourced from an open-source Weather API with customer energy consumption data. Weather information is stored in the PostgreSQL database, and the combined view enables the graphical representation of weather data changes alongside energy consumption data. This integration enhances understanding of how weather fluctuations impact customer energy usage patterns.
By leveraging advanced sensing, communication, and control technologies, the three components of the SEMS Geospatial Visualization Engine framework provide a comprehensive platform for understanding and visualizing energy consumption and demand. The spatial view aids in developing spatial clusters, which help electric utility personnel understand consumption patterns and energy demand requirements for specific areas, facilitating efficient management of power-shortage scenarios.
The SEMS visualization engine supports various utility use cases, including energy loss identification, detection of energy overuse, and dynamic reclassification of users based on current consumption data. As a prospective avenue for future research, integrating this Geospatial Visualization Engine framework with machine learning-based prediction models holds promise for forecasting energy consumption and dynamically classifying users. Such advancements are anticipated to further empower utility personnel in decision-making, real-time monitoring, and predictive analytics within smart grid environments.
“YIELD ESTIMATION OF RICE USING MULTISPECTRAL IMAGERY FROM UAV IN NEPAL”
Sudipta Poudel;
General Track
Agriculture plays a vital role in sustaining human life and providing food security for the global population. Rice cultivation is integral to global food security, serving as a staple food for a substantial portion of the world’s population. To meet the increasing demand for food, crop yields need to be optimized. In Nepal, it is estimated that the agriculture sector engages around 66% of the total population (Gauchan & Shrestha, 2017). It contributes one-third of the nation's GDP with a significant contribution to the national economy (Paudel, 2016). In Nepal, UAVs, commonly known as drones, have gained popularity in agriculture due to their ability to collect high-resolution, real-time data over large agricultural areas. Multispectral drones are equipped with specialized sensors that can capture information beyond the visible spectrum, such as near-infrared and thermal data. This data provides valuable insights into crop health, stress, and growth patterns, which are essential for optimizing crop management practices. Monitoring crop health is critical for the early detection of diseases, pests, nutrient deficiencies, and other stress factors that can affect yield and quality. Multispectral drones can capture spectral data that reveal variations in plant reflectance, chlorophyll content, and overall vigor. By analyzing these data, farmers can make informed decisions regarding irrigation, fertilizer dose estimation, and pest control, leading to increased crop productivity and reduced input costs. The primary focus of this report is to harness the capabilities of multispectral imagery obtained through UAVs to accurately estimate rice yield. By using remote sensing data and sophisticated analysis techniques, this project aims to develop a robust methodology capable of predicting rice yield with a higher degree of accuracy.
This article aims to estimate rice yield using regression models based on plant characteristics, including plant height and plant age; farm management data such as the amounts of DAP, zinc, urea, and potash used; and vegetation indices derived from unmanned aerial vehicle (UAV) data. The specific objectives are to analyze the correlation between indices, develop linear and multilinear relationships between yield and plant characteristics, and fit separate regression models based on the type of rice plant. The study was conducted in six different study areas during August and September. A total of 19 vegetation indices were calculated from multispectral and RGB imagery. Statistical measures such as mean, standard deviation, minimum, maximum, and sum were obtained from zonal statistics in ArcGIS software. Correlation analyses and regression models were developed using plant height, plant age, farm management data, and vegetation indices. The developed regression models showed good accuracy in estimating rice yield, with R-squared values of around 74% and predicted R-squared values of around 69%. The study demonstrates the potential of using UAV-derived vegetation indices and farm management data for yield estimation, which can be valuable for precision agriculture and crop monitoring.
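The regression step can be illustrated with a minimal ordinary-least-squares fit of yield against a single vegetation index. The data points below are synthetic and purely for illustration; the study's actual models combine 19 indices with plant and farm-management variables:

```python
def fit_ols(x, y):
    """Simple ordinary least squares for y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def r_squared(x, y, a, b):
    """Coefficient of determination for the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic NDVI vs yield (t/ha) pairs, for illustration only:
ndvi_vals = [0.55, 0.60, 0.65, 0.70, 0.75]
yield_tha = [3.1, 3.6, 3.9, 4.5, 4.8]
a, b = fit_ols(ndvi_vals, yield_tha)
r2 = r_squared(ndvi_vals, yield_tha, a, b)
```

The multilinear case extends the design matrix with additional predictors (plant height, fertilizer amounts, further indices); reporting both R-squared and predicted R-squared, as the study does, guards against overfitting.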
“ZOO-Project - OGC API - Processes - Introduction”
Gérald Fenoy;
Workshop Proposals
The ZOO-Project will first be presented, along with details about OGC API - Processes Part 1: Core. Participants will then learn how to set up the ZOO-Kernel and get an OGC API - Processes server running in a few simple steps. Some basic services will be presented to attendees so that they can reuse them later in their own applications. They will then learn how to develop a simple service in the Python language through short programming exercises. A ready-to-use client will be used to interact with the available OGC API - Processes services and the one developed during the workshop. Participants will finally learn how to chain the existing services using the server-side JavaScript ZOO-API.
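The kind of Python service written in the exercises looks roughly like the following sketch. The parameter names `a` and `Result` are illustrative; inside ZOO-Kernel the success code would normally come from the `zoo` module's `SERVICE_SUCCEEDED` constant, hard-coded here so the sketch runs standalone:

```python
# Return code expected by ZOO-Kernel on success; inside the kernel this is
# available as zoo.SERVICE_SUCCEEDED.
SERVICE_SUCCEEDED = 3

def HelloPy(conf, inputs, outputs):
    """A minimal ZOO-Project-style Python service. conf, inputs and outputs
    are nested dictionaries; each parameter is keyed by name with its
    payload under "value"."""
    name = inputs["a"]["value"]
    outputs["Result"]["value"] = "Hello %s from the ZOO-Project" % name
    return SERVICE_SUCCEEDED

# Standalone invocation with mock dictionaries:
outs = {"Result": {"value": None}}
HelloPy({}, {"a": {"value": "FOSS4G"}}, outs)
```

Deploying such a function as a process additionally requires a zcfg or metadata description so that ZOO-Kernel can advertise it through the OGC API - Processes endpoints.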
“ZOO-Project: news about the Open Source Generic Processing Engine”
Gérald Fenoy;
General Track
The ZOO-Project is an open-source processing platform released under the MIT/X11 Licence. It provides the polyglot ZOO-Kernel, a server implementation of the Web Processing Service (WPS) (1.0.0 and 2.0.0), and the OGC API - Processes standards published by the OGC. It contains ZOO-Services, a minimal set of ready-to-use services that can be used as a base to create more useful services. It provides the ZOO-API, initially only available from the JavaScript service implementation, which exposes ZOO-Kernel variables and functions to the language used to implement the service. It contains the ZOO-Client, a JavaScript API that can be used from a client application to interact with a WPS server.