“Advanced Development of GIS-Based Databases and Visualization Technologies for Marine Environment Impact Assessment”
Suhyeon Kim, KIM GUEN HA, JooYoung Park;
Poster Presentations
This study aims to effectively support marine environment impact assessments, which evaluate and manage the environmental impacts of marine use and development both before and after they occur, to reduce social conflicts and improve quality of life. We have established an integrated database by collecting and standardizing diagnostic, assessment, and predictive information, and developed an application for searching and visualizing this data. The system enables real-time monitoring of marine environmental changes through advanced analytics that visualize spatial patterns and time-series data, while GIS-based visualization tools help intuitively understand dynamic marine ecosystems. The updated database offers advanced analytical functions for precise detection and prediction of environmental changes, assisting officials, reviewers, and assessment agencies in making informed decisions.
Mapping technology was developed to visualize numerical data, such as water levels and flow rates, with direction and color. Pre-field evaluation technology provides regulatory zone information, and there are plans to expand the analytical scope to marine use zones. Data analysis technology supports quality inspection by comparing new observation and assessment data with existing data to detect anomalies. These technological results are set to be integrated into the Ministry of Oceans and Fisheries' marine environment impact assessment system.
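As a rough illustration of the kind of mapping described above (not the authors' implementation), the following sketch renders a synthetic grid of current vectors as arrows whose orientation encodes flow direction and whose color encodes speed; all data here are invented placeholders.

```python
# Minimal sketch: flow-rate data shown as arrows with direction and color.
import numpy as np
import matplotlib.pyplot as plt

# Synthetic grid of current vectors (u, v components in m/s)
x, y = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
u = np.cos(y / 2.0)
v = np.sin(x / 2.0)
speed = np.sqrt(u**2 + v**2)

fig, ax = plt.subplots()
q = ax.quiver(x, y, u, v, speed, cmap="viridis")  # color arrows by magnitude
fig.colorbar(q, ax=ax, label="Current speed (m/s)")
ax.set_title("Flow direction and speed")
plt.show()
```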
“Advancing Hydrometeorological Data in Asia for Enhanced Water Resources and Climate Applications”
Natthachet Tangdamrongsub;
General Track
Hydrometeorological data are crucial for effective water resource management, weather forecasting, and climate adaptation. In Asia, a region known for its vast geographic diversity and varied climatic conditions, the lack of high-resolution data has been a significant challenge for addressing local-scale issues. Traditional datasets often have coarse spatial resolution (e.g., 10–25 km), limiting their usefulness for detailed, localized analysis. To address this gap, we have developed a pioneering dataset offering 1 km resolution hydrometeorological data for the entire Asian continent. This dataset includes essential variables such as precipitation, surface temperature, radiation, soil moisture, evapotranspiration, groundwater, and surface runoff, delivering unprecedented detail and accuracy compared to existing coarse-resolution data. The dataset was created using advanced remote sensing techniques, land surface physics, and sophisticated data assimilation methods, ensuring both enhanced spatial resolution and accurate reflection of local conditions. Our dataset spans from 1940 to the present, providing a comprehensive historical archive and seasonal forecasts extending up to six months into the future. This combination of historical and predictive data makes it an invaluable resource for a variety of applications, including water resources, climate studies, agriculture, and disaster assessment. To validate the accuracy of our high-resolution data, we conducted extensive comparisons with satellite remote sensing products such as MODIS (Moderate Resolution Imaging Spectroradiometer), GRACE (Gravity Recovery and Climate Experiment), and SMAP (Soil Moisture Active Passive). These comparisons confirm that our dataset offers superior accuracy and finer detail compared to publicly available data.
“An Assessment of Rainfall Induced Slope Failure Morphology using Change Vectors and Random Forest Model”
Mitsunori Ueda;
Poster Presentations
Identification of slope failure areas is essential for addressing geological hazard problems in disaster prevention, urban planning, and land development. Slope failure areas are typically identified by human interpretation of aerial photographs and satellite images. However, manual interpretation is time- and labor-intensive, and its objectivity suffers from differences between interpreters. There is therefore a need for technology that automatically detects slope failure areas. Furthermore, to understand the characteristics of a slope failure, it is necessary to investigate the relationships between the structures inside the failure area, such as the scarp and the main body. Slope failure is a phenomenon that creates spatially irregular terrain in a short time. In this study, we automatically identified slope failure areas with a focus on temporal changes. Change Vector Analysis (CVA) and a Random Forest Classifier (RFC) were used to identify the scarp and main body in slope failure areas from changes in Digital Elevation Models (DEMs) between two periods. CVA compares paired images from two different time periods and numerically analyzes the changes between them by expressing the changes as vectors. In this study, changes in pairs of topographic features were analyzed using CVA, and the RFC was used to extract slope failure areas with the change vectors as training data. The study area is located in part of Tamba City in Hyogo Prefecture, Japan. The target to be identified is the slope failure that occurred due to heavy rainfall in 2014. In this research, 1 m DEMs generated from airborne laser survey data acquired before and after the slope failure event were used. Topographic characteristics were calculated from the DEMs at the two time periods and combined into pairs of features. The results of the CVA showed different characteristics depending on the strength and direction of the change vector. The strength of the change vector suggested the possibility of delineating the slope failure areas from histograms, while the direction of the change vector indicated that the distribution of values could be useful for classifying the scarp and main body of a slope failure. In the RFC results, the features that contributed most to learning were the pair of terrain normal vectors and their variances, followed by the pair of elevation and Laplacian, and then the pair of elevation and slope angle. The feature importances in the RFC revealed that the geometric characteristics of the terrain contributed significantly to the classification, while the terrain conditions had minimal impact on the learning process. The extraction accuracy was verified using Cohen's kappa statistic; the resulting value of 0.75 corresponds to "substantial" agreement. The extraction results using the RFC were generally good; however, there were misclassifications in rivers and non-slope-failure areas. This study focused on changes in topographic characteristics from DEMs acquired before and after the event and identified the scarp and main body in slope failure areas. Identifying slope failure areas from topographic changes is useful for quickly generating slope failure inventory maps. It also allows geological hazard maps to be produced and updated, including the location, shape, and morphology of slope failures.
Furthermore, revealing the morphology of slope failures will be useful in predicting them. The methodology described in this paper can be applied to pre- and post-failure DEMs derived from InSAR or from high-resolution global DEMs such as AW3D.
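A minimal sketch of the two-step approach, with synthetic arrays standing in for the feature rasters derived from the pre- and post-event DEMs; the feature pair, label classes, and all values are invented for illustration only.

```python
# CVA (magnitude + direction of change) feeding a Random Forest Classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-ins for topographic features at the two time periods
slope_t1 = rng.normal(30, 5, (100, 100))
slope_t2 = slope_t1 + rng.normal(0, 2, (100, 100))
curv_t1 = rng.normal(0, 1, (100, 100))
curv_t2 = curv_t1 + rng.normal(0, 0.5, (100, 100))

# Change Vector Analysis: magnitude = strength, angle = direction of change
d_slope = slope_t2 - slope_t1
d_curv = curv_t2 - curv_t1
magnitude = np.hypot(d_slope, d_curv)
direction = np.arctan2(d_curv, d_slope)

X = np.column_stack([magnitude.ravel(), direction.ravel()])
y = rng.integers(0, 3, X.shape[0])  # hypothetical labels: 0 none, 1 scarp, 2 main body

rfc = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
pred = rfc.predict(X).reshape(magnitude.shape)  # per-pixel class map
```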
“An introduction to OGC API–Moving Features with pygeoapi and MobilityDB”
Wijae Cho, Taehoon Kim, Tsubasa Shimizu, TRAN THUAN BANG, Hirofumi Hayashi, Kyoungsook KIM;
Workshop Proposals
Moving feature data can represent a variety of phenomena, including the movements of vehicles, people, animals, and even weather changes. A moving feature is conceptually a geographic feature with dynamic properties over time. This means that a data model can cover not only locations but also non-spatial attributes. The data model can also support dynamic relationships over time between moving features.
OGC Moving Features standards are developed to provide application services for sharing and handling moving feature data in a standardized way. In particular, OGC MF-JSON (OGC 19-045r3) supports various types of moving feature representations in JSON format. OGC API–Moving Features–Part 1: Core (OGC API–MF Core) provides a standard and interoperable way to manage moving feature data, which has valuable applications in transportation management, disaster response, environmental monitoring, and beyond. OGC API–MF Core also provides operations for filtering, sorting, and aggregating moving feature data based on location, time, and other properties.
This workshop will get you started with OGC API–MF Core and open source-based implementations, which are an extension of OGC API–Features. Specifically, the following items will be addressed in this workshop:
- Lectures
- Introduction of OGC Moving Features SWG
- Moving features conceptual data models
- OGC MF-JSON and OGC API–MF
- Hands-on training
- MF-API Server extension and documentation with pygeoapi
- MobilityDB with OGC MF-JSON
- Visualization with STINUUM
The following open-source software will be used in this workshop:
- Server: pygeoapi for supporting OGC API – MF, https://github.com/aistairc/pygeoapi-mf-api
- Database: MobilityDB (and its Python driver, PyMEOS), https://github.com/MobilityDB
- Client: STINUUM, https://github.com/aistairc/mf-cesium
Each program will be installed using a Dockerfile.
Lastly, you can find more helpful information about OGC API–MF here: https://github.com/opengeospatial/ogcapi-movingfeatures
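Since OGC API–MF extends OGC API–Features, collections and items are exposed under /collections and can be queried over plain HTTP. The sketch below is a hedged illustration of that pattern against a local pygeoapi instance; the base URL and the "taxis" collection name are assumptions, not part of the standard.

```python
# Querying an OGC API - Moving Features deployment with plain HTTP requests.
import requests

BASE = "http://localhost:5000"  # assumed local pygeoapi instance

# List available moving-feature collections
collections = requests.get(f"{BASE}/collections", params={"f": "json"}).json()
for c in collections.get("collections", []):
    print(c["id"], "-", c.get("title", ""))

# Retrieve moving features filtered by bounding box and time window
items = requests.get(
    f"{BASE}/collections/taxis/items",  # 'taxis' is a hypothetical collection
    params={
        "bbox": "139.6,35.5,139.9,35.8",
        "datetime": "2024-01-01T00:00:00Z/2024-01-02T00:00:00Z",
        "f": "json",
    },
).json()
print(len(items.get("features", [])), "moving features returned")
```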
“Basic Python for Geospatial”
Feye Andal, Fritz Dariel Andal;
Workshop Proposals
This workshop offers a comprehensive introduction to utilizing Python programming for geospatial analysis and visualization. Geospatial data is essential in various domains such as environmental sciences, urban planning, agriculture, and disaster management. This workshop aims to equip participants with foundational skills to harness the power of Python libraries and tools for handling, analyzing, and visualizing geospatial data.
By the end of the workshop, participants will have a solid grasp of the core principles of geospatial data handling using Python. They will be empowered to create their own geospatial projects, capable of ingesting, analyzing, and visualizing spatial data to derive meaningful insights.
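To give a taste of the workshop's core stack, here is a small hedged example of reading, analyzing, and mapping vector data with GeoPandas; the input file name and CRS are placeholders.

```python
# Read a vector file, project it, compute an attribute, and map it.
import geopandas as gpd
import matplotlib.pyplot as plt

gdf = gpd.read_file("provinces.geojson")   # hypothetical input file
gdf = gdf.to_crs(epsg=32651)               # project to a metric CRS (placeholder zone)
gdf["area_km2"] = gdf.geometry.area / 1e6  # area per feature in km^2

ax = gdf.plot(column="area_km2", legend=True, cmap="YlGn")
ax.set_title("Province area (km²)")
plt.show()
```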
“Battle of The Best Street-Level Imagery Collection Tool: A Workshop on KartaView and Mapillary using 360 cameras”
Janica Kylle De Guzman, Ceejay T. Abilay;
Workshop Proposals
As the world increasingly goes digital, real-world information becomes accessible online, enabling virtual visits to locations through street-level imagery. This imagery is invaluable for capturing daily life and sharing local perspectives, making it useful for finding attractions or services remotely. Liminal spaces, which might seem trivial, can offer crucial insights for those in need of specific information. Our interactive workshop will introduce participants to KartaView and Mapillary, covering how to access and contribute to these platforms while having fun through demonstrations using GoPro 360 cameras. Open to all, this four-hour session includes a playful scavenger hunt, turning learning into an adventure as participants hone their skills in capturing and sharing street-level imagery.
“Build an Object Snap to a Geometric Location on Web Application”
Siriwat Suttipanyo, Siriya Saenkhom-or;
Workshop Proposals
Object snapping is a fundamental feature in Geographic Information Systems (GIS) that enhances the accuracy and efficiency of spatial data editing and analysis. This technique allows users to seamlessly align and connect geographic features, ensuring spatial relationships are maintained and data integrity is preserved. By snapping objects to predefined points, lines, or polygons, GIS professionals can create more precise maps and models, which is crucial for applications in urban planning, environmental management, and infrastructure development.
The process of object snapping involves algorithms that detect proximity between features and automatically adjust their positions based on user-defined criteria. This capability not only streamlines the editing process but also reduces the likelihood of errors arising from manual adjustments.
As web mapping technologies evolve, the need for intuitive and efficient tools becomes increasingly important. Implementing object snapping in web map applications not only streamlines the editing process but also ensures that spatial relationships are maintained, thereby enhancing the overall quality of geospatial data. This session will explore various methodologies for developing robust snapping algorithms with HTML and JavaScript, highlighting how these solutions can improve the user experience while remaining practical to implement.
For those looking to create a web map application capable of managing data for real-world tasks, such as adjusting the position of a streetlight to a specified area or managing objects to snap to geographic locations, this workshop will address those needs using practical HTML and JavaScript solutions.
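The workshop implements snapping in HTML/JavaScript; purely to keep the algorithm readable, the sketch below expresses the same core logic in Python with Shapely: find the closest location on a target geometry and move the edited point onto it when it falls within a tolerance.

```python
# Core snapping logic: snap a point to the nearest location on a line.
from shapely.geometry import Point, LineString

def snap_point(p: Point, target: LineString, tolerance: float) -> Point:
    """Return p snapped to the closest point on target if within tolerance."""
    dist_along = target.project(p)               # distance along the line
    candidate = target.interpolate(dist_along)   # closest point on the line
    return candidate if p.distance(candidate) <= tolerance else p

road = LineString([(0, 0), (10, 0), (10, 10)])
print(snap_point(Point(4.0, 0.4), road, 0.5))   # POINT (4 0) -- snapped
print(snap_point(Point(4.0, 3.0), road, 0.5))   # unchanged: outside tolerance
```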
“Building an urban digital twin using open data, open source, and open standards, a mago3D way!”
Haneul Yoo, Yeonhwa Jeong, Sanghee Shin, SUNGJIN KANG, Dawoon KIM, Seungmin Kwon;
Workshop Proposals
In this workshop titled "Building an Urban Digital Twin using Open Data, Open Source, Open Standards, a mago3D way!", participants will embark on a hands-on journey to create a digital twin of a selected urban area in Thailand. Leveraging open data from Overture Maps and NASA's 30m resolution Digital Elevation Model (DEM), participants will learn how to integrate and process these datasets using open-source tools like mago3DTiler and visualize the final output in a Cesium-based 3D environment.
The workshop will focus on using open standards, specifically the OGC’s 3D Tiles format, to ensure compatibility and interoperability across platforms. Participants will begin by downloading and processing building data from Overture Maps and terrain data from NASA. These datasets will then be converted into 3D Tiles using mago3DTiler, enabling detailed and accurate 3D representations of the urban environment. The final visualization step will be performed using Cesium, where participants can explore the digital twin in an interactive 3D space.
This workshop is designed for GIS professionals, urban planners, and developers interested in the creation of urban digital twins using open technologies. By the end of the session, participants will have a comprehensive understanding of how to create, process, and visualize 3D urban data using open resources and standards.
“Building an Analysis-Ready Cloud Optimized Global Lidar Data (GEDI and ICESat-2) for Earth System Science applications”
Yu-Feng Ho;
General Track
Global Ecosystem Dynamics Investigation (GEDI) and Ice, Cloud, and Land Elevation Satellite 2 (ICESat-2) are NASA Earth observation missions that construct a three-dimensional model of the Earth's surface in space and time, empowered by Light Detection and Ranging (LiDAR). GEDI and ICESat-2 data are organized by orbit ID, sub-orbit granule, and track, and distributed in HDF5 format, which is optimized for big-data storage. However, this approach is inconvenient for extracting spatio-temporal areas of interest, because each file stores a track crossing a huge range of latitude and longitude while lacking a spatial index.
To facilitate random access to small areas of interest, we propose a data reconstruction process through Apache Parquet. Parquet is an open-source column-oriented data format designed for efficient data storage and retrieval. We sequentially stream raw data into spatio-temporal partitioning blocks (5 degree x 5 degree x year). This layout optimizes the number of partitions (n = 3337) and individual file size (~300 MB). The independence of raw data files and a predefined partitioning scheme enable parallel processing and periodic updates as new data become available.
During the reconstruction, we selected essential attributes and applied quality filtering based on the scientific literature. We excluded GEDI shots with a Quality Flag equal to 0, a Degradation Flag larger than 0, or a Sensitivity smaller than 0.95. For ICESat-2 ATL08, we first excluded segments where terrain and/or canopy height are NaN. We then reconstructed individual photons from ATL03 by ph_segment_id and excluded those classified as noise, as well as segments containing more than 28 photons, according to the results of previous research [1].
Data is finally converted to GeoParquet and published on a cloud server under a CC-BY 4.0 license. GEDI Level 2 totals 1.4 TB and ICESat-2 ATL08 totals 3.8 TB. GeoParquet supports two levels of predicate pushdown: first at the partition level, and second at the file level. The partitioning of the global LiDAR datasets enables coarse spatial (5 x 5 degree) and temporal (year) filtering. The footer of each GeoParquet file enables spatial filtering via bounding boxes or geometry features, and temporal filtering using the datetime columns. Further attribute filtering is possible.
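A hedged sketch of how such a partitioned store can be queried with partition-level and row-level filters; the dataset path, partition column names ("tile", "year"), and coordinate column names are assumptions for illustration, not the published schema.

```python
# Read a small area of interest from a hive-partitioned (Geo)Parquet store.
import pyarrow.dataset as ds
import pyarrow.compute as pc
import geopandas as gpd

dataset = ds.dataset("s3://bucket/gedi_l2/", format="parquet", partitioning="hive")

# Coarse filter on the 5-degree block and year (partition-level pushdown),
# then fine filter on per-row coordinates (file-level pushdown via statistics).
table = dataset.to_table(
    filter=(pc.field("tile") == "N05E100") & (pc.field("year") == 2020)
    & (pc.field("latitude") >= 7.0) & (pc.field("latitude") <= 7.5)
    & (pc.field("longitude") >= 100.0) & (pc.field("longitude") <= 100.5)
)
df = table.to_pandas()
gdf = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df["longitude"], df["latitude"]), crs="EPSG:4326"
)
print(len(gdf), "footprints in the area of interest")
```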
The concept of Analysis-Ready Cloud Optimized (ARCO) data has been defined and implemented for raster data, using technologies such as Zarr or Cloud Optimized GeoTiff (COG) [2]. However, corresponding implementations for vector data are scarce. This work delivers two instances of global ARCO vector datasets. It not only adheres to the concept of 4C (complete, consistent, current, and correct), but also tackles the challenge of organizing terabyte-scale geospatial vector data.
References
[1] Milenković, M., Reiche, J., Armston, J., Neuenschwander, A., De Keersmaecker, W., Herold, M., & Verbesselt, J. (2022). Assessing Amazon rainforest regrowth with GEDI and ICESat-2 data. Science of Remote Sensing, 5, 100051.
[2] Stern, C., Abernathey, R., Hamman, J., Wegener, R., Lepore, C., Harkins, S., & Merose, A. (2022). Pangeo Forge: crowdsourcing analysis-ready, cloud optimized data production. Frontiers in Climate, 3, 782909.
“Camera-LiDAR Fusion for multimodal 3D Object detection in Autonomous Vehicles”
Badri Raj Lamichhane;
General Track
The rapid development of autonomous vehicles (AVs) demands robust perception systems capable of reliably recognizing and classifying objects in complex urban environments. Combining camera and LiDAR sensors has emerged as a promising way to improve the reliability and precision of 3D object detection. This research describes a multimodal fusion framework that uses camera images and LiDAR point clouds to accomplish high-performance 3D object recognition in urban scenes. Camera sensors provide the color and texture information necessary for identifying traffic signs, pedestrians, and cars, whereas LiDAR provides the precise depth measurements required for interpreting object geometry and spatial relationships. The proposed fusion technique improves detection accuracy by exploiting the sensors' complementary strengths, especially in difficult settings such as occlusion and fluctuating illumination, and is evaluated on the open KITTI dataset. OpenPCDet is used as the open-source library for 3D object detection and MMDetection for 2D detection; Detectron2, Facebook AI Research's flexible framework for 2D and 3D detection tasks, is another popular option.
Fusion is accomplished via a well-designed architecture that aligns and combines data from both modalities at various stages of the detection pipeline, such as feature extraction, region proposal, and classification. Advanced deep learning techniques, such as convolutional neural networks, are used to process and integrate the multimodal input. Experimental results show that multimodal 3D object detection outperforms single-modality techniques in robustness and precision, particularly when recognizing small and partially occluded objects.
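The geometric step underlying camera-LiDAR alignment is projecting LiDAR points into the image plane using KITTI-style calibration matrices. The sketch below illustrates that transformation with placeholder calibration values (not real KITTI calibration data).

```python
# Project LiDAR points into pixel coordinates via KITTI-style calibration.
import numpy as np

def project_lidar_to_image(points, Tr_velo_to_cam, R0_rect, P2):
    """points: (N, 3) LiDAR xyz -> (N, 2) pixel coordinates."""
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])  # homogeneous (N, 4)
    cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)    # rectified camera frame (3, N)
    cam_h = np.vstack([cam, np.ones((1, n))])     # (4, N)
    img = P2 @ cam_h                              # pinhole projection (3, N)
    return (img[:2] / img[2]).T                   # divide by depth -> (N, 2)

# Placeholder calibration: identity extrinsics, simple pinhole intrinsics
Tr = np.hstack([np.eye(3), np.zeros((3, 1))])
R0 = np.eye(3)
P2 = np.array([[700.0, 0.0, 620.0, 0.0],
               [0.0, 700.0, 190.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])

pts = np.array([[2.0, 1.5, 15.0], [-3.0, 0.5, 30.0]])  # points in front (z > 0)
print(project_lidar_to_image(pts, Tr, R0, P2))
# Projected pixels inside the image can then be fused with 2D detections.
```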
“Catch them young! Geospatial Capacity building for School Children and Young adults”
Natraj Vaddadi;
General Track
Geospatial technologies are being widely used to address societal needs such as land use, demographics, and natural resource management. Spatial data analysis plays a crucial role in how we understand and interact with our environment. These technologies involve using maps, GPS, and satellite images to collect, analyse, and display data about the world. Teaching these skills to school children has become increasingly important.
Geospatial techniques help students develop a better understanding of the world around them. Maps, for instance, are not just tools for finding directions; they tell stories about our environment, culture, and history. By learning to read and create maps, children begin to see the connections between different places and the events that shape them. Understanding these concepts early helps them develop a broader view of the world and how places are interconnected. This kind of knowledge fosters a global perspective, encouraging students to think beyond their immediate surroundings and consider how their actions can affect the world.
As part of its mission to build awareness of the importance of Earth Science in daily life, the team at the Centre for Education and Research in Geosciences (CERG), India, conducts various activities aimed at laypeople and schoolchildren. These events are conducted throughout the year. One such program is a workshop titled “Maps & Me”, which is focused on giving school and college children a basic understanding of the geospatial world and open-source mapping tools. In the ‘Maps & Me’ workshop we explore the basics of maps, satellite images, and digital maps, and how to navigate using these tools.
The workshop is hands-on and interactive, covering key map elements like latitude, longitude, and scale, along with a session on using QGIS, a popular open-source mapping software. Participants are introduced to the fundamentals of remote sensing, satellite imagery, photo recognition, and digital mapping. After that, they get to create their own maps using QGIS.
At CERG we believe that such skills are important because they help students make sense of real-world issues, like climate change, urban planning, and natural resource management. By learning how to read and interpret maps, for example, young students can see how their local environment fits into the larger world. It also encourages them to think critically and solve problems creatively, skills that are valuable in all areas of life.
“Celebrating four decades of innovation: The GRASS GIS Project”
Markus Neteler;
Keynote Talk
The GRASS GIS project, a pioneering open-source geographic information system, celebrated its 40th anniversary in 2023. As one of the long-standing contributors, I am honored to reflect on the remarkable journey of this leading open-source geospatial software and community. Over the past four decades, GRASS GIS has grown from a modest project initiated by the U.S. Army Corps of Engineers to a robust, globally recognized platform for geospatial analysis and modeling. This evolution is a testament to the dedication and collaborative spirit of the GRASS community, which has continually driven innovation and excellence.
My personal relationship with GRASS GIS began over thirty years ago when I was a student and first encountered its powerful capabilities. Even then, I was fascinated by its potential to revolutionize spatial analysis and environmental modeling. With the advent of the Internet, we were able to build a passionate community behind the project. Through collaborative efforts, we have significantly expanded the functionality of GRASS GIS, improved its user interface through multiple iterations, and ensured its adaptability to the ever-changing technological landscape.
In this keynote, I will reflect on the milestones that have shaped GRASS GIS from its inception at the U.S. Army Corps of Engineers' Construction Engineering Research Laboratory (USA/CERL) to its current status as a cornerstone of the open-source geospatial ecosystem. The latest releases of GRASS GIS include thousands of changes, including the new single-window GUI layout and enhanced parallelization capabilities. These enhancements underscore our commitment to improving the user experience and computational efficiency. The past decade has also been marked by vibrant community engagement through the OSGeo Foundation. I will highlight key contributions from the global community, showcase groundbreaking research and applications, touch on FOSS business models, and explore the challenges we have overcome along the way.
The future of GRASS GIS is bright as we anticipate further innovation and expanded applications, driven by the same collaborative ethos that has defined our past. Together we will continue to push the boundaries of what is possible in geospatial analysis, ensuring that GRASS GIS remains at the forefront of this dynamic field.
“Comparative Evaluation of Machine Learning Models for Zoning Slope Failure Susceptibility: A Case Study of Yen Bai Province, Vietnam”
Tran Tung Lam, Tatsuya Nemoto, Venkatesh Raghavan, Xuan Quang Truong;
General Track
Yen Bai Province in northern Vietnam, especially the Mu Cang Chai (MCC) and Van Yen (VY) districts, is highly susceptible to slope failure due to rugged terrain, high rainfall, and anthropogenic activities. In this research, MCC was used as the area for training and testing the machine learning models, while VY served for model validation owing to its similar topographic and geological conditions.
The methodology treats slope failure prediction as a binary classification task (landslide/no-landslide). A balanced dataset of 286 landslide and 286 non-landslide points in MCC was compiled, along with 16 contributing factors covering topographic, geologic, hydrologic, anthropogenic, and vegetation conditions, calculated from open data sources and drawn from existing databases and previous research on the area. Principal Component Analysis (PCA) and Pearson correlation coefficients refine the dataset by evaluating correlated factors and removing the least important ones; this reduces the size of the training dataset while preserving the performance of the ML models. Four ML models, Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), and Extreme Gradient Boosting (XGBoost), are trained and evaluated to select the best hyperparameters for each model. Model accuracy is assessed via confusion matrices, accuracy scores, ROC (Receiver Operating Characteristic) curves, and AUC (Area Under the ROC Curve).
Results show the models perform effectively in MCC, with an average accuracy across all models of 0.74. The trained ML models with tuned hyperparameters were then validated on the VY dataset, which also consists of 16 factors and 308 landslide/non-landslide points. RF and XGBoost have the highest accuracy for both the training and testing area (MCC) and the validation area (VY), with XGBoost showing a slightly higher accuracy score of 0.83 while RF scores 0.80.
The XGBoost model produces good results and could be further optimized to achieve even better zonation in future studies. The machine learning workflow can be applied on other areas that are prone to slope failures. Other geologic and weathering factors could be included in the analysis to further improve the model.
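A hedged sketch of the train/evaluate loop described above, with synthetic arrays standing in for the 16 conditioning factors and the balanced landslide labels; XGBoost (xgboost.XGBClassifier) plugs into the same loop if installed.

```python
# Train several classifiers on a balanced dataset and compare accuracy/AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(572, 16))   # 286 landslide + 286 non-landslide points
y = np.repeat([1, 0], 286)       # hypothetical labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=42),
    "SVM": SVC(probability=True, random_state=42),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(name,
          "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3),
          "AUC:", round(roc_auc_score(y_te, proba), 3))
```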
“Design and Deploy Microservices for GIS Applications Applying OGC Standards”
Worrathep Somboonrungrod;
General Track
In the past, the installation of GIS applications often encountered challenges with service flexibility: services could not be scaled to accommodate a growing number of users, the interconnection and exchange of data across services were constrained, and service separation was not feasible. These issues had a significant impact on overall usability.
The design and deployment of applications as microservices are gaining popularity and widespread adoption. This approach makes the installation process flexible, allowing services to be added or removed as needed to match usage requirements. It subdivides services into smaller units to facilitate installation, following the principles outlined in The Twelve-Factor App (https://12factor.net/).
Nowadays, GIS application development is guided by OGC Standards: standardized guidelines that define how geospatial data is stored and provided. These standards encompass many aspects of geospatial information interoperability. The principles of The Twelve-Factor App can therefore be adapted to the design and deployment of GIS applications while ensuring compliance with OGC Standards.
This session will elaborate on how the principles of The Twelve-Factor App can be harmonized with OGC Standards, as well as the various technologies selected for the design and deployment of applications.
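As a minimal sketch of one Twelve-Factor principle (factor III, store config in the environment) applied to an OGC API-style service: the endpoint layout, variable names, and use of Flask are illustrative assumptions, not a full OGC implementation.

```python
# Twelve-Factor config-in-environment for a tiny OGC API-style landing page.
import os
from flask import Flask, jsonify

# Config comes from the environment, so dev/stage/prod deployments differ
# only in their environment variables, not in code or container image.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost/gis")
SERVICE_TITLE = os.environ.get("SERVICE_TITLE", "Demo OGC API service")

app = Flask(__name__)

@app.route("/")
def landing_page():
    # OGC API landing pages advertise the service and link to its resources
    return jsonify({"title": SERVICE_TITLE,
                    "links": [{"rel": "data", "href": "/collections"}]})

if __name__ == "__main__":
    app.run(port=int(os.environ.get("PORT", 8080)))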
“Designing user experience and user interface for effective map applications.”
Jirayut Narksin, Nichaphat Hongkeaw, Mayurachat Saechan;
General Track
Geographic information technology has become an important part of our daily lives, leading to the development of various map applications. The design of these applications requires special attention, particularly in terms of User Experience (UX) and User Interface (UI) design. Effective UX/UI design significantly impacts user satisfaction and ease of use, contributing to the overall efficiency and modernity of map applications. By understanding user needs through User Research and Usability Testing, we can establish principles and guidelines for creative design that enhance usability and ensure consistency between data presentation and user interaction.
Creating a map application that offers a good user experience involves careful design of map elements, such as the layout of basic map tools, the selection of symbols, the arrangement of data, and the design of interactions. The design must be suitable for different types of usage, such as travel applications, survey applications, or applications for specific uses. Additionally, it must support various devices, including mobile phones, computers, tablets, and other devices with different screen sizes.
Furthermore, techniques that can be applied in the design process are presented in this session to achieve the best results. This includes creating an immersive user experience by strategically using colors, fonts, and layout. Developing engaging and interesting ways to display information will help users feel more connected to the application. It is also important to stay updated with current trends in map application design to ensure that the developed applications are modern and responsive to global changes.
This presentation is suitable for designers, developers, and anyone interested in geographic information technology, especially those involved in developing map applications. It will provide design guidelines that can be applied to various projects, effectively meeting the diverse needs of future users.
“Developing a Web-Based Spatial Decision Support System (SDSS) Using Geoserver”
CHANDAN M C;
Workshop Proposals
This hands-on workshop delves into the creation of a web-based Spatial Decision Support System (SDSS) from the ground up, utilizing Geoserver as a key tool. SDSS development involves the integration of conventional and spatially referenced data, decision logic, and a web-based interface for spatial data analysis. The SDSS architecture comprises components such as Web Processing Service (WPS), Web Feature Service (WFS), Web Mapping Service (WMS), Geoserver/Map-server, and Geo-processing.
Participants will learn how to retrieve map features from a database, encode raw data into defined layers, and assess these layers within the core DSS. Sensitivity analysis aids in selecting the optimal alternative through a decision-making process. The resulting outputs are visualized through styled layers and a user-friendly graphical interface.
The workshop also explores the role of web servers in serving web content, processing HTTP requests, and delivering web pages, including HTML documents, images, style sheets, and scripts. Geoserver, open-source Java-based software, is employed to view, share, and store spatial data on the web. It supports various spatial data formats and provides interoperability to publish data from diverse sources using open standards.
By the end of this workshop, participants will possess the skills to construct a robust web-based SDSS, empowering them to make informed spatial decisions using Geoserver and other essential web development tools.
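A small hedged example of the WFS building block used in such an SDSS, requesting features from a GeoServer endpoint with OWSLib; the URL and layer name are placeholders for a workshop instance.

```python
# Fetch features from a WFS endpoint, constrained by a bounding box.
from owslib.wfs import WebFeatureService

wfs = WebFeatureService("http://localhost:8080/geoserver/wfs", version="1.1.0")

# Discover published feature types
print(list(wfs.contents))

response = wfs.getfeature(
    typename=["workshop:parcels"],        # hypothetical layer
    bbox=(100.4, 13.6, 100.7, 13.9),
    srsname="EPSG:4326",
)
with open("parcels.gml", "wb") as f:
    f.write(response.read())              # GML payload for further analysis
```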
“Development of 3D Mapping Library to Facilitate Photo Alignment with 3D Models”
Daisuke Yoshida, Naoki Ueda, Benjamin Palsa Leamon;
Poster Presentations
In our laboratory, we are conducting research in collaboration with several municipalities to promote digital transformation (DX) in infrastructure maintenance by leveraging new technologies such as drones and deep learning. At the same time, we are broadly applying the research results to fields such as cultural heritage preservation. One of our past initiatives involved measuring the exterior and interior of Kishiwada Castle with multiple 3D laser scanners and making the resulting 3D data available as open data.
In our research on infrastructure maintenance, we are developing a web-based system that allows 3D management of infrastructure defects by mapping aerial photographs in 3D onto 3D point cloud data and 3D models obtained from drone surveys. Accurately aligning drone aerial images within a 3D space of real-world coordinates, in both position and angle, requires advanced technology and a significant amount of labor. In this research, to automate and simplify this process to some extent, we have developed a 3D mapping library based on CesiumJS and introduce an example of mapping aerial photographs in 3D onto a 3D model of Kishiwada Castle with real-world coordinates.
By making the process of "photo alignment," which previously required extensive know-how and labor, more user-friendly, we believe that we can significantly reduce the burden on content creators not only in the field of infrastructure maintenance but also in various 3D content fields such as education (creating 3D teaching materials for geography, regional studies, and disaster prevention education) and regional revitalization (creating 3D content for virtual tours).
In the future, in addition to improving performance by revising the source code, we plan to make design improvements so that the library is more intuitive to use, and to release it as open-source software.
“Development of Large-scale Trip Analysis Toolkits for Vehicle-based GPS Trajectories using Apache Spark and Open Data: A Case Study of Taxis in Bangkok, Thailand”
Apichon;
Academic Track (Oral)
Urban planning and mobility analysis have traditionally been studied through observation or questionnaires, which can be time-consuming and costly. However, the rapid advancement of technology has enabled tracking devices to be installed in individual vehicles, allowing the measurement of various values, particularly global positioning system (GPS) signals.
The location data collected is accurate, regularly updated, and can offer valuable insights into people's movements and behavior. Because the amount of trajectory data is substantial and continues to increase over time, specialized platforms and skills are needed for its analysis.
In this study, we developed large-scale analysis toolkits to extract insights, including trip statistics, origin–destination analysis, and hotspot identification from vehicle-based GPS trajectories. The toolkits are specifically designed to handle large-scale datasets using Apache Spark, an analytics engine capable of processing large volumes of data by distributing tasks across a Hadoop cluster for efficient processing.
Algorithms for the analytics model were created to reconstruct trips based on their type of mobility, and trip locations were mapped using open data such as administrative boundaries and points of interest. We then verified our approach using real-world taxi data from Bangkok, Thailand.
The results revealed that taxis had more vacant trips than busy trips, and the travel time and distance taken to search for passengers were longer than those taken to pick them up and drop them off. Taxi activity was concentrated in the city center and nearby areas, particularly those within the vicinity of transport-connecting hubs. Taxi stay hotspots were mainly areas near tourist attractions and parking hubs.
Furthermore, we found that the processing performance of the proposed approach increased with the number of executor cores. This study comprehensively presented information on taxi travel patterns, service availability, hotspots, and processing performance using the developed trip analysis toolkits.
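A hedged PySpark sketch of the kind of trip reconstruction and aggregation the toolkits perform; the input path, column names, and occupancy-based trip definition are assumptions for illustration, not the authors' exact implementation.

```python
# Reconstruct trips from GPS points by occupancy changes, then aggregate.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("taxi-trip-stats").getOrCreate()

# GPS records with columns: vehicle_id, timestamp, lat, lon, occupied (0/1)
points = spark.read.parquet("hdfs:///data/bangkok_taxi_gps/")

w = Window.partitionBy("vehicle_id").orderBy("timestamp")
points = points.withColumn("prev_occupied", F.lag("occupied").over(w))
# A new trip starts whenever the occupancy state flips (first row starts one)
points = points.withColumn(
    "new_trip",
    F.coalesce((F.col("occupied") != F.col("prev_occupied")).cast("int"), F.lit(1)),
)
points = points.withColumn("trip_id", F.sum("new_trip").over(w))

# Per-trip statistics: start/end time and whether the trip was vacant or busy
trips = points.groupBy("vehicle_id", "trip_id").agg(
    F.min("timestamp").alias("start"),
    F.max("timestamp").alias("end"),
    F.first("occupied").alias("busy"),
)
trips.groupBy("busy").count().show()   # vacant vs. busy trip counts
```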
“Down the FOSS4G-Asia Memory Lane”
Toru MORI;
Keynote Talk
I first ventured into the world of open-source GIS in 2003. Back then, the landscape was much different—GRASS GIS and MapServer were known names, but familiar tools like QGIS and PostGIS were still in their infancy, with limited functionality and a small user base.
It was in this early era that Dr. Venkatesh Raghavan coined the term FOSS4G—a milestone now celebrating its 20th anniversary since that pivotal moment in 2004. Just a few months later, in September, the world’s first FOSS4G conference was held at Chulalongkorn University in Bangkok, where Europe’s GRASS GIS community met the MapServer community from North America. This historic gathering laid the groundwork for the creation of the OSGeo Foundation.
In this keynote, I will take you down memory lane, sharing the story of how FOSS4G was born and how it evolved into the vibrant ecosystem we know today. Join me as I recount these foundational moments through storytelling, from a perspective shaped by those pioneering days.
“EIA (Environmental Impact Assessment) combined with 3D open-source geospatial”
Hakjoon Kim, Yooyeol Yim, Taiyoung Kim;
General Track
This talk presents a research case of an open-source implementation of a task management system, based on a 3D spatial information web service, built to efficiently conduct and manage tasks in the field of environmental impact assessment (EIA), which combines very diverse specialties.
“Empowering Citizen Scientists for Safer and Resilient Communities: A Story of Creating a Metro Manila Climate and Disaster Risk Atlas through QGIS and OSM”
Janica Kylle De Guzman, Ceejay T. Abilay;
General Track
Disasters like typhoons, earthquakes, and flooding are inevitable, especially as climate change intensifies. This heightens the need for effective information sharing, which some governments have addressed by sending out timely digital alerts. While large organizations work to prepare communities for disasters, local knowledge often gets overlooked despite its critical role in understanding a hazard’s impact. Residents possess intimate knowledge of their surroundings, which becomes invaluable during emergencies for identifying evacuation routes and understanding the landscape. In creating a disaster risk assessment, selecting data sources is crucial, as demonstrated by the Metro Manila Climate and Disaster Risk Atlas. This atlas, created using QGIS and OSM, assesses hazards in Metro Manila—a region prone to a potential magnitude 7.2 earthquake due to the West Valley Fault System. Leveraging these tools provides a comprehensive view of risks, empowering communities with vital information about the vulnerabilities and resilience of their locales. The project exemplifies how integrating local and digital knowledge fosters a safer, more prepared society.
“Enjoying Delicious Meals Using MapLibre: A Journey into Developing a MapLibre Module”
Shinsuke Nakamori;
General Track
In this session, I will introduce a module I developed for clustering icon image markers using MapLibre, with the theme of "finding nearby restaurants that look delicious." This is the first module I’ve ever created, and it’s designed with a very simple structure. Through this session, I hope participants will take away two key messages: that creating what you want to make is enjoyable, and that even if you’re not highly skilled, sharing your work can lead to valuable learning experiences.
“Expectation Testing for Web Map Applications”
Parichat Namwichian;
General Track
As part of developing a web map application, ensuring functionality and accuracy is paramount to delivering a reliable user experience. This abstract outlines a systematic approach to expectation testing for web maps, focusing on validating that these applications meet predefined criteria and user expectations.
Expectation testing involves setting specific criteria that web map applications must meet to ensure they perform as intended. This process includes validating map data accuracy, user interface responsiveness, and the effectiveness of interactive features such as zooming, panning, and layer management. The goal is to ensure that the web map not only displays information correctly but also responds appropriately to user interactions and integrates seamlessly with other system components.
By implementing a robust expectation testing framework, developers can identify and address potential issues before deployment, ensuring that web map applications deliver accurate and efficient performance. This process not only enhances the quality of the application but also builds user trust by meeting or exceeding their expectations.
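A hedged sketch of what such expectation tests can look like in practice, using pytest-style functions against a tile server and a feature API; the URLs, coordinates, and expected fields are placeholders for a project's own acceptance criteria.

```python
# Expectation tests: the map backend must serve valid tiles and features.
import requests

TILE_URL = "https://example.org/tiles/{z}/{x}/{y}.png"   # hypothetical server

def test_tile_is_served_as_png():
    resp = requests.get(TILE_URL.format(z=12, x=3200, y=1900), timeout=10)
    assert resp.status_code == 200                        # tile exists
    assert resp.headers["Content-Type"] == "image/png"    # correct format
    assert resp.content[:8] == b"\x89PNG\r\n\x1a\n"       # valid PNG header

def test_feature_query_returns_expected_fields():
    resp = requests.get("https://example.org/api/features",  # hypothetical API
                        params={"bbox": "100.4,13.6,100.7,13.9"}, timeout=10)
    feature = resp.json()["features"][0]
    assert {"geometry", "properties"} <= feature.keys()   # GeoJSON shape
```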
“Experience Digital Twin applications enhanced by AI prompts”
Hanjin Lee, Hyeeun Ahn, Jaeseon Kim, Heejin Ha, Sanghee Shin;
General Track
The range of AI applications in the geospatial information field is diverse: object detection, area extraction, change detection, and super-resolution. In our industry, we collectively refer to these technologies as GeoAI.
However, these technologies remain primarily confined to specialized groups, making it difficult to consider them as universal technologies ready for everyday use.
In light of this, we sought to explore areas that could be easily accessible to the general public. Consequently, we developed 'magoGPT', an application enabling users to manipulate maps and interact with 3D objects using natural language in a digital twin environment.
Built on a 3D FOSS architecture centered on CesiumJS, it uses 3D Tiles buildings, terrain from high-resolution DEMs, and multi-layer visualisation. It also draws on artificial intelligence techniques such as Large Language Models (LLMs), Speech-to-Text (STT), and Natural Language Processing (NLP).
In this presentation, we'd like to introduce magoGPT, the technology behind it, and the development process.
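Purely to illustrate the general pattern behind prompt-driven map control (an LLM turning an utterance into a structured command the 3D client executes), here is an invented sketch; the schema, function, and values are hypothetical and are not magoGPT's actual interface.

```python
# Natural-language request -> structured map command (illustrative only).
import json

COMMAND_SCHEMA = {
    "action": "flyTo | highlight | toggleLayer",
    "target": "place name or layer id",
    "params": {"heading": "degrees", "pitch": "degrees"},
}

def llm_to_command(utterance: str) -> dict:
    """Stand-in for an LLM call that returns JSON matching COMMAND_SCHEMA."""
    # A real system would send `utterance` plus the schema to an LLM here.
    return {"action": "flyTo", "target": "Seoul City Hall",
            "params": {"heading": 0, "pitch": -45}}

cmd = llm_to_command("Take me to Seoul City Hall and look down at 45 degrees")
print(json.dumps(cmd, indent=2))  # the client maps this onto camera calls
```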
“Exploring Segment Anything Model's Potential in Geospatial Data: Case Studies for Landslide and Forest Canopy Detection”
Nobusuke Iwasaki, Ayaka Onohara;
General Track
In recent years, the availability of geospatial data has significantly increased, with aerial photographs and Digital Elevation Models (DEMs) becoming widely accessible as open data. Additionally, the acquisition of high-resolution image data through drones has become more feasible and commonplace. However, extracting meaningful information from these vast datasets remains a labor-intensive process, often requiring significant time and resources.
While various deep learning techniques have been employed to address this challenge, they typically demand extensive effort in collecting and preparing training data. In light of these constraints, the Segment Anything Model (SAM) has emerged as a promising solution. SAM, a recent development in the field of fundamental models, offers the advantage of zero-shot classification without the need for specific training. Moreover, its Apache-2.0 license ensures accessibility for a wide range of applications.
This presentation aims to demonstrate the potential of SAM in the realm of geospatial information processing. We will showcase practical applications of SAM in analyzing geospatial data, with a focus on two critical areas: landslide detection and forest canopy mapping. These case studies will illustrate how SAM can efficiently process and extract valuable insights from complex geospatial datasets, potentially enhancing the efficiency and effectiveness of our approach to environmental monitoring and disaster risk assessment.
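A hedged sketch of zero-shot segmentation with SAM on an aerial or drone image tile, using the segment-anything package's automatic mask generator; the checkpoint path and input file are placeholders.

```python
# Zero-shot segmentation of an image tile with the Segment Anything Model.
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

image = np.array(Image.open("ortho_tile.png").convert("RGB"))  # aerial tile

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

masks = mask_generator.generate(image)  # no task-specific training required
print(len(masks), "segments found")

# Each mask dict has 'segmentation' (bool array), 'area', 'bbox', etc.;
# candidate landslide or canopy polygons can be filtered by size and shape.
largest = max(masks, key=lambda m: m["area"])
print("largest segment area (px):", largest["area"])
```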
“Flood Inundation model using the Itzi distributed hydrologic modeling tool: A case study in Phayao Province, Thailand”
Rhutairat Hataitara;
Poster Presentations
Flooding is a natural disaster that can seriously harm people and property. It particularly impacts populations and infrastructure in places that are vulnerable to seasonal high rains, climate change, and extreme weather. This research focuses on the analysis of the flood situation in the study area of Mae Ka Subdistrict, Mueang Phayao District, Phayao Province, located in the northern part of Thailand. The Itzi flood simulation tool, an open-source software package, was used to model surface water flow and accumulation processes. This tool enables the simulation of various flood scenarios, such as heavy-rainfall flooding, the assessment of flood risk areas, and forecasting of the response of flood-affected areas.
This research began by collecting topographic data of the study area, together with rainfall data and other environmental factors that affect flooding. After that, possible flooding of Mae Ka Subdistrict was simulated using Itzi in order to analyze the risk level and the impact on infrastructure. The simulation results show the flow patterns of water, allowing a clear assessment of risk, including the identification of at-risk areas that may be most severely affected. The analysis of the simulation outputs can inform the development of future flood prevention and risk reduction plans: identifying water flow paths and accumulation areas can guide solutions and infrastructure preparations.
“FOSS4G and Sustainable Development Goals in Asia and the Pacific”
Hamid Mehmood;
Keynote Talk
Large Language Models (LLMs) have revolutionized numerous aspects of modern life, demonstrating remarkable capabilities in language processing, code generation, and knowledge synthesis. Their potential extends to supporting the achievement of various Sustainable Development Goals (SDGs), offering innovative approaches to tackling complex global challenges. One promising application area is the mapping and monitoring of phenomena measurable through Earth Observation (EO) data. It is estimated that around 40 of the 169 SDG targets and 30 of the 232 SDG indicators could benefit from the insights provided by EO data analysis. The use of artificial intelligence (AI) for EO data analysis can further increase the number of SDG indicators that can be monitored, with higher accuracy and frequency.
In this context, research is underway to develop multimodal LLMs capable of directly processing EO data. However, these models are often computationally expensive to train, develop, and maintain, making them less feasible for low-capacity, high-risk countries that urgently need technological solutions for disaster mitigation. To address this challenge, we introduce SATGPT (accessible at satgpt.net), an innovative solution that leverages the current capabilities of LLMs and integrates them with cloud computing platforms and EO data. SATGPT represents a fully functional, innovative spatial decision support system designed for rapid deployment, particularly in resource-limited contexts.
This talk presents an instance of SATGPT configured for flood mapping, as an example. It simplifies the process with a user-friendly interface requiring only a prompt specifying flood duration and location. SATGPT leverages LLMs to generate GEE code dynamically, access historical databases, or perform unsupervised classification to detect flooded areas. This innovative integration of LLMs with GEE enhances the speed, accessibility, and real-time capabilities of flood mapping, making it more accessible to non-specialists and supporting resilient disaster management practices.
Furthermore, to build the capacity to use these technologies effectively, the talk discusses the development of an online, free, and self-paced course titled "Introduction to Geospatial Data Analysis with ChatGPT and Google Earth Engine." This course introduces participants to the fundamentals of ChatGPT and the Earth Engine Code Editor platform, empowering them to process and interpret geospatial data effectively outside SATGPT. The innovative aspect of the course is developing the Geo-prompt engineering (GPE) concept, which focuses on using spatial, temporal, and satellite-sensor-specific information in the prompt engineering process. The course aims to foster broader adoption of SATGPT and similar tools, equipping users with the knowledge and skills needed to leverage advanced technologies in disaster management. This talk is structured to provide a comprehensive overview of SATGPT and its contribution to enhancing flood mapping and disaster management in the Asia-Pacific region.
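For flavor, a hedged sketch of the kind of Earth Engine code an LLM might generate for flood mapping: a simple Sentinel-1 backscatter threshold over a place and period. The AOI, dates, and threshold value are placeholders, not SATGPT's actual output.

```python
# Simple Sentinel-1 VV backscatter threshold for open-water/flood detection.
import ee

ee.Initialize()

aoi = ee.Geometry.Rectangle([89.0, 23.5, 90.5, 24.5])  # hypothetical AOI

s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
      .filterBounds(aoi)
      .filterDate("2024-07-01", "2024-07-15")
      .filter(ee.Filter.eq("instrumentMode", "IW"))
      .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "VV"))
      .select("VV"))

# Water strongly dampens C-band backscatter; threshold the median composite
flooded = s1.median().lt(-17).selfMask()

# Total detected water/flood area within the AOI (square meters)
area = flooded.multiply(ee.Image.pixelArea()).reduceRegion(
    reducer=ee.Reducer.sum(), geometry=aoi, scale=30, maxPixels=1e10)
print(area.getInfo())
```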
“From Complexity to Clarity: An Intuitive 3D Map Application Development Experience with Cesium and Svelte”
SUDA;
General Track
This session will explore how combining the powerful 3D mapping capabilities of Cesium with the intuitive and efficient web framework Svelte can simplify the development of 3D map applications. We will demonstrate practical examples, such as data binding and custom stores, to show how these tools can make working with Cesium more straightforward and intuitive. Participants will gain new insights into using Svelte and Cesium together, making complex 3D geospatial projects more accessible and manageable. This session is ideal for frontend engineers looking to enhance their development experience in the growing field of 3D mapping.
“From Data to Insights: The Impact of Generative AI on IoT-Based Environmental Monitoring”
Dongpo Deng;
General Track
The integration of Internet of Things (IoT) technology with environmental monitoring systems has significantly enhanced the ability to collect real-time data from diverse and remote locations. However, the challenge lies in efficiently analyzing and interpreting this vast amount of data to make informed decisions. This paper explores the application of generative AI to IoT data analysis for environmental monitoring. Generative AI, with its advanced natural language processing capabilities, offers a novel approach to processing and understanding complex data patterns. By leveraging generative AI, it is possible to automate the identification of critical environmental changes, predict trends, and provide actionable insights with unprecedented accuracy. This study demonstrates how generative AI can enhance data analytics in environmental monitoring through case studies that highlight improvements in air quality assessment (e.g., PM2.5). The findings suggest that generative AI not only streamlines the data analysis process but also enhances the reliability and responsiveness of environmental monitoring systems. Consequently, this research underscores the potential of generative AI to transform IoT-based environmental monitoring, promoting more proactive and effective environmental management practices.
“GeoCambodia: A web application to visualize Cambodia Then and Now through aerial photographs and satellite images.”
Chamroeun YORNGSOK;
General Track
The French National Geographic Institute (IGN) came to Cambodia between 1952 and 1954 to carry out a large-scale photographic project, taking around 11,000 aerial images over a large part of the country.
Today, this collection represents an exceptional archive that takes us back 70 years to the urban and rural landscapes of the time. In addition to these early aerial photographs, there is a higher-resolution set of images of Phnom Penh from 1993, with incredible details of life in the streets of the capital. This archive is of great interest in many fields, including history, geography, archaeology, urban planning, and ecology. The purpose of the project is to digitize this archive and make it accessible and usable free of charge in Cambodia.
All the images were digitized and supplied to the KHmer Earth OBServation (KHEOBS) laboratory to create orthophotographs. Processing has been completed for the municipality of Phnom Penh, for the years 1953 and 1993. In order to make these images viewable by anyone, a web application, GeoCambodia, was developed to visualize Cambodia then (past) and now (present). The user-friendly interface includes an interactive slider to navigate and compare the old 1953 and 1993 aerial orthophotographs with the recent Google Earth images. Also, vector outlines of buildings from 1993, produced by the Atelier Parisien d’Urbanisme (APUR), have been integrated to enable visitors to click on a building and view the APUR’s descriptive architectural sheets. Other functions and the extension of the aerial images to the whole of Cambodia are still to come in this interface.
GeoCambodia.org targets anyone who is interested in aerial and satellite imagery and how Cambodia evolves through time and space, especially geography enthusiasts.
“Geographic object-based image analysis with Orfeo toolbox for detecting illegal cultivation on public land”
Yong Huh;
Poster Presentations
The illegal use of public lands poses significant problems for environmental conservation and land management. This study employs publicly available spatial data provided by government agencies, together with QGIS and the Orfeo Toolbox (OTB), to detect and monitor such illegal activities. By integrating high-resolution aerial imagery with the Geographic Object-Based Image Analysis (GEOBIA) provided by the OTB, land use activities such as the construction of buildings or the cultivation of crops can be extracted from the imagery and then compared with land management spatial data in the QGIS environment. The method includes segmenting the imagery into meaningful spatial objects with the GEOBIA technique, accessing public administration data, geo-referencing the data, and comparing the extracted objects against the spatial data with spatial queries using QGIS functions to identify illegal activities on public lands. The proposed method was applied to the Gangwon region of South Korea, and the accuracy evaluation confirmed the potential for automating public land management with government-provided public spatial data.
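A hedged sketch of the GEOBIA segmentation step with the Orfeo Toolbox Python bindings; the input/output paths and the minimum-size value are placeholders.

```python
# Mean-shift segmentation of aerial imagery into polygon objects with OTB.
import otbApplication as otb

seg = otb.Registry.CreateApplication("Segmentation")
seg.SetParameterString("in", "aerial_ortho.tif")       # high-resolution imagery
seg.SetParameterString("mode", "vector")               # polygon output
seg.SetParameterString("mode.vector.out", "segments.shp")
seg.SetParameterString("filter", "meanshift")          # mean-shift segmentation
seg.SetParameterInt("filter.meanshift.minsize", 100)   # drop tiny objects
seg.ExecuteAndWriteOutput()

# The resulting polygons can then be loaded into QGIS and compared against
# cadastral/land-management layers with spatial queries (e.g., intersects).
```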
“Geospatial climate and environmental monitoring for health surveillance”
Vincent HERBRETEAU;
General Track
Introduction/Background:
The consequences of climate and environmental changes on health are now obvious to communities, institutions and researchers alike. The impact of these changes must now be considered in an operational way in health management, in order to anticipate their effects, prevent them or mitigate them where possible. In practice, there is very little routine real-time use of space observation data in the public health sector, despite the increasing availability of space data. Indeed, space observation technologies have been constantly evolving since the 1970s, and now offer a wide range of data at different spatial, temporal and radiometric resolutions. More recently, access to data acquired by satellite has greatly improved, with free, massive data and easier processing. This offers the possibility of supporting health surveillance at various scales, which will be explored in this presentation using different examples from South-East Asia.
Main Aim/Purpose:
This presentation aims at raising the issue of integrating environmental and climate change indicators into health monitoring, by presenting practical tools and case studies that are operational in Southeast Asia. It will provide an update on the needs for further operational implementation that will benefit health monitoring.
Methodology and Findings:
The presentation will focus first on the development of a web platform that models suitable climate and environmental conditions for leptospirosis through Earth observation, over the agglomeration of Yangon, Myanmar. Leptospirosis is a bacterial zoonosis that remains rarely diagnosed in Southeast Asia despite the high morbidity shown in several active investigations. It is strongly associated with water and seasons, with epidemics following heavy rainfall and flooding episodes. Within the ECOMORE 2 project (coordinated by Institut Pasteur and funded by the French Agency for Development, AFD), the locations of confirmed leptospirosis cases (vs. non-leptospirosis controls) enrolled in 2019 and 2020 were analyzed retrospectively. Time series of vegetation, water, and moisture indices from Sentinel-2 satellite imagery (available at 10 m spatial resolution, every 5 days, from the European Space Agency's Copernicus Programme) were produced to describe the dynamics of the environment around the locations of residence. This process relies on the Sen2Chain processing chain, developed in Python and openly available (https://framagit.org/espace-dev/sen2chain). The most relevant indices were used to build a spatiotemporal model predicting positive vs. negative locations. This model was spatialized on landscape units that are homogeneous in terms of land use and that cover the whole study area. The acquisition of Sentinel-2 images, their processing, and the modelling were then automated to run as soon as a new image becomes available (every 5 days).
An online platform named LeptoYangon (https://leptoyangon.geohealthresearch.org/) was developed with R and R-Shiny to display this dynamic mapping of suitable environments and inform the epidemiologists and physicians of the study, within the ClimHealth project (funded by CNES and accredited by the Space Climate Observatory International Initiative). This fully automated tool allows retrospective consultation at any date since the first Sentinel-2 image became available in March 2016 (over 7 years). By clicking on the map, the user can select a landscape unit and view the temporal dynamics of the risk for that unit (i.e. whether the risk is increasing or decreasing). The user can also view the vegetation, water and moisture indicators to examine the environmental data more specifically. The platform was designed for epidemiologists and physicians to visualize the most at-risk areas and those where the risk is increasing, in order to raise physicians' awareness of leptospirosis (often confused with other fevers).
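As a minimal sketch of the index computation underlying such time series, the following uses the standard normalized-difference formulas for NDVI, NDWI, and NDMI from Sentinel-2 bands; synthetic arrays stand in for the real reflectance rasters.

```python
# Vegetation, water, and moisture indices from Sentinel-2 band math.
import numpy as np

rng = np.random.default_rng(1)
# Surface reflectance stand-ins for Sentinel-2 bands (values in 0-1 range)
b03_green, b04_red, b08_nir, b11_swir = (
    rng.uniform(0.01, 0.6, (100, 100)) for _ in range(4)
)

def normalized_difference(a, b):
    return (a - b) / (a + b + 1e-10)  # small epsilon avoids division by zero

ndvi = normalized_difference(b08_nir, b04_red)    # vegetation vigor
ndwi = normalized_difference(b03_green, b08_nir)  # open water (McFeeters)
ndmi = normalized_difference(b08_nir, b11_swir)   # canopy/soil moisture

print("mean NDVI:", round(float(ndvi.mean()), 3))
```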
Discussion:
Implementing this tool in other territories faces 1) methodological challenges regarding the volume of satellite data to be processed and 2) the need for detailed knowledge of the ecology of leptospirosis and of exposure factors to adapt the models to different contexts. However, this already operational tool opens the way to the development of climate and environmental monitoring systems that increase the vigilance of healthcare workers and populations to the risk of leptospirosis. It also shows the relevance of developing similar tools for other diseases associated with climate and environment. At a country or regional scale, it is mainly meteorological variations and climatic anomalies that are relevant to the surveillance of certain diseases, such as dengue fever. The presentation will conclude by reviewing the development of a national early warning system in Cambodia based on the acquisition of such climatic data.
“Hazard Map Game: Learn and Play with Open Data for a New Approach to Disaster Preparedness for Kids”
SUDA;
Poster Presentations
This presentation introduces an innovative educational game designed to teach children how to assess disaster risks based on geographical features and hazard maps. Utilizing open data and interactive digital signage, the "Hazard Map Game" transforms traditional paper-based hazard map education into an engaging, digital learning experience. Through this game, children can intuitively learn about various disaster risks such as tsunamis, floods, and landslides, while competing in quizzes and earning points. The game aims to deepen children's understanding of disaster preparedness and risk assessment, fostering a generation better prepared to manage natural hazards.
“Historical Analysis of Post-Monsoon Rice Fields in Myanmar with Optical and Radar Data”
Hafsah Fatihul Ilmy, Sarah Kanee, Daniel Marc dela Torre;
Poster Presentations
Mapping and tracking rice cultivation is crucial for agricultural planning and food security, particularly in Myanmar, where rice is one of the major crops. Myanmar has been in conflict in recent years, leaving a large share of its population vulnerable to food insecurity. However, the political situation has made it difficult to acquire reliable estimates of agricultural production in several areas of the country. Alternative sources of information, such as satellite imagery and remote sensing, are needed to supply accurate data for crop management, to aid humanitarian agencies in prioritizing the distribution of food aid, and to better support affected communities.
This study leverages Google Earth Engine’s open data and tools to map post-monsoon rice fields in Myanmar using optical and radar data from Sentinel-1 and Sentinel-2 from 2018 to 2021. The primary objective is to generate comprehensive maps of rice fields, revealing patterns and changes in rice production over the years during the post-monsoon season. This analysis provides insights into the impacts of climate variability and agricultural policies, aiming to support sustainable practices and enhance food security.
The study focused on eight main rice-growing states and regions in Myanmar, analyzing rice phenology and cultivation practices. The methodology involved combining radar data from Sentinel-1, which penetrates clouds and aids in detecting rice phenology, with optical data from Sentinel-2, which offers spectral information for identifying vegetation and understanding growth stages. The satellite imagery underwent preprocessing to align spatially, reduce cloud cover, and correct atmospheric effects.
Using local knowledge of rice, training datasets were prepared and interpreted in Google Earth Engine and supplemented with limited field campaigns. To help capture growth stages and distinguish rice fields from other crops, spectral indices — such as the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI) — were included as well as topographical data such as elevation and slope.
A random forest classifier was then trained to create rice probability maps, and a probability threshold of 60 percent was used to delineate rice cultivation. The model demonstrated an average overall accuracy of approximately 87 percent. The estimated rice area was 1,265,003 hectares in 2018, 1,340,308 hectares in 2019, 1,197,913 hectares in 2020, and 1,175,431 hectares in 2021. According to our estimates, Ayeyarwady consistently cultivates around 400,000 hectares of rice a year, making it the region with the highest rice production. The model results are comparable to published government figures, providing additional validation for this method as a reliable and efficient way to monitor rice production. The study revealed decreasing post-monsoon rice areas, with notable exceptions that may be attributed to climate variability, transplanting timing, and market shifts. Understanding these trends is essential for developing adaptive strategies that can mitigate the impacts of these factors on rice production and ensure food security.
This work was implemented under the SERVIR Southeast Asia program, a joint USAID and NASA initiative. To promote transparency and accessibility, seasonal rice area estimates are published and available at SERVIR ADPC Publications (https://servir.adpc.net/publications). Additionally, rice maps can be accessed through the Myanmar Landscape Monitoring Dashboard (https://myanmar-me-servir.adpc.net), a public portal designed to disseminate this crucial information. The integration of optical and radar imagery in Google Earth Engine provides an effective approach for detecting post-monsoon rice and underscores the benefits of open-access data for advancing geospatial analysis and promoting sustainable agricultural practices, most importantly in data-scarce or conflict-affected regions. This approach also offers a scalable and replicable model for other regions facing similar challenges. The use of advanced remote sensing technologies and machine learning algorithms represents a significant step forward in agricultural monitoring and planning, paving the way for more resilient and sustainable food systems.
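The sketch below illustrates the core classification step in the Earth Engine Python API, under stated assumptions: the training-point asset, band choices, date window, and classifier parameters are placeholders rather than the authors' exact configuration.

```python
# Hedged sketch of a Sentinel-1 + Sentinel-2 random forest rice
# classification in Google Earth Engine; asset paths and parameters
# are illustrative only.
import ee
ee.Initialize()

samples = ee.FeatureCollection("users/example/rice_training")  # hypothetical labeled points

# Post-monsoon composites of Sentinel-1 VH backscatter and Sentinel-2 indices
s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
      .filterDate("2020-11-01", "2021-03-31")
      .filter(ee.Filter.eq("instrumentMode", "IW"))
      .select("VH")
      .median())
s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterDate("2020-11-01", "2021-03-31")
      .median())
ndvi = s2.normalizedDifference(["B8", "B4"]).rename("NDVI")
ndwi = s2.normalizedDifference(["B3", "B8"]).rename("NDWI")
stack = s1.addBands(ndvi).addBands(ndwi)

training = stack.sampleRegions(collection=samples, properties=["is_rice"], scale=10)
clf = (ee.Classifier.smileRandomForest(200)
       .setOutputMode("PROBABILITY")
       .train(features=training, classProperty="is_rice",
              inputProperties=stack.bandNames()))
rice = stack.classify(clf).gt(0.6)  # 60% probability threshold, as in the study
```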
“How a FOSS4G-Born Business Grew in Japan: The Journey of MIERUNE”
Yasuto FURUKAWA;
General Track
When considering the sustainability of open source as a future digital public good, the cycle of success and contribution in the business sector becomes crucial.
MIERUNE was born from the FOSS4G community in 2016 and has spent the past eight years solving client challenges as a location-based systems integrator, growing steadily in the Japanese market.
Through these business activities, MIERUNE not only actively gives back to the FOSS4G community—both technically and financially—but also creates local employment for GIS engineers, helping to build a sustainable society.
In this presentation, we will share specific examples of the challenges we have faced, with the aim of supporting the growth and development of FOSS4G companies in the Asian region and thereby contributing to the sustainability of the whole community.
“Hydrogeophysical Analysis of Vertical Electrical Soundings for Groundwater Potential and Aquifer Vulnerability Evaluation in the Federal Capital Territory, Abuja, Nigeria”
DANLAMI IBRAHIM;
Academic Track (Oral)
According to the United Nations World Water Development Report, groundwater accounts for 26% of the world's renewable freshwater, with around 2.5 billion people relying primarily on it for basic water needs. The most realistic and cost-effective strategy to increase universal access to clean water, meet the 2030 Sustainable Development Goals (SDGs), and minimize climate change impacts is the broad exploitation and management of groundwater. The study area is Nigeria's capital, Abuja, generally characterized by moderate precipitation and few surface water sources. The city's water treatment plant, designed 34 years ago with a capacity of 10,000 cubic meters of treated water per hour, was intended to support a population of 500,000 people. However, due to population growth and urbanization, the water supply no longer meets demand. Groundwater demand and consumption in Abuja have increased significantly over the last decade due to rapid population expansion, urbanization, and industrialization. Understanding groundwater potential and aquifer vulnerability is therefore critical for sustainable resource management.
Geologically, Abuja is underlain by Precambrian rocks of the Nigerian Basement Complex, which cover approximately 85% of the land surface, and sedimentary rocks, which cover approximately 15%. In the study area, four significant lithologic units are visible; these include the Older Granites, the Metasediments/Metavolcanics, the Migmatite-Gneiss Complex, and the Nupe sandstones of the Bida Basin, which occupies the southwestern region of the territory.
This study aims to map groundwater potential and aquifer vulnerability zones using a hydrogeophysical approach that combines geoelectrical resistivity, through vertical electrical sounding (VES), with geographic information system (GIS) techniques. With a maximum current electrode separation (AB/2) of 100 m, the Schlumberger electrode configuration was used to acquire field resistivity data at 823 locations across the study area using a DC resistivity meter (Campus Ohmega Ω).
The resistivity method works by passing an electric current into the ground through two electrodes and measuring the resulting potential difference across two other electrodes. The electrode spacing is gradually increased while the center point of the electrode array remains fixed. As the current electrode spacing grows, the current penetrates deeper into the ground, so the apparent resistivity reflects the resistivity of the deeper layers as well. The resistance is estimated as the ratio of potential difference to current, in ohms (Ω). The absolute coordinates of the survey (VES) points were determined using a global positioning system (GPS).
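As a worked example of this measurement principle, the snippet below converts a Schlumberger reading into apparent resistivity using the standard geometric factor; the input values are illustrative, not survey data.

```python
# Apparent resistivity for a Schlumberger array (illustrative values).
import math

def apparent_resistivity(ab_half, mn_half, delta_v, current):
    """rho_a = K * (dV / I), with the Schlumberger geometric factor
    K = pi * (L^2 - l^2) / (2 l), where L = AB/2 and l = MN/2 (metres)."""
    k = math.pi * (ab_half**2 - mn_half**2) / (2.0 * mn_half)
    return k * delta_v / current

# Example: AB/2 = 100 m, MN/2 = 10 m, dV = 5 mV, I = 50 mA
print(apparent_resistivity(100.0, 10.0, 0.005, 0.05), "ohm-m")
```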
Three to five subsurface geoelectrical layers were identified in the research area with the aid of IPI2Win software. Vertical electrical sounding (VES) data are often interpreted using IPI2Win, a user-friendly geophysical software package designed to process resistivity data and generate one-dimensional models of subsurface layers. Layer resistivities and thicknesses were estimated by iterating the model against the observed field data acquired with the Schlumberger array. The H-type sounding curve is the most dominant among the identified curve types.
The interpreted data were used to determine parameters including Depth to Bedrock, Transverse Resistance, Longitudinal Conductance, Reflection Coefficient, and Layer resistivity. Using scaling criteria, the longitudinal conductance was used to determine the aquifer protective Capacity (Vulnerability), and the result revealed the dominance of moderate vulnerability across the study area.
The groundwater potential zones in the research area were characterized using criteria established by previous authors in this field: areas with overburden thickness ≥ 30 m and reflection coefficient < 0.8 were classified as very high groundwater potential; areas with overburden thickness ≥ 13 m and reflection coefficient < 0.8 as high; areas with overburden thickness ≥ 13 m and reflection coefficient ≥ 0.8 as moderate; areas with overburden thickness < 13 m and reflection coefficient ≥ 0.8 as low; and, finally, areas with overburden thickness < 13 m and reflection coefficient < 0.8 as very low. These criteria were implemented as Python code that classifies the area into five groundwater potential zones. The area covered by each zone was calculated after the geospatial analysis: the very high GPZ occupies about 19.70% of the study area, high 20.30%, moderate 20.0%, low 19.74%, and very low 20.31%.
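A hedged reconstruction of this zoning logic is sketched below; it is not the authors' original script, but the thresholds follow the criteria listed above.

```python
# Hedged reconstruction of the groundwater-potential zoning rules
# described in the text (thresholds per the study's criteria).
def groundwater_potential(overburden_m, reflection_coeff):
    if overburden_m >= 30 and reflection_coeff < 0.8:
        return "very high"
    if overburden_m >= 13 and reflection_coeff < 0.8:
        return "high"
    if overburden_m >= 13 and reflection_coeff >= 0.8:
        return "moderate"
    if overburden_m < 13 and reflection_coeff >= 0.8:
        return "low"
    return "very low"  # overburden < 13 m and reflection coefficient < 0.8

print(groundwater_potential(35.0, 0.6))  # -> very high
```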
The Ordinary Kriging (OK) interpolation algorithm was used to generate the layer resistivity, layer thickness, depth-to-bedrock, aquifer vulnerability, and groundwater potential zone maps using the Smart-Map QGIS plugin, which allows interpolated maps to be generated within the QGIS environment. Kriging is an unbiased linear interpolation technique that uses a weighted average of nearby samples to estimate unknown values at unsampled locations, and it is widely regarded as one of the best interpolation methods for spatially varying data. For this study, the resistivity (VES) data were randomly distributed over a large area, with sampling distances between VES points ranging from 0.5 km to 10 km.
This study evaluated groundwater parameters in the study area based on the geo-electric properties of the earth material. The results reveal that weathered/fractured basement and sandstone formations in the study area are substantial aquifer systems that host potable water. Data from some drilled boreholes across the study area were used to cross-validate the VES results against borehole log records. This knowledge aided in a better understanding of aquifer disposition, vulnerability, and potential consequences. The study's findings will provide a geo-database for groundwater potential zones in the Federal Capital Territory (Abuja), with significant implications for sustainable groundwater resource design and management.
“Hyperspectral Remote Sensing Data Analysis for Oil Palm and Nipa Palm Plantation Using EnMAP-Box open-source plugin on QGIS”
Jirawat Daraneesrisuk;
General Track
Spaceborne hyperspectral data can assist in estimating crop yields, predicting crop outcomes, and monitoring crops, which ultimately contributes to loss prevention and food security. EnMAP hyperspectral imagery has recently become available, with acquisitions starting in 2022. This study aims to analyze and classify oil palm and nipa palm plantations using hyperspectral images combined with machine learning algorithms. Random Forest, CatBoost, and LightGBM classifiers were used to automatically map oil palm and nipa palm areas. The full hyperspectral processing workflow was performed in the EnMAP-Box plugin for QGIS. All three machine learning classifiers achieved overall accuracies above 90%, particularly for oil palm and nipa palm plantations. Machine learning can also uncover hidden information in the spectral characteristics.
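A minimal sketch of such a classifier comparison on pixel spectra is shown below; the feature arrays, labels, and hyperparameters are assumed placeholders, and the EnMAP-Box GUI workflow would normally handle the data preparation.

```python
# Hedged sketch: compare three classifiers on hyperspectral pixel spectra
# (X: n_pixels x n_bands, y: class labels); data files are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier

X = np.load("enmap_pixels.npy")   # hypothetical export of band spectra
y = np.load("labels.npy")         # 0: other, 1: oil palm, 2: nipa palm

models = {
    "RF": RandomForestClassifier(n_estimators=300),
    "CatBoost": CatBoostClassifier(iterations=300, verbose=False),
    "LightGBM": LGBMClassifier(n_estimators=300),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")  # per-classifier mean overall accuracy
```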
“Implementation and visualization of a digital twin system for urban noise prediction”
Haneul Yoo, Yooyeol Yim, Taiyoung Kim;
General Track
As digital twin technology matures, demand is growing to move beyond three-dimensional visualization of buildings and terrain alone, toward services (especially decision support) that make otherwise invisible phenomena usable by visualizing various types of sensor data together with the results of expert analysis and prediction.
In particular, Korea aims to provide city-scale housing with a high-quality living environment. Noise is one of the most frequent sources of complaints, and there is strong demand to use noise prediction analysis to inform planning and design decisions in urban planning and design work.
In this presentation, we will introduce a digital twin system that combines 3D spatial information and noise predictive modeling using "OGC standard CityGML" and "open source mago 3DTiler and mago 3DTerrainer developed by Gaia3D" to support urban noise analysis and decision-making.
“Implementing ETL Processes with NDJSON for Spatial Data Integration”
Athitaya Phankhan, Chanakan Pangsapa;
General Track
Effective data management is essential for maximizing the value of spatial data in today’s data-driven landscape. This presentation provides an overview of implementing ETL (Extract, Transform, Load) processes using NDJSON (Newline Delimited JSON) for efficient spatial data integration. We will discuss the importance of robust data management, the benefits of using NDJSON for handling large and complex spatial datasets, and the practical applications of this approach.
Key topics include the steps of the ETL process with NDJSON, from extracting spatial data from various sources, transforming it into usable formats, to loading it into databases such as MongoDB and Elasticsearch. We will highlight the efficiency gains and flexibility provided by NDJSON in streaming and processing spatial data. Additionally, we will cover real-world use cases and best practices for optimizing spatial data integration with NDJSON.
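To make the streaming pattern concrete, here is a minimal, hedged sketch of an NDJSON extract-transform-load step in Python, assuming one GeoJSON feature per line; the file name, MongoDB URI, and collection names are placeholders.

```python
# Hedged sketch of a streaming NDJSON ETL step: one JSON document per
# line, transformed and bulk-loaded in batches. Connection details and
# field names are placeholders.
import json
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
collection = client["gis"]["features"]             # placeholder names

def transform(feature):
    # Example transform: keep geometry, flatten a selected property.
    return {
        "geometry": feature["geometry"],
        "name": feature["properties"].get("name"),
    }

batch = []
with open("input.ndjson", encoding="utf-8") as fp:
    for line in fp:                      # stream line by line, no full load
        if not line.strip():
            continue
        batch.append(transform(json.loads(line)))
        if len(batch) >= 1000:           # bulk-load in batches
            collection.insert_many(batch)
            batch.clear()
if batch:
    collection.insert_many(batch)
```

Because NDJSON is processed line by line, memory use stays flat regardless of dataset size, which is the main efficiency gain discussed above.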
Attendees will gain practical insights into the strategic and technical aspects of utilizing NDJSON in ETL processes, enabling them to implement effective spatial data integration within their organizations.
“Introducing Re:Earth Visualizer - No-Code 3D WebGIS Powered by Cesium -”
Hinako Iseki;
General Track
In this session, we will introduce Re:Earth Visualizer, a 3D WebGIS platform, covering its overview, features, technical components, and use cases.
Re:Earth Visualizer is an open-source tool that allows non-engineers to visualize data on maps without coding and publish the results on the Web. It also includes a plugin system that enables users to develop and add custom functions, similar to QGIS plugins.
We will explain the system architecture, use of advanced web technologies, and integration with CesiumJS. Finally, we will show several use cases demonstrating Re:Earth applications.
“Introduction to istSOS4 and SensorThings API”
Massimiliano Cannata;
Workshop Proposals
istSOS (http://istsos.org) is software designed to support sensor data management, from collection and management through quality assessment to dissemination, using OGC and ISO standard formats. Following the evolution of software libraries and hardware technologies and the wide adoption of IoT, istSOS has been reimplemented in its version 4, named "Things". Continuing its tradition as an OGC-compliant Python implementation, it takes advantage of the latest solutions to support the SensorThings API (STA) specification.
At the end of the workshop, participants will understand the principles of istSOS4 and of the STA standard, will be able to set up an istSOS4 STA service, and will learn how to interact with the service as either a consumer or a producer, using supplementary interfaces or pure Python code.
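As a preview of that interaction, the hedged sketch below reads and writes STA entities with the requests library; the base URL and entity IDs are placeholders, while the resource paths and query options ($orderby, $top) follow the OGC SensorThings API specification.

```python
# Hedged sketch of STA consumer/producer calls; endpoint is hypothetical.
import requests

BASE = "http://localhost:8018/istsos4/v1.1"  # placeholder istSOS4 endpoint

# Consumer: list Things and the latest Observations of a Datastream
things = requests.get(f"{BASE}/Things").json()["value"]
obs = requests.get(
    f"{BASE}/Datastreams(1)/Observations",
    params={"$orderby": "phenomenonTime desc", "$top": 5},
).json()["value"]

# Producer: insert a new Observation linked to Datastream(1)
new_obs = {
    "phenomenonTime": "2024-11-01T10:00:00Z",
    "result": 21.7,
    "Datastream": {"@iot.id": 1},
}
r = requests.post(f"{BASE}/Observations", json=new_obs)
r.raise_for_status()
```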
“Iteration-free methods for Earth observation data time-series reconstruction”
Davide Consoli;
General Track
Clouds, atmospheric disturbances, and sensor failures degrade the quality of Earth observation (EO) data, and of satellite images in particular. Many modeling techniques and statistical analyses applied to EO data require the detection and removal of such aberrations. However, the data gaps created after removing the affected pixels must be imputed with numerical values that resemble the expected uncorrupted ones. Several imputation, or gap-filling, methods available in the literature are based on time-series reconstruction, working only on the temporal dimension of each pixel to impute the missing values. Compared to alternatives that also consider spatially neighboring pixels or data fusion with other sensors, such methods have the advantage of maintaining the same spatial resolution and spectral consistency in the imputed data.
In contrast with methods that only work within a local temporal window, some of these methods take advantage of the whole time series of each pixel to reconstruct each missing value, allowing the full reconstruction of each gappy time series. Nevertheless, such methods, like most-recent-image propagation or linear interpolation, often require an iterative search for available values along the time series. When the time series contains many samples and/or the number of pixels involved is large, applying such methods leads to prohibitive computational costs.
We present in this work a computational framework based on discrete convolution that numerically approximates such methods and does not require iterating over the time series [1]. In addition, the framework's flexibility allows different time-series reconstruction methods to be applied by simply adapting the convolution kernel. The framework has been used to reconstruct the petabyte-scale Landsat Analysis Ready Data (ARD) collection provided by the Global Land Analysis and Discovery (GLAD) team [2]. New research fronts include extending the method to data-fusion approaches that combine time series from multiple sensors, maintaining the highest spatial resolution while also using the temporal information provided by all the sensors. The code, developed in Python with a C++ backend to guarantee usability and high computational efficiency, is openly available at https://github.com/openlandmap/scikit-map.
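The sketch below gives a simplified, hedged illustration of the idea: a normalized discrete convolution along the time axis imputes each gap from nearby valid samples in one vectorized pass, with no per-pixel iteration; the kernel shown is illustrative, whereas the published framework generalizes this by adapting the kernel to emulate different reconstruction methods.

```python
# Simplified illustration (not the scikit-map implementation): normalized
# convolution fills NaN gaps from temporally nearby valid samples.
import numpy as np
from scipy.ndimage import convolve1d

def gap_fill(series, kernel):
    """series: (time, pixels) array with NaN gaps; kernel: 1-D weights."""
    valid = np.isfinite(series)
    zeros = np.where(valid, series, 0.0)
    num = convolve1d(zeros, kernel, axis=0, mode="nearest")
    den = convolve1d(valid.astype(float), kernel, axis=0, mode="nearest")
    out = series.copy()
    gaps = ~valid & (den > 0)
    out[gaps] = num[gaps] / den[gaps]   # weighted mean of valid neighbors
    return out

t = np.linspace(0, 4 * np.pi, 64)
x = np.sin(t)[:, None].repeat(3, axis=1)      # three synthetic pixel series
x[10:14, :] = np.nan                          # simulate a cloud gap
kernel = np.array([1.0, 2.0, 4.0, 2.0, 1.0])  # distance-weighted window
print(np.nanmax(np.abs(gap_fill(x, kernel) - np.sin(t)[:, None])))
```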
[1] Consoli, Davide & Parente, Leandro & Simoes, Rolf & Murat & Tian, Xuemeng & Witjes, Martijn & Sloat, Lindsey & Hengl, Tomislav. (2024). A computational framework for processing time-series of Earth Observation data based on discrete convolution: global-scale historical Landsat cloud-free aggregates at 30 m spatial resolution. https://doi.org/10.21203/rs.3.rs-4465582/v1
[2] Potapov, Peter & Hansen, Matthew & Kommareddy, Indrani & Kommareddy, Anil & Turubanova, Svetlana & Pickens, Amy & Adusei, Bernard & Tyukavina, Alexandra & Ying, Qing. (2020). Landsat Analysis Ready Data for Global Land Cover and Land Cover Change Mapping. Remote Sensing, 12(3), 426. https://doi.org/10.3390/rs12030426
“JSON Style Map: Enhancing Flexibility and Efficiency in Map Data Visualization”
PEERANAT PRASONGSUK, sattawat arab, Arissara Sompita;
General Track
In today's rapidly evolving field of map data visualization, the use of vector tiles is increasingly prevalent. Vector tiles offer flexible and efficient data display, but they require a JSON style document to define their visual representation. The JSON style plays a crucial role in formatting the data, ensuring that presentations are both diverse and user-friendly.
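For concreteness, the hedged sketch below assembles a minimal style document of the kind discussed, binding a vector-tile source to per-layer drawing rules; the source URL and layer names are illustrative, and the structure follows the common vector-tile style specification used by MapLibre-style renderers.

```python
# Hedged sketch of a minimal vector-tile style document built in Python;
# URLs and layer names are placeholders.
import json

style = {
    "version": 8,
    "sources": {
        "basemap": {
            "type": "vector",
            "tiles": ["https://example.com/tiles/{z}/{x}/{y}.pbf"],
        }
    },
    "layers": [
        {   # draw the 'water' source-layer as light-blue fills
            "id": "water",
            "type": "fill",
            "source": "basemap",
            "source-layer": "water",
            "paint": {"fill-color": "#9ecbff"},
        },
        {   # draw the 'roads' source-layer as thin grey lines
            "id": "roads",
            "type": "line",
            "source": "basemap",
            "source-layer": "roads",
            "paint": {"line-color": "#888888", "line-width": 1.2},
        },
    ],
}
print(json.dumps(style, indent=2))
```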
“kari-sdm: Advanced Species Distribution Modeling using PyTorch and scikit-learn”
Lee, Jeongho, Byeong-Hyeok Yu, Chunghyeon Oh, Soodong Lee, Cho Bonggyo;
General Track
Species Distribution Modeling (SDM) is a statistical methodology used to predict the spatial and temporal distribution of species based on environmental conditions that are conducive to their survival and reproduction. This modeling approach leverages spatially explicit species occurrence records alongside various environmental covariates, including climate, terrain, and land cover, as input variables, with the aim of quantifying and mapping species-environment interactions. SDM has become a critical tool in ecological research and conservation biology for understanding and predicting species distribution patterns. A range of machine learning and deep learning techniques can be employed in SDM, such as Logistic Regression (LR), Random Forest (RF), Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), and Generative Adversarial Network-CNN (GAN-CNN). Despite the availability of these techniques, there is a lack of a comprehensive application that integrates these algorithms for species distribution modeling. To address this gap, this paper introduces a new tool, kari-sdm, which enables users to perform SDM utilizing a variety of techniques. Kari-sdm supports LR, RF, MLP, CNN, and GAN-CNN algorithms, all based on open-source frameworks PyTorch and scikit-learn. Additionally, it facilitates all necessary preprocessing steps, from data collection, cleaning, transformation, spatial preprocessing, and environmental variable selection, to data splitting. The tool also provides functions for model evaluation, result visualization, and cross-validation. The primary goal of kari-sdm is to assist ecologists in modeling species distributions, interpreting results, and developing informed conservation and management strategies.
“Land Use Detection Using Artificial Intelligence”
Amritesh Hiras, Anuj Sharad Mankumare, Akshith Mynampati, D ARUNA PRIYA;
General Track
Automating land use surveys in rural areas using advanced AI techniques can significantly enhance the efficiency and accuracy of identifying various land features. This project focuses on utilizing the YOLOv8 framework for land use detection through image segmentation and object detection.
Traditional land use surveys in rural areas are time-consuming and often prone to inaccuracies due to manual methods. Leveraging artificial intelligence, particularly deep learning models, presents a promising solution to streamline this process and improve data reliability. The project addresses the challenge of automating land feature identification, which includes detecting houses, rivers, roads, and vegetation from visual data captured by satellite/drone. Accurate identification of these features is crucial for effective rural planning and development, as it helps in resource allocation and infrastructure development. The limitations of conventional methods, such as the need for extensive human labor and susceptibility to human error, further highlight the necessity for innovative solutions like AI-driven land use surveys.
The primary aim of this study is to develop and train an AI model capable of accurately detecting and segmenting various land features in rural landscapes. By doing so, the project seeks to demonstrate the applicability of AI in enhancing rural development planning and management. The specific objectives include creating a reliable dataset of annotated aerial images, optimizing a deep learning model for high accuracy, and evaluating the model's performance across different types of land features. Ultimately, the project aims to provide a scalable and efficient tool that can assist policymakers, researchers, and rural development planners in making informed decisions.
The methodology involved several key steps to ensure the robustness and accuracy of the AI model. First, a diverse dataset of aerial images was collected, encompassing various rural landscapes with distinct features such as houses, rivers, roads, and farms/vegetation. These images were meticulously annotated using specialized tools to create ground truth data for training and validation. Data augmentation techniques, including rotations, flips, and color adjustments, were employed to expand the dataset and improve the model's generalization capabilities.
The YOLOv8 model was selected for its state-of-the-art performance in object detection and segmentation tasks. YOLOv8's architecture is well-suited for real-time applications due to its balance between accuracy and speed. The model was trained using the annotated dataset, with hyperparameters optimized to enhance its detection and segmentation performance. Training was conducted on a high-performance computing setup, leveraging GPU acceleration to expedite the process.
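A hedged sketch of these training and inference calls in the ultralytics package is shown below; the dataset YAML, image file, and hyperparameters are placeholders rather than the project's actual settings.

```python
# Hedged sketch of YOLOv8 segmentation training/inference with the
# ultralytics package; dataset config and parameters are illustrative.
from ultralytics import YOLO

model = YOLO("yolov8m-seg.pt")          # pretrained segmentation weights
model.train(
    data="landuse.yaml",                # hypothetical dataset config
    epochs=100,
    imgsz=640,
    batch=16,
)
results = model.predict("village_tile.png", conf=0.4)
for r in results:
    print(r.boxes.cls, r.boxes.conf)    # detected classes and confidences
```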
The results demonstrated the model's high precision in detecting and segmenting land features. The YOLOv8 model achieved notable accuracy metrics across various classes. The segmentation masks generated by the model closely matched the ground truth annotations, indicating its effectiveness in distinguishing different land features.
The findings of this study underscore the potential of AI in transforming rural development practices. The successful application of the YOLOv8 model for land use detection highlights its capability to deliver precise and actionable insights. The practical implications of this project are significant, offering a scalable solution for land survey automation, which can greatly assist policymakers and rural planners. The integration of such AI-driven methodologies can lead to more informed decision-making, efficient resource allocation, and ultimately, the betterment of rural communities.
The study also highlights several challenges and limitations encountered during the project. Data collection in rural areas can be logistically challenging, often requiring collaboration with local authorities and stakeholders. Ensuring the diversity and quality of the dataset is crucial, as biased or insufficient data can affect the model's performance. Additionally, the model's accuracy is dependent on the quality of annotations, which requires meticulous effort and expertise.
Despite these challenges, the project demonstrates that AI can significantly enhance the accuracy and efficiency of land use surveys. The use of deep learning models like YOLOv8 can reduce the reliance on manual methods, providing a more reliable and scalable solution. However, continuous efforts are needed to improve the dataset, address potential biases, and refine the model to handle more complex scenarios.
In conclusion, this project not only advances the field of AI in rural development but also sets a precedent for future studies aiming to leverage AI for similar applications. The integration of AI in land use surveys can revolutionize the way rural areas are planned and developed, leading to more sustainable and efficient outcomes. The success of this project inspires further research and development in AI-driven solutions for rural development, with the potential to make a lasting positive impact on rural communities worldwide.
“Land Use Land Cover Classification Automation Development using Free and Open-Source Software”
Thantham Khamyai;
General Track
Automating land use and land cover (LULC) classification is crucial for enhancing the efficiency and accessibility of LULC monitoring, supporting informed decision-making in urban planning, environmental management, and sustainable development across diverse geographical contexts. This work aims to streamline the process of satellite data acquisition, preprocessing, and seasonal LULC classification through the integration of artificial intelligence (AI) models.
The proposed system will consist of two main components: (1) an automated satellite data fetching and preprocessing module, and (2) an AI-driven LULC classification module. The first component will leverage open-source tools to access and prepare satellite imagery from freely available sources, such as Landsat and Sentinel missions. This module will handle tasks including data download, atmospheric correction, cloud masking, and image compositing.
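As one possible shape for this fetching module, the hedged sketch below queries an open Sentinel-2 catalogue through a STAC API with pystac-client; the endpoint, area of interest, and cloud-cover filter are illustrative assumptions, not the system's final design.

```python
# Hedged sketch of automated scene discovery via a STAC API; the endpoint
# shown (Microsoft Planetary Computer) is just one open Sentinel-2 catalogue.
from pystac_client import Client

catalog = Client.open("https://planetarycomputer.microsoft.com/api/stac/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[100.3, 13.5, 100.9, 14.1],          # example area of interest
    datetime="2023-11-01/2024-02-28",          # one season
    query={"eo:cloud_cover": {"lt": 20}},      # pre-filter cloudy scenes
)
items = list(search.items())
print(len(items), "scenes found")
for item in items[:3]:
    print(item.id, item.properties["eo:cloud_cover"])
```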
The second component will employ state-of-the-art machine learning algorithms, particularly deep learning models, to perform seasonal LULC classification. The system will be trained on diverse datasets to recognize and categorize various land cover types across different seasons, accounting for temporal variations in vegetation, urban expansion, and other dynamic landscape features.
By automating these processes, the proposed system aims to significantly reduce the time and expertise required for LULC analysis, making it more accessible to researchers, urban planners, and environmental managers. The use of free and open-source software ensures that the developed tools will be widely available and customizable for different geographical contexts and research needs.
This contributes to the advancement of remote sensing applications and supports informed decision-making in land management, urban planning, and environmental conservation efforts.
“LEVERAGING GEOSPATIAL DATA FOR TRACKING WATER FROM SPACE USING PYTHON PROGRAMMING”
J. Indu;
Keynote Talk
Water does not flow according to geographical boundaries; it follows elevation. Inland waters such as rivers and lakes are crucial natural resources that play an indispensable role in the global hydrological cycle. Still, their conventional monitoring is constrained by poor spatial coverage. Though satellites help improve coverage, the hydraulic properties of rivers often change at a rate faster than the temporal sampling of satellites. This talk introduces two novel web applications for rivers and lakes built using geospatial datasets and Python. The first seamlessly extracts time series of water surface area for rivers, lakes, and reservoirs from Sentinel-1 VV-polarized SAR data. The second integrates dynamic lake water extents to improve estimates of lake water surface temperature, thereby challenging conventional norms.
“Leveraging spatial autocorrelation information of remotely sensed evapotranspiration for mitigating the impact of data uncertainty on hydrological modeling”
Yan He;
Academic Track (Oral)
Global remotely sensed evapotranspiration (RS-ET) products are increasingly pivotal in enhancing the accuracy and scope of hydrological modeling, particularly in regions where traditional ground-based streamflow data are sparse or non-existent. These products play a pivotal role in understanding the dynamics of the climate-soil-vegetation system, where evapotranspiration constitutes a substantial portion of water loss following precipitation events. Their extensive spatial coverage and accessibility have significantly expanded the capability to predict hydrological dynamics in ungauged basins, offering insights that were previously inaccessible through in-situ observations alone.
Despite their benefits, RS-ET products are tempered by inherent uncertainties, primarily stemming from biases that vary across datasets and geographical regions. These biases manifest as either overestimation or underestimation compared to ground truth measurements, posing challenges for the accurate calibration of hydrological models. Traditional approaches in hydrological modeling commonly utilize absolute ET values directly derived from RS-ET products for model calibration, without accounting for potential biases. However, the reliability of such direct calibrations is contingent upon the quality and accuracy of the RS-ET data, which remains uncertain in many cases.
To address these challenges, this study shifts the focus from absolute ET values to utilizing spatial structural information embedded within RS-ET data, particularly emphasizing spatial autocorrelation, which refers to the tendency of ET values at nearby locations to exhibit similarities. Employing the local Moran's I index, a spatially weighted autocorrelation statistic that is insensitive to biases, we capture the spatial structure of ET data across sub-basins. Additionally, a composite Kling-Gupta Efficiency (KGE) metric, integrating absolute ET values and spatial autocorrelation information in a weighted manner, is employed for calibrating hydrological models.
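The hedged sketch below shows how these two ingredients might be computed with the PySAL stack; the sub-basin file, column names, and the equal weighting of the two KGE terms are placeholders rather than the study's exact configuration.

```python
# Hedged sketch of local Moran's I plus a composite KGE objective;
# input file, columns, and the 0.5/0.5 weighting are illustrative.
import numpy as np
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran_Local

basins = gpd.read_file("subbasins.gpkg")       # hypothetical sub-basin polygons
w = Queen.from_dataframe(basins)               # contiguity-based spatial weights
mi_obs = Moran_Local(basins["et_rs"], w).Is    # local Moran's I of RS-ET
mi_sim = Moran_Local(basins["et_sim"], w).Is   # same statistic for simulated ET

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Composite objective: weighted blend of absolute-value agreement and
# spatial-autocorrelation agreement.
composite = 0.5 * kge(basins["et_sim"], basins["et_rs"]) + 0.5 * kge(mi_sim, mi_obs)
print(composite)
```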
Three calibration schemes are thus designed to analyze the effectiveness of spatial autocorrelation in hydrological modeling: one focusing solely on absolute ET values, another solely on spatial autocorrelation, and a combined approach. Testing these schemes for hydrological modeling with four RS-ET products in the Meichuan basin—MOD16, GLASS, and SSEBop with large biases, and PMLV2 with minimal bias—the study demonstrates varying effectiveness across these schemes.
For RS-ET products with substantial biases, hydrological modeling using spatial autocorrelation proved to be the optimal solution. It achieved a higher KGE and lower Percent Bias (PBIAS) on simulated streamflow compared to using the absolute ET values alone or the combined approach. Conversely, for RS-ET products with minimal biases, hydrological models calibrated using the combined approach were the preferred solutions. This approach can yield a high KGE, similar to that obtained from spatial autocorrelation information alone, while maintaining a reasonable PBIAS. Therefore, we recommend calibrating hydrological models using both absolute ET values and spatial autocorrelation information in regions where ground ET observations are available. This approach enhances the robustness and reliability of hydrological predictions, mitigating the influence of biases inherent in RS-ET products.
In contrast, in scenarios where the quality of RS-ET products is unknown, we suggest calibrating using only spatial autocorrelation information, thereby circumventing potential biases and improving model accuracy under such circumstances. Moreover, methodologically, the study contributes by demonstrating the efficacy of the local Moran's I index in capturing the spatial structure of ET data within hydrological sub-basins. This geostatistical measure not only quantifies spatial autocorrelation but also identifies clusters and patterns of ET values, thereby enriching our understanding of spatial variability in hydrological processes.
Furthermore, the comprehensive analysis of the composite KGE index underscores the significant contribution of spatial autocorrelation information to hydrological modeling, surpassing the influence of absolute ET values in enhancing model performance. In conclusion, the spatial autocorrelation-based approach presented in this study represents a significant advancement in the application of global RS-ET products for hydrological modeling. By leveraging spatial structural information and mitigating biases inherent in RS-ET data, this approach not only improves the accuracy of hydrological predictions but also enhances the practical utility of RS-ET products in diverse hydrological contexts. Future research directions may explore additional spatial statistical techniques and incorporate a broader array of RS-ET datasets to further refine and validate these findings across different geographical settings and hydrological conditions.
“LLM (Large Language Model) geospatial python for geospatial analysis in GDAL native environment”
Lawrence Xiao;
Workshop Proposals
We want to advance geospatial data science by first providing an optimal DevOps layer for anyone to build geospatial models and code in a GDAL-native environment, supported by a Copilot- or GPT-like large language model trained/fine-tuned on GDAL and geospatial Python.
Because our proprietary technology is built on an entirely serverless architecture, we can significantly reduce costs and increase accessibility to powerful GIS DevOps infrastructure.
“Localization of FOSS4G Tools and Building an Open Knowledge Platform in Japanese University Education”
Shiori Uehara, Aki Sato;
Poster Presentations
Furuhashi Lab has been working on OSM mapping and mapathons as YouthMappers AGU under the theme of "Participatory Mapping and Social Contribution". Here is a look back at our specific activities in 2024. Over the three months from March to June, we participated in the OSM validation training of UN Maps, and all 12 students in our lab were promoted to intermediate OSM mappers. Based on the knowledge learned, we also created a graphic recording about JOSM validation and published it on GitHub. In April, we participated in the "International Humanitarian Mapathon 2024" and competed with universities and organizations from more than five countries, including USC and UCLA. In June, we held a Wheelmap mapathon to learn how we can use maps to contribute to society. As part of our year-round activities to promote YouthMappers, we are also working on the translation of "Open Mapping towards Sustainable Development Goals". We plan to participate in other mapathons and hackathons in the future.
Throughout our year-long activities, we have faced the challenge that understanding varies widely depending on each individual's prior knowledge and language level. As newcomers to the geospatial information industry, we had little prerequisite knowledge and were unfamiliar with tools and resources such as the QGIS manual, GDAL, and JOSM, which are commonplace for advanced mappers. The most difficult thing for us as Japanese speakers was that the manuals for these tools were mostly in English, and their many technical terms meant we could not understand them even when we read them. It was not easy to keep the manuals close at hand while simultaneously looking at the actual screens and operating the software.
For this reason, this presentation will introduce the usefulness of translation and visualization for problems such as unfamiliarity with computer operation, inability to understand manuals due to lack of domain knowledge, and resistance to learning in a language other than one's native tongue. In particular, we recognize that overcoming language barriers is of paramount importance. As examples, we will discuss the translation of the QGIS manual and GDAL documentation, and the creation of a graphic recording for the JOSM validation training. We publish these deliverables on GitHub to create an open knowledge platform.
First of all, in the rapidly evolving field of geospatial technology, access to comprehensive and understandable documentation is crucial for both new and experienced users. However, language barriers often limit access to valuable resources. To bridge this gap, students from the Furuhashi Lab at Aoyama Gakuin University's "Applied Spatial Information Science III" course are working to localize technical documents for FOSS4G (Free and Open Source Software for Geospatial) tools such as QGIS and GDAL. These tools are widely used for geospatial data manipulation, analysis, and visualization, but much of their documentation is predominantly available in English. By translating these documents into Japanese, we aim to increase accessibility for Japanese-speaking users and contribute to a deeper understanding of geospatial technologies.
Our approach in the course begins with understanding the functionalities of QGIS and GDAL, followed by practical exercises to familiarize participants with basic operations. This practical experience forms the foundation for translating technical documents, helping participants effectively understand the content. We use tools such as Transifex for collaborative translation efforts, ensuring consistency and accuracy across documents. However, the current complexity of registering an account on Transifex poses a challenge. To address this, we have created a Markdown-based "QGIS Documentation Japanese Translation Manual" within a GitHub repository, where students document the steps and share insights, including potential pitfalls. This helps in facilitating collaborative information sharing.
The content of the guide follows the format outlined by the Japan Translation Federation (JTF)’s “Translation Guidelines,” which is essential for the success of translation projects involving open data. By building an open knowledge platform using GitHub, both users and instructors can better understand the tendencies that beginners may encounter with these tools. The FAQ and other resources on this platform allow participants to easily create, edit, and publish markdown documents, helping them mentally simulate the actual working environment. Furthermore, gaining this experience helps foster a culture of open knowledge sharing within the academic community, where students can exchange the skills needed to effectively manage digital documentation.
Regarding GDAL, we focus on translating .po files within GitHub.
This project demonstrates that localization and open knowledge platforms can bridge the gap between technology and language, serving as a gateway to fostering geospatial literacy. We aim to share this project at the FOSS4G International Conference, contributing to the geospatial community and promoting more accessible geospatial information literacy.
Second, Furuhashi Lab continues to input data into OpenStreetMap for emergency rescue efforts and as a contribution to areas without maps.
Creating and providing accurate maps requires not only proper instruction but also mastery of the editing tools used. In addition, using JOSM is an efficient way to input and validate huge amounts of data in OSM without errors.
JOSM (Java OpenStreetMap Editor) is an advanced OSM desktop editor written in Java. Its printed manual is difficult for beginners to understand, and they often have trouble even getting the tool to work in the first place.
Visual information, on the other hand, has the advantage of overcoming language barriers and differences in prerequisite knowledge, and can convey information intuitively. Furuhashi Lab uses the graphic recording method as a means to achieve this.
Twelve students from Furuhashi Lab participated in the "OSM Data Validation Training" sponsored by UN Mappers over a three-month period from March to June. However, students who were not used to working with computers struggled even with installation, and most of the participating students faced problems. The first graphic recording, created under these circumstances, did not capture the essence of the lecture, so it had to be redone. The lecture video was reviewed and the recording newly redrawn, and it is now available to the whole world on GitHub.
Using the example of the graphic recording at JOSM Validation, I will introduce the usefulness of visualization in Japanese university education.
“Mapping land suitability for sugarcane crop with fuzzy AHP and multi-criteria evaluation”
Piyanan Pipatsitee;
Academic Track (Oral)
Mapping land suitability is a critical approach for identifying appropriate land use for site selection and land-use planning. However, climate change exacerbates water shortages and droughts, significantly affecting land suitability and resulting in decreased crop yields, especially for sugarcane. While land suitability is typically evaluated based on multiple criteria such as soil properties, topography, climate, and socioeconomic factors, it is essential to incorporate drought conditions into land suitability mapping to mitigate climate change influences on crop yields. Therefore, this study aimed to map sugarcane land suitability using fuzzy AHP and multi-criteria evaluation approaches in the Northeast region of Thailand.
The study selected six significant criteria for sugarcane land suitability mapping: the ETDI as an agricultural drought index, slope, soil texture, distance from the river, distance from the road, and distance from the sugar mill. The ETDI was assessed by calculating the difference between spatial Potential Evapotranspiration (PET) and actual Evapotranspiration (AET). Spatial PET was analyzed using a PET estimation model based on integrated GNSS-derived Precipitable Water Vapor, processed with goGPS open-source software, along with the MODIS land surface temperature product. Concurrently, the spatial AET was derived from the SEBAL model, utilizing GRASS GIS software.
Subsequently, land suitability for sugarcane cultivation was evaluated by integrating fuzzy AHP and multi-criteria evaluation approaches. The results indicated that two primary factors affected sugarcane cultivation: the ETDI and distance from the river. The ETDI was the most significant factor, with an average weight of 0.66, while the distance from the river had an average weight of 0.34. Other factors, including slope, soil texture, distance from the road, and distance from the sugar mill, did not influence land suitability. The spatial distribution of these factors was consistent throughout the study area.
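As a worked illustration of how AHP pairwise judgments translate into weights, the sketch below extracts the principal eigenvector of a comparison matrix; the single judgment value is illustrative, chosen so the resulting weights land near those reported above.

```python
# Hedged illustration of deriving AHP criterion weights via the
# principal eigenvector; the pairwise judgment is illustrative.
import numpy as np

# Pairwise judgment: ETDI judged ~2x as important as distance-from-river.
A = np.array([
    [1.0, 2.0],
    [0.5, 1.0],
])
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()   # normalize to sum to 1
print(dict(zip(["ETDI", "dist_river"], weights.round(2))))  # ~0.67 / 0.33
```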
Suitable areas for sugarcane were predominantly found in the moderately suitable class (S2; 49.6%), followed by marginally suitable (S3; 36.0%) and highly suitable (S1; 11.2%). Actual sugarcane cultivation areas were mainly in the S3 class (49.0%), followed by S2 (43.2%) and S1 (6.7%). S3 class areas were concentrated in Wang Sam Mo district, Udon Thani province (129 km²), with a sugarcane yield of approximately 60.6 tons/ha. S2 class areas, primarily in Phu Khiao district, Chaiyaphum province (178 km²), yielded about 62.5 tons/ha, while S1 class areas in Phimai district, Nakhon Ratchasima province (30 km²), achieved a higher yield of 63.6 tons/ha.
S2 class areas could potentially be enhanced through irrigation systems and small ponds to mitigate drought risks. Limiting the distance from the river to within 2 km could increase sugarcane yields and promote areas to the S1 class, expanding S1 areas by 2.7 times and raising yields by approximately 1.1 tons/ha (1.8% of S2 yield). Areas classified as S1 exhibit significant potential for sugarcane cultivation expansion due to their underutilization. A total area of 6,519 km² within the S1 class was analyzed for suitability. Nakhon Ratchasima province has the greatest potential (2,272 km², 35%), followed by Khon Kaen (725 km², 11%), Chaiyaphum (592 km², 9%), Udon Thani (519 km², 8%), and Surin (441 km², 7%).
Encouraging a shift from currently cultivated crops (rice, corn, and cassava) to sugarcane in these potential areas is essential for optimal resource utilization. However, farmers often continue rice cultivation due to its traditional significance and shorter growth period, providing quicker income. Government policies should support participatory knowledge transfer on sugarcane cultivation, ensure price guarantees, and facilitate access to credit. Additionally, the high price of sugarcane could incentivize farmers to expand sugarcane cultivation to meet increasing domestic and export demand. Further research on a larger scale, covering the entire country, is necessary to enhance the accuracy of land suitability maps in addressing challenges posed by global climate change.
“Mapping Urban Dynamics: The Role of Data Analysis in Shaping Sustainable Cities”
Sarawut ninsawat;
General Track
The integration of vast data analysis and AI technologies in urban planning represents a significant advancement in managing the complexities of modern cities. This presentation shows the vast potential of these technologies to enhance decision-making, optimize resource allocation, and improve urban sustainability. By understanding and applying these technologies, future urban planners can develop smarter, more efficient, and environmentally friendly cities. The practical applications discussed, including traffic injury risk assessment, human mobility analysis, and carbon emission estimation, demonstrate the tangible benefits of leveraging Big Earth Data and AI in urban planning.
“MEASURING COMPACTNESS IN ELECTORAL DELIMITATION: AN OPEN-SOURCE GIS APPLICATION”
Shailesh Chaure;
Poster Presentations
Electoral delimitation is around the corner in India. Statutory provisions of the Delimitation Act prescribe geographical compactness as the foremost criterion for delimitation. However, the guidelines and methodology of delimitation do not define any methodology for ensuring, evaluating, or measuring compactness, or for effectively implementing the criterion during delimitation.
Compactness ensures better connectivity, communication, public convenience, accessibility and easy movement for the stakeholder population. Delimitation authorities across the world employ varied measures of compactness for evaluation of alternative plans. These are mathematical functions which quantify the irregularities in the shapes and population distribution in the constituencies. These have been acknowledged as a significant check on arbitrariness in the process of redistricting.
An open-source geospatial tool has been developed in QGIS 3.16 for computing and evaluating the compactness of selected representative pre- and post-delimitation assembly constituencies (ACs) of Rajasthan. Four indices - Gibbs, Polsby-Popper (Cpp), relative moment of inertia, and normalized mass moment of inertia (NMMI) - have been identified, which model the dispersion, boundary irregularity, and population distribution aspects of compactness; their performance has been compared and an appropriate combination of measures has been proposed.
The input spatial data include multi-level administrative maps of the ACs joined with population attributes, and pre- and post-delimitation AC boundary vector files. A QGIS Python script has been developed that calculates the point and polygon features required for the various measurements of the selected indices and returns the numerical values of the indices in ASCII text files.
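As an illustration of one such index, the hedged sketch below computes the Polsby-Popper score (4πA/P², equal to 1 for a circle) for constituency polygons with GeoPandas; the input file, column name, and projection are assumptions.

```python
# Hedged sketch: Polsby-Popper compactness for constituency polygons.
# File, column name, and CRS choice are placeholders.
import math
import geopandas as gpd

acs = gpd.read_file("assembly_constituencies.gpkg")  # hypothetical input
acs = acs.to_crs(epsg=32643)                         # project to metres (UTM 43N)
acs["cpp"] = 4 * math.pi * acs.geometry.area / acs.geometry.length ** 2
print(acs[["AC_NAME", "cpp"]].head())                # 1.0 = perfectly compact
```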
The results closely correspond to the visual expression of compactness of the ACs. The open source tool can be employed for delineating geographically compact constituencies. Alternative plans of electoral boundaries can be evaluated for compliance to the prescribed guidelines, effectively reducing arbitrariness in the final plan and enhancing transparency and objectivity in the process in India.
“MSpace.E: Advanced Urban Environment Simulation Platform”
NGUYEN VAN THIEN, Hirofumi Hayashi, iizukatoshiaki, Hirosawa Kunihiko;
General Track
“toeng.net”, which we announced at FOSS4G-ASIA 2023 Seoul, will be launched as “MSpace.E”.
MSpace.E is a comprehensive urban environment analysis platform using 3D city models. This platform integrates simulations of shadows, building surface shadows, noise, and wind, providing an approach to urban planning and environmental assessment.
Key features include 3D visualization using the Re:earth platform, environmental analysis integrated with user-provided construction data in IFC, FBX, GLB, and 3D Tiles formats, and newly added functionality for group management and sharing of analysis results. Users can analyze the environmental factors within a selected area, and by utilizing the PLATEAU 3D city model, more accurate analysis is possible by taking into account existing building structures.
In the case of shadow analysis, users can select analysis options, specify the range, set parameters, and receive the results in the CMZL file format. In addition, users can upload their own construction data and visualize it in 3D on Re:earth. Group management and result sharing functions make team collaboration easier. We also discuss a comparison of the implementation and performance of MSpace.E with the toeng.net prototype published last year. Application areas include urban planning for optimal building placement and public space design, environmental impact assessment of architectural projects, and energy-saving strategy planning at the urban scale.
MSpace.E is a powerful platform for multi-faceted analysis and visualization of complex urban environments. It enables comprehensive urban environment simulation, strongly supporting decision-making for sustainable urban development.
Join us at mspace.apptec.co.jp!
“Multi-Class Oil Palm Tree Detection from UAV Imagery Using Deep Learning”
Aakash Thapa, Teerayut Horanont;
General Track
The Southeast Asia (SEA) region leads the world in palm oil production, with Indonesia, Malaysia, and Thailand collectively contributing over 88% of global production. However, the tropical climate of the SEA region leaves oil palm trees vulnerable to various diseases such as Fatal Yellowing (FY) and Ganoderma boninense. To keep track of productivity, it is crucial to monitor the varying conditions of oil palm trees—such as healthy, dead, yellow, and small—and apply effective pruning techniques to treat affected trees. Manual approaches to oil palm tree detection are expensive, tedious, and prone to inaccuracies. Our study therefore focuses on automating the detection of oil palm trees and their states using a deep learning (DL) algorithm on unmanned aerial vehicle (UAV) imagery. We use YOLOv8, one of the latest open-source models, on the publicly available UAV dataset named MOPAD, which contains training and validation sets. The performance of the model is compared with other state-of-the-art object detectors. In addition, a prototype web application was developed to demonstrate the robustness of the model in adverse real-world conditions and its potential for deployment.
“Mysuru 2034: An Integrated Geoinformatics Approach for Real Estate Valuation and Urban Growth”
CHANDAN M C, Shreyanka M, Nikitha K, Tejashvi Swamy, Pramath Rathithara HP, Dr. Kul Vaibhav Sharma;
Academic Track (Oral)
Over recent years, Mysore, a district in Karnataka, India, has seen remarkable urban growth and infrastructural development, transforming its landscape significantly. This study examines how this urban expansion influences property values, using data from 2014 and 2024 to forecast property values for 2034 with a Random Forest regression model. We focus on 110 key locations, looking at factors such as closeness to the central business district, railway station, bus stand, and local amenities like schools and hospitals. By finding the strongest correlations between these elements, we establish a relationship between property values and these factors to predict future values.
Our findings highlight Mysore's vibrant economic growth and its potential for sustained progress. These insights are crucial for the real estate market, providing valuable information to make informed decisions about future property values amid ongoing urban development. By analyzing how urban growth impacts property values through sophisticated statistical models, this study sheds light on how infrastructural improvements and strategic locations drive real estate trends. The expected significant rise in property values by 2034 underscores Mysore's economic dynamism and its appeal as an emerging urban hub.
We conducted a thorough analysis of various factors affecting property values, focusing on proximity to essential services and transportation hubs. These elements significantly influence property desirability and accessibility. Our use of the Random Forest regression model enables accurate predictions of future property values by understanding complex relationships between these variables. The strong correlation between guideline values and market values provides a reliable basis for predicting future real estate trends. This correlation is essential for stakeholders, including developers, investors, and policymakers, as it supports strategic decision-making based on market projections.
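A minimal sketch of this modelling step is shown below, using scikit-learn's RandomForestRegressor; the file name and feature columns are illustrative stand-ins for the study's 110-location dataset.

```python
# Sketch: Random Forest regression of property values on location features.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("mysuru_locations.csv")   # hypothetical 110-location table
features = ["dist_cbd_km", "dist_railway_km", "dist_busstand_km",
            "dist_school_km", "dist_hospital_km"]
X, y = df[features], df["value_2024"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestRegressor(n_estimators=500, random_state=42).fit(X_tr, y_tr)
print("held-out R²:", r2_score(y_te, rf.predict(X_te)))
```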
The expected significant rise in property values indicates that Mysore is poised for considerable growth, driven by strategic developments and improved infrastructure. By understanding these trends, stakeholders can make informed decisions to capitalize on Mysore’s ongoing urban expansion, ensuring that investments and development strategies align with the city's projected economic vitality and growth potential. Our analysis highlights that proximity to the central business district, bus stops, and railway stations are key determinants of property values, greatly influencing market prices. We project a significant increase in property values, estimating a 118% rise by 2034.
To visualize these future values, we employ Voronoi polygons, which offer a clear spatial representation of the predicted property value distribution. This approach provides stakeholders, including developers, investors, and policymakers, with valuable insights into future market trends. By understanding the impact of these location factors, they can make informed decisions regarding investments and development strategies. The anticipated rise in property values underscores the ongoing urban development and economic growth in Mysore, highlighting its potential as a thriving urban center.
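The spatial step can be sketched with shapely's voronoi_diagram and geopandas, as below; the input point layer of predicted 2034 values is a hypothetical placeholder.

```python
# Sketch: Voronoi cells around valuation points for mapping predicted values.
import geopandas as gpd
from shapely.geometry import MultiPoint
from shapely.ops import voronoi_diagram

pts = gpd.read_file("predicted_values_2034.geojson")     # hypothetical layer
cells = voronoi_diagram(MultiPoint(list(pts.geometry)))

# Attach each point's predicted value to the Voronoi cell that contains it
polys = gpd.GeoDataFrame(geometry=list(cells.geoms), crs=pts.crs)
surface = gpd.sjoin(polys, pts, how="left", predicate="contains")
surface.to_file("value_surface_2034.gpkg", driver="GPKG")
```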
In summary, this study provides an in-depth analysis of the relationship between urban growth and property values in Mysore. By employing advanced regression models and detailed location-based data, we have developed a robust forecast for property values in 2034. Our findings indicate a significant projected increase in property values, highlighting Mysore's continuous development and potential for future growth. These insights are essential for the real estate market, offering valuable guidance for future investments and development strategies in Mysore. The study emphasizes the impact of key factors such as proximity to the central business district, bus stops, and railway stations on property values. By understanding these dynamics, stakeholders, including developers, investors, and policymakers, can make informed decisions to navigate the evolving real estate landscape. The anticipated rise in property values underscores Mysore’s economic vitality and its promise as a thriving urban center, driven by strategic infrastructure development.
Keywords: Mysore, Urban growth, Property values, Regression model, Infrastructure, Stakeholders
“New Way Using H3 to Manage GIS Data”
Tanaporn Songprayad, Siriwimon Saotongthong, Siriya Saenkhom-or;
General Track
Managing Geographic Information Systems (GIS) data with H3 (H3Geo) is an efficient and modern method for handling and analyzing geographic data. H3 uses a hierarchical hexagonal grid system in which data can be indexed at 16 resolution levels (0 to 15), which helps in partitioning and storing data effectively.
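The basic workflow can be sketched with the h3-py bindings (v4 API shown; v3 uses different names such as geo_to_h3):

```python
# Sketch: indexing a point into H3 and using the hierarchy for aggregation.
import h3

lat, lng = 13.7563, 100.5018            # Bangkok, for illustration
cell = h3.latlng_to_cell(lat, lng, 9)   # resolution 9 ≈ neighborhood scale
parent = h3.cell_to_parent(cell, 5)     # coarser cell for roll-up statistics
ring = h3.grid_disk(cell, 1)            # the cell plus its six neighbors

print(cell, parent, len(ring))
```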
“On the performance of distributed rendering system for 3DWebGIS application on ultra-high-resolution display”
Tomohiro KAWANABE;
Academic Track (Oral)
Introduction
With the spread of IoT and the increasing resolution of observation sensors, the total amount of geospatial data is growing exponentially. On the other hand, the resolution of the display devices used to analyze and visualize these data is reaching its limit due to physical constraints: the maximum resolution of commercially available displays is 8K, and 4K or 5K is considered the practical upper limit for desktop use.
Using the OS's multi-display function or a tiled display driver provided by the GPU manufacturer, it is possible to create a display environment with a larger area and higher resolution. However, this middleware is currently limited to a maximum resolution of 16K [1], which is therefore the ceiling achievable on a single PC.
Even with such an ultra-high-resolution display environment, a WebGIS application can only render data that fits within a single web browser's heap memory limit. For example, the 3DWebGIS viewer provided by the Tokyo Digital Twin Project [2] cannot render the 3DTiles [3] building data for all 23 wards of Tokyo at once (textured building data is used for areas where textures are provided).
In this paper, we introduce ChOWDER, a web-based tiled display driver that enables distributed rendering of 3DWebGIS content across multiple web browsers, as a solution to the above problems and report the results of memory load balancing experiments using ChOWDER for distributed rendering.
Proposal of a distributed rendering method for 3DWebGIS
One possible solution to the above problems is to distribute the display of one WebGIS content across multiple PCs (multiple web browsers). This makes it possible to display a WebGIS at a resolution that exceeds the upper limit of a single PC (web browser) and distributes the memory load required to display the content across each PC (web browser).
The scalable display system ChOWDER[4][5], jointly developed by RIKEN Center for Computational Science and Kyushu University, is an open-source tiled display driver that can create an ultra-high-resolution pixel space by arranging multiple displays that display a web browser in full-screen mode in tiles. It also supports distributed rendering of 3DWebGIS.
This function uses iTowns[6], an open-source 3DWebGIS, as middleware. iTowns uses Three.js as a WebGL rendering library, and Three.js has an API that can offset the view frustum[7].
To split and display 3D content across multiple display devices, the view frustum must be divided appropriately. ChOWDER uses the Three.js view frustum offset API to split a single iTowns content into multiple view frustums, enabling multiple web browsers to render portions of the same 3DWebGIS content [8].
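The bookkeeping behind this split can be sketched as follows; the function computes, for each tile of an R × C display grid, the arguments that Three.js' PerspectiveCamera.setViewOffset(fullWidth, fullHeight, x, y, width, height) expects. This is an illustrative sketch, not ChOWDER's actual code.

```python
# Sketch: per-browser view offsets for frustum splitting on a tiled display.
def view_offsets(full_w, full_h, rows, cols):
    tile_w, tile_h = full_w // cols, full_h // rows
    for r in range(rows):
        for c in range(cols):
            # setViewOffset arguments for the browser at grid position (r, c)
            yield (full_w, full_h, c * tile_w, r * tile_h, tile_w, tile_h)

# Four 4K displays in a 2 x 2 arrangement form a 7680 x 4320 pixel space
for params in view_offsets(7680, 4320, 2, 2):
    print(params)
```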
However, at the time of our previous report [8], iTowns loaded all 3DTiles data on a load command without testing whether it lay inside or outside the view frustum, so distributed rendering did not improve memory efficiency. The 3DTiles loading process has since been improved in iTowns release 2.42.0. In this paper, we measure the heap memory consumed by each browser when iTowns content is distributed and rendered with ChOWDER across multiple web browsers, and confirm the memory load distribution achieved by this method.
Experimental procedures and results
The experimental data used was the textured building data for Chiyoda, Minato, and Chuo wards in Tokyo, from the 3DTiles data distributed by the PLATEAU project [9] of the Ministry of Land, Infrastructure, Transport and Tourism of Japan.
The experiment first displayed the 3DTiles building data for the above three wards in full screen on a single 4K resolution display using iTowns on ChOWDER. The heap memory size of the web browser at this time was 268MB.
Next, the same content was displayed on a ChOWDER distributed display consisting of four 4K displays arranged in a 2 × 2 grid, each running a full-screen web browser. The heap memory sizes of the web browsers were 133MB, 188MB, 68.3MB, and 37.7MB.
Finally, we conducted an experiment using nine 4K displays arranged in three rows and three columns. The heap memory sizes of each web browser were 66.8MB, 122MB, 140MB, 84.3MB, 87.2MB, 56.9MB, 41.2MB, 38.4MB, and 33.6MB.
From these experimental results, it can be said that distributed rendering of 3DWebGIS using ChOWDER achieves memory load balancing.
During distributed rendering, the heap memory size differs between browsers because the amount of 3DTiles data in each browser's drawing area differs. Also, the total heap memory across all browsers is larger than when rendering in a single browser, because iTowns loads 3DTiles data covering a slightly wider area than its view frustum, so data in overlapping areas is loaded by multiple browsers.
Future work and conclusion
In this experiment, we measured the web browser's heap memory, but did not measure GPU memory consumption. However, because 3DWebGIS uses WebGL for rendering, we believe that a more precise evaluation can be made by measuring GPU memory consumption as well.
In addition, since distributing rendering across more web browsers is expected to further distribute memory load, we plan to conduct experiments by increasing the number of distributed displays.
In this paper, we have shown the limitations of current 3DWebGIS as the amount of data to be displayed grows, and proposed distributed rendering as a means of solving this problem. To realize this method, we introduced the view frustum offset API of Three.js, iTowns as a 3DWebGIS to which it can be applied, and ChOWDER, a web-based tiled display driver that incorporates them. Furthermore, we presented experimental results showing that distributed rendering with these tools achieves memory load distribution, demonstrating that this method is one solution to the growth of data displayed in 3DWebGIS.
References
[1] Limitations. About NVIDIA Mosaic. https://www.nvidia.com/content/Control-Panel-Help/vLatest/en-us/mergedProjects/nvwks/SLI_Mosaic_Mode.htm Accessed July 29, 2024.
[2] Tokyo Digital Twin Project. https://info.tokyo-digitaltwin.metro.tokyo.lg.jp/ Accessed July 29, 2024.
[3] 3DTiles. The open specification for 3D data. https://cesium.com/why-cesium/3d-tiles/ Accessed July 29, 2024.
[4] Kawanabe, T., Nonaka, J., Hatta, K., & Ono, K. (2018, September). ChOWDER: an adaptive tiled display wall driver for dynamic remote collaboration. In International Conference on Cooperative Design, Visualization and Engineering (pp. 11-15). Cham: Springer International Publishing.
[5] ChOWDER GitHub repository. https://github.com/SIPupstreamDesign/ChOWDER Accessed July 29, 2024.
[6] iTowns (in French). https://www.itowns-project.org/ Accessed July 29, 2024.
[7] three.js API Reference. https://threejs.org/docs/#api/en/cameras/PerspectiveCamera.setViewOffset Accessed July 29, 2024.
[8] Kawanabe, T., Hatta, K., & Ono, K. (2020, September). ChOWDER: A New Approach for Viewing 3D Web GIS on Ultra-High-Resolution Scalable Display. In 2020 IEEE International Conference on Cluster Computing (CLUSTER) (pp. 412-413). IEEE.
[9] Project PLATEAU portal site (in Japanese). https://www.geospatial.jp/ckan/dataset/plateau Accessed July 29, 2024.
“Open or Perish”
Cannata Massimiliano;
Keynote Talk
Research assessment is traditionally based on criteria such as the number of peer-reviewed publications, impact factor, and the number or amount of grants funded. Unfortunately, this approach has been shown to deeply influence the way research is conducted, favoring quantity over quality. To maximize the impact of research as a practical means of addressing societal challenges, a new approach, named Open Science, has been endorsed worldwide by major funding agencies over the last decade as the new research pathway. Quality and impact, collaboration and sharing, diversity and equity, transparency and efficiency have become the new paradigms to be pursued.
To foster the adoption of Open Science, funding agencies are acting on two fronts: on one hand, by influencing policies and requiring the adoption of open science practices as a condition for funding access (Open Access, Open Data, and Citizen Science); on the other, by focusing on incentives and exploring new methods for evaluating scientific results.
It is clear that in the near future the current “publish or perish” aphorism will shift toward “open or perish” as a description of what it takes to succeed in an academic career. But how should a modern researcher act to comply with this new paradigm? It is essential to understand the best practices that guarantee recognition of one's achievements by connecting the researcher, the publications, the software, and the data. This talk introduces these best practices with the aim of sustaining Open Science adoption, with particular reference to Open Software. Finally, possible new approaches envisioned for the evaluation of project proposals, career advancement, and institutional assessment are presented and discussed.
“Open spatial data in Thailand Higher Education Context - Classroom to Daily Life”
Chomchanok Arunplod;
General Track
Open spatial data plays a transformative role in the landscape of higher education in Thailand, bridging the gap between theoretical learning and practical application in daily life. This keynote address will explore how open spatial data is being integrated into the higher education curriculum, emphasizing its significance in enhancing students' understanding of geography, urban planning, environmental management, and related disciplines.
The session will illustrate how open spatial data increasingly influences daily life in Thailand through community-driven mapping projects, public health initiatives, and sustainable development planning. It will also address the challenges and opportunities in the widespread adoption of open spatial data within higher education, including issues of data quality and accessibility and the need for ongoing support and collaboration between academic institutions, government agencies, and the private sector. Ultimately, this session aims to inspire educators, students, and professionals to harness the power of open spatial data, transforming education and society at large in Thailand.
“Optimizing Photovoltaic Energy Potential Analysis through Economic Modeling and Open Source GIS Data Integration”
Changyeol Yun;
Poster Presentations
We define the terminology and calculation methods for the potential volume of photovoltaic (PV) energy across South Korea and derive the calculation and mapping results using various open GIS data and software. To estimate the theoretical potential of PV energy in South Korea, we divided the entire country into 100m x 100m grids and performed calculations for each grid. The solar irradiance for each grid was determined using GK-2A (GEO-KOMPSAT-2A) satellite imagery. To assess the feasibility of PV installation, spatial data from various GIS layers were applied to identify suitable areas. We then calculated the possible capacities and annual electricity production by applying the capacity factor of PV systems for each grid, resulting in the technical potential. We evaluated the economic viability by incorporating sociocultural regulations and Renewable Portfolio Standard (RPS) subsidy policies. The Levelized Cost of Energy (LCOE) was calculated for each grid and compared with the combined value of the System Marginal Price (SMP) and Renewable Energy Certificate (REC) to identify economically feasible areas, which were classified as market potential. The analysis utilized over 40 GIS layers, primarily sourced from national open data. Evaluating data suitability and extracting key parameters were the most challenging aspects of this process. This comprehensive approach, which integrates current governmental and municipal ordinances, technical performance indicators, and land-use factors, provides essential metrics for establishing future energy plans in South Korea.
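The market-potential test can be sketched as below; all prices, costs, and rates are illustrative placeholders, not the study's parameters.

```python
# Sketch: per-grid LCOE compared against SMP + REC to flag market potential.
def lcoe(capex, opex_per_year, energy_kwh_per_year, years=20, rate=0.045):
    """Levelized cost of energy (currency per kWh) over the project lifetime."""
    disc_cost = capex + sum(opex_per_year / (1 + rate) ** t
                            for t in range(1, years + 1))
    disc_energy = sum(energy_kwh_per_year / (1 + rate) ** t
                      for t in range(1, years + 1))
    return disc_cost / disc_energy

smp, rec = 130.0, 70.0     # illustrative market prices in KRW/kWh
grid_lcoe = lcoe(capex=1.3e6, opex_per_year=2.0e4,
                 energy_kwh_per_year=1314.0)   # 1 kW at ~15% capacity factor
print(round(grid_lcoe, 1), "market potential:", grid_lcoe <= smp + rec)
```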
“POWERING THE FUTURE GRID: INTELLIGENT TRANSMISSION MONITORING WITH SAR SATELLITE IMAGERY AND LiDAR”
Kamolratn Chureesampant;
Keynote Talk
This talk presents how satellite imagery and LiDAR enhance the efficiency of the Electricity Generating Authority of Thailand (EGAT)'s transmission system. With over 470 transmission lines spanning 25,000 km, expansion plans aim to provide nationwide coverage. LiDAR-based aerial mapping creates an accurate initial database, while CCTV systems enable real-time surveillance. Satellite imagery, particularly SAR imagery combined with InSAR and machine learning, ensures precise monitoring of encroachments. An integrated GIS application supports right-of-way management, contributing to safe and efficient power distribution.
“Predictive Analysis of LULC Dynamics for Area Under Submergence and its Environmental Impacts for the Mekedatu Reservoir Project”
CHANDAN M C, Pooja K, Pratham Goudageri, Vickey Rajendra Hegade, Prithvi Raj Gowda S;
General Track
Reservoirs play a crucial role in global water resource management, hydroelectric power generation, and flood control. However, their construction often entails significant ecological and socio-economic impacts, necessitating thorough environmental assessments. The Mekedatu Reservoir Project, situated on the Cauvery River in the Ramanagar district of Karnataka, India, holds paramount significance. Aimed at supplying the Bengaluru Metropolitan Region and its surroundings with drinking water, the project also endeavors to generate 400 MW of renewable energy annually. Despite its benefits, the project comes with ecological costs, as approximately 5252.40 hectares of revenue, forest, and wildlife land will be submerged. This necessitates a detailed evaluation of its potential environmental consequences.
This study identifies a knowledge gap in the existing literature regarding the ecological implications of the Mekedatu Reservoir Project. It seeks to fill this void by forecasting land use and land cover (LULC) changes for the years 2000, 2010, and 2020 using the Random Forest method, and assessing the submergence area for different levels of the proposed reservoir. Catchment delineation is performed using the Soil and Water Assessment Tool (SWAT). Additionally, the Cellular Automaton-Markov Chain technique is employed to predict land use and land cover changes for the year 2030. Integrating these methodologies, the research provides a holistic understanding of the project's environmental footprint.
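The Markov step of CA-Markov can be sketched as a class-share projection; the transition matrix below is illustrative, not the study's calibrated values.

```python
# Sketch: projecting LULC class shares one decade forward with a Markov chain.
import numpy as np

classes = ["forest", "barren", "water", "built-up"]
shares_2020 = np.array([0.6071, 0.2956, 0.0662, 0.0311])  # illustrative shares

# P[i, j]: probability a cell in class i transitions to class j in one decade
P = np.array([
    [0.95, 0.04, 0.00, 0.01],
    [0.02, 0.95, 0.00, 0.03],
    [0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])
shares_2030 = shares_2020 @ P
print(dict(zip(classes, shares_2030.round(4))))
```

In the full CA-Markov procedure, a cellular automaton then allocates the projected shares spatially using suitability and neighborhood rules.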
The land use and land cover analysis revealed significant shifts from 2000 to 2020, with forest cover decreasing from 71.54% to 60.71% and barren land increasing from 19.55% to 29.56%. The projected land use and land cover for 2030 shows further forest reduction to 58.28% and barren land increasing to 31.11%. These changes highlight a trend towards deforestation and land degradation, posing severe ecological threats. The submergence area at the proposed reservoir Full Reservoir Level is estimated to be 5252.4 hectares, distributed as 6.62% water, 19.55% barren land, 71.54% forest area, and 2.29% built-up area for the year 2000. The inundation of these areas will lead to significant biodiversity loss, affecting numerous plant and animal species.
In line with the Sustainable Development Goals, which advocate sustainable water management, this study emphasizes the importance of informed decision-making and sustainable development practices. The findings underscore the need to designate new ecologically sensitive areas and to establish wildlife corridors, conservation zones, and afforestation programs to mitigate the adverse impacts. Continuous environmental monitoring and research are essential to track biodiversity impacts and adjust conservation strategies accordingly.
Policy implications of this study suggest that due process of law, linked with the principle of natural justice, must be adhered to in ensuring environmental balance. Recommendations from the World Commission on Dams (WCD) highlight the need to reduce the negative impacts of dams by increasing the efficiency of existing assets and minimizing ecosystem impacts. Policymakers must understand the long-term ecological consequences of such mega projects and explore alternatives. Sustainable development models must be based on equality and natural justice.
Future research should focus on the socio-economic impacts of the Mekedatu Reservoir Project, particularly the displacement of local communities. This includes conducting detailed socio-economic assessments, inclusive resettlement planning, livelihood restoration programs, and initiatives to preserve cultural heritage. Continuous monitoring and long-term studies are crucial to ensure the well-being of resettled populations and to balance development with environmental and social sustainability.
In summary, this study advances the understanding of environmental impact assessment in reservoir projects, providing valuable insights for stakeholders and policymakers. It highlights the critical need for sustainable development practices that ensure equitable access to water resources while preserving environmental integrity.
Keywords: Environmental Impact Assessment, Reservoir Project, Machine Learning, Random Forest, Markov Chain, Cellular Automaton, Land Use Changes, Submergence Area, Sustainable Development Goals, Water Resource Management.
“PWAGIS QGIS Plugins Development: Lessons Learned Moving to Free and Open Source Solutions for Geospatial”
Prasong Patheepphoemphong, Pongsakorn Udombua;
General Track
This talk will explore the transition from a proprietary system to the PWAGIS QGIS plugins, focusing on the limitations experienced with the previous proprietary solutions, the challenges encountered during development, and the new functionalities introduced in the plugins. Attendees will gain insights into the practical aspects of moving from a legacy system to an open-source solution, highlighting both the obstacles and the opportunities this transition presents.
“Real-Time Monitoring and Positioning of Agricultural Tractors Using a Low-Cost GNSS and IoT Device”
Thanwamas Phasinam;
Academic Track (Oral)
This research aims to develop a low-cost GNSS receiver device for positioning agricultural tractors, incorporating Differential GPS (DGPS) technology for enhanced accuracy using free and open source software. Integrated with IoT technology, the device was tested to receive GNSS data and other relevant information, including geographic coordinates (latitude and longitude), tractor speed, tractor direction, date, time, and the number of satellites receiving signals. The DGPS setup involves using one receiver as a base station and another on the tractor, where the base station provides correction data to improve positioning accuracy. The data collected by the receiver is transmitted to a signal processing device for mapping the coordinates, creating a route of the tractor's movement that is displayed on a real-time Web Map Application. This process includes error correction to ensure high accuracy. The IoT device was installed on the left rear wheel of the agricultural tractor. Test results show that the data from the developed device has an average accuracy of 22 centimeters, which is acceptable and sufficient for agricultural tractor positioning applications. Furthermore, this system enables real-time monitoring of the tractor's operations.
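The receiver-side decoding can be sketched with pyserial and pynmea2; the serial port name is an assumption, and this is a generic NMEA sketch rather than the device's firmware.

```python
# Sketch: reading position, satellite count, and fix quality from a GNSS
# receiver's NMEA GGA sentences (gps_qual == 2 indicates a DGPS fix).
import serial
import pynmea2

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:  # hypothetical port
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if line.startswith(("$GPGGA", "$GNGGA")):
            msg = pynmea2.parse(line)
            print(msg.timestamp, msg.latitude, msg.longitude,
                  msg.num_sats, msg.gps_qual)
```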
“Regional Land Cover Monitoring System: A Modular Land Cover System for Environmental Monitoring and Sustainability Applications”
Akkarapon Chaiyana;
Poster Presentations
The Mekong region, comprising Cambodia, Lao PDR, Myanmar, Thailand, and Vietnam, is essential for agriculture and aquaculture, producing rice, cassava, maize, sugarcane, other crops, and fish, thereby contributing to global food security. Additionally, the region acts as a significant carbon sink, absorbing greenhouse gases to mitigate climate change, regulate surface temperatures, and sustain ecosystems and biodiversity. However, factors such as rapid urbanization, severe floods and droughts, economic trade-offs, and rising sea levels are altering land use and land cover (LULC) patterns.
Mitigation and adaptation strategies are crucial for informed decision-making and policy development. A sustainable approach begins with the development of LULC maps, particularly when supported by more than 20 years of observations for visual interpretation. Each country in the region has its own policies addressing priority issues. To support the Mekong region, the Regional Land Cover Monitoring System (RLCMS) provides coverage from 2000 to 2023. Modern technologies such as remote sensing, artificial intelligence, cloud computing, machine learning, deep learning, and Google Earth Engine (GEE) facilitate analysis from the pixel level to the global level.
This study aims to map long-term LULC changes in the Mekong region using Landsat imagery from 2000 to 2023. Due to the region's tropical monsoon climate and prevalent cloud cover, the LandTrendr Optimization Algorithm (LTOP) was employed to minimize errors through time series interpolation, filling gaps caused by cloud obscuration. Nineteen LULC types were defined based on end-user objectives and land cover typologies from various organizations, including aquaculture, barren, cropland, crop plantation, deciduous forest, evergreen forest, flooded forest, forest plantation, grassland, mangrove, other forest, palm, rice, rubber, shrub, urban, water, wetland, and snow.
The reference data included a combination of field observations and high-resolution imagery from sources such as PlanetScope and time series data, amounting to over 300,000 data points. This reference data was collated from various collaborators, including national partners and organizations such as the Land Development Department (LDD) in Thailand, the Global Forest Resources Assessment (GFRA), the Food and Agriculture Organization (FAO), the Forest Department of Myanmar, the Space Technology Institute in Vietnam, the Wildlife Conservation Society (WCS) in Cambodia, and the Forest Inventory Planning Division of Laos PDR. These data were photo-interpreted and labeled using very high-resolution (VHR) imagery in the Collect Earth desktop application.
Machine learning (ML) and deep learning (DL) techniques were used to derive land-use probabilities and generate primitive maps. The study employed a random forest (RF) model to map evergreen, deciduous, and flooded forests, based on criteria of large area and similar texture and color, while the remaining primitive maps were refined using the EfficientNetV2 model. A hierarchical decision-tree (DT) rule set was then applied in the assemblage step using Monte Carlo simulation, incorporating additional criteria from the Land Cover Classification System (LCCS) by adding Tree Canopy Cover (TCC) and Tree Canopy Height (TCH) from the Global Land Analysis and Discovery (GLAD) laboratory to reduce forest mapping errors. A logical-transition check was applied to each pixel as a post-processing step, ensuring robust RLCMS results. Validation of the RLCMS map yielded an overall accuracy of 0.72 and a kappa statistic of 0.70.
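As an indication of how a primitive layer can be produced in Earth Engine's Python API, the sketch below trains a random forest on labeled reference points; asset paths, band names, and the label property are placeholders.

```python
# Sketch: training a random forest primitive classifier in Google Earth Engine.
import ee
ee.Initialize()

composite = ee.Image("users/example/landsat_composite_2020")   # hypothetical
samples = ee.FeatureCollection("users/example/reference_points")

bands = ["B2", "B3", "B4", "B5", "B6", "B7"]
training = composite.select(bands).sampleRegions(
    collection=samples, properties=["landcover"], scale=30)

rf = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty="landcover", inputProperties=bands)
primitive = composite.select(bands).classify(rf)
```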
In conclusion, the RLCMS developed through this study provides a reliable tool for monitoring long-term land use and land cover changes in the Mekong region, thereby supporting informed decision-making and policy development to address environmental and socio-economic challenges. The integration of advanced technologies such as remote sensing, machine learning, and cloud computing ensures high precision and efficiency in data analysis. Additionally, this system is universally applicable, as it utilizes publicly accessible global data (Landsat) and features an adaptable architecture that allows for customizable assembly logic to map various land cover typologies according to specific landscape monitoring objectives worldwide.
“Research on Sustainable Agricultural Management Using Agricultural Water Circulation Measurement Data”
JiHyeon Lee, SuhyeonKim, Mijin Lee;
Poster Presentations
Agricultural water accounts for about 42% of water resource use in South Korea, yet quantitative data remain insufficient despite this high usage. When the supply amount is calculated from the reservoir level, uncertainty is high because the actual effective storage differs from the designed value.
The purpose of this study is to develop technology that enables more efficient and sustainable agricultural management by utilizing measured data such as rainfall and reservoir discharge to present optimal water management plans through various scenario analyses based on the data. By applying digital twin technology for 3D visualization, the circulation process of agricultural water can be analyzed more effectively.
“Resilient and Regenerative City Development in Response to the Global Climate Crisis”
Keynote Talk
The global climate crisis poses significant challenges, particularly for urban areas. Nebula, a specialist in resilient and regenerative city development, advocates a paradigm shift towards creating "happy future cities." Unlike traditional approaches that focus on rebuilding from scratch, Nebula emphasizes fostering an “environmentally enhancing, restorative relationship” with nature. This talk explores how cities can play a crucial role in addressing environmental challenges through strategic urban planning.
We pioneer a transformative approach to urban development, creating future cities that are not just smart, but regenerative, resilient, and centered on the well-being of both people and the planet.
“Rust for Geospatial Data Processing: A Case Study with CityGML Converter for PLATEAU, Japan's Open Digital Twin Models”
Sorami Hisamoto;
General Track
Rust is an open-source programming language renowned for its performance, reliability, and productivity. This case study focuses on our experience developing an official CityGML converter for PLATEAU, a project led by Japan's Ministry of Land, Infrastructure, Transport, and Tourism to model and utilize digital twin open data. The tool is publicly available as open-source software: https://github.com/Project-PLATEAU/PLATEAU-GIS-Converter
With this tool, you can convert the original CityGML data to arbitrary formats such as 3D Tiles, Mapbox Vector Tiles (MVT), GeoPackage, GeoJSON, KML, CZML, and even Shapefile.
Our decision to use Rust was driven by its efficiency and robust features, making it an ideal choice for handling complex low-level data processing tasks. Additionally, we adopted Tauri, a Rust-based open-source toolkit that enables the creation of cross-platform desktop applications using web frontend technologies.
In this talk, we will explore the reasoning behind our choice of Rust, the challenges we encountered during the development process, and the benefits we gained by leveraging this technology stack.
“Scrollytelling the 53 Stations of Tōkaidō: An Interactive Journey Through Japan’s Historic Route”
Sorami Hisamoto;
General Track
The 53 Stations of the Tōkaidō (東海道五十三次, Tōkaidō Gojūsan-tsugi) are iconic rest areas along the historic coastal road that once connected modern-day Tokyo to Kyoto. This route is renowned for its Ukiyo-e (浮世絵) prints, a distinctive form of Japanese painting that flourished during the Edo period from the 17th to the 19th centuries.
To bring this historic route to life, we have developed a web-based “scrollytelling” experience (a combination of “scrolling” and “storytelling”) that invites users to interactively traverse this historic route via a dynamic map. You can explore it yourself at https://sorami.dev/tokaido-scrollytelling/ !
This project harnesses Mapbox GL JS and a variety of open-source technologies, including Turf.js, Svelte, UnoCSS, and Scrollama. These tools, combined with open data for the route, stations, and accompanying artworks, enable us to offer a rich and engaging experience. All data and code are available at https://github.com/sorami/tokaido-scrollytelling.
In this talk, we will explore the potential and challenges of scrollytelling with maps—a contemporary method of content presentation enabled by modern digital tools. We will discuss the strengths and limitations of this storytelling style, examine various technologies available for creating such experiences, and detail the development process behind our project.
“Sen2Extract: A Free Online Tool to Access Environmental Index Time Series from Sentinel 2”
George Ge;
General Track
Since the European Space Agency launched the Copernicus Sentinel satellites in 2015, there has been rising interest in adopting free, high-quality geospatial data to inform scientific research. Of particular interest are environmental indices derived from Sentinel 2 (e.g. NDVI, MNDWI), which can be used for analysis and modelling in topics spanning health, agriculture, and biodiversity.
However, while Sentinel 2 imagery is freely available, the process of acquiring, deriving, and extracting meaningful data is not straightforward. Sen2Chain, a Python tool that automates the acquisition of Sentinel 2 images and the calculation of these indices, and Sen2Extract, an R tool for interacting with Sen2Chain from the web, were created to address this problem.
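For reference, the two indices named above reduce to simple band arithmetic on Sentinel 2 reflectance (B8 = NIR, B4 = red, B3 = green, B11 = SWIR1); the numpy sketch below is generic, not Sen2Chain's internal code.

```python
# Sketch: NDVI and MNDWI from Sentinel 2 surface reflectance bands.
import numpy as np

def ndvi(b8, b4):
    return (b8 - b4) / (b8 + b4 + 1e-9)     # epsilon avoids divide-by-zero

def mndwi(b3, b11):
    return (b3 - b11) / (b3 + b11 + 1e-9)

b3, b4, b8, b11 = (np.array([0.08]), np.array([0.05]),
                   np.array([0.40]), np.array([0.02]))
print(ndvi(b8, b4), mndwi(b3, b11))   # high NDVI: vegetation; high MNDWI: water
```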
This talk explores how we built these tools and applied them to various projects around the world, and how you can potentially adapt them for your own projects.
“Server-Side and Client-Side Rule-Based Topology with Python at the Provincial Waterworks Authority”
PEERANAT PRASONGSUK, NATPAKAL MANEERAT;
General Track
Geospatial Topology, a fundamental concept in geographic information systems, focuses on the analysis and characterization of spatial relationships between geographic entities without alteration of their intrinsic properties. This presentation examines the implementation of Geospatial Topology rules utilizing Python programming language, facilitating execution in both client-side and server-side environments.
We present an empirical case study from the PWA GIS Department of Thailand, which employs a comprehensive set of over 30 topology rule-based validations to ensure data integrity and consistency across national cartographic operations. The research investigates two primary platforms: QGIS Desktop and Web Applications, both serving as client-side interfaces capable of executing topology scripts and generating inconsistency reports for subsequent rectification.
A critical distinction between QGIS Desktop and Web Application lies in their respective execution paradigms: QGIS Desktop operates within a local environment, while the Web Application leverages server-side processing capabilities.
This presentation will elucidate methodologies for developing custom topology rules and demonstrate techniques for accessing single-algorithm Python scripts across both Desktop and Web Application environments. By bridging the technological gap between these platforms, we aim to enhance the efficacy of geospatial data quality control processes and optimize GIS workflows.
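A single-algorithm rule in this spirit, "polygons must not overlap", can be sketched with geopandas so that the same script runs client-side or server-side; the dataset name is illustrative.

```python
# Sketch: an overlap-detection topology rule using a spatial index.
import geopandas as gpd

def find_overlaps(gdf):
    """Return index pairs of features whose interiors overlap."""
    errors = []
    for i, geom in enumerate(gdf.geometry):
        for j in gdf.sindex.query(geom, predicate="overlaps"):
            if i < int(j):
                errors.append((i, int(j)))
    return errors

zones = gpd.read_file("pwa_service_zones.gpkg")   # hypothetical layer
print(find_overlaps(zones))
```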
The findings of this research contribute to the broader understanding of cross-platform geospatial topology implementation and offer practical insights for GIS professionals and developers seeking to improve data validation processes in diverse computational environments.
“SERVIR Southeast Asia Air Quality Explorer: A Tool Harnessing Satellite and Modeling Data for Pollutant Monitoring in the Region”
Thannarot Kunlamai;
General Track
Air pollution in Southeast Asia has reached critical levels, significantly impacting human health across the region. Nearly the entire population lives in areas where air pollution exceeds the World Health Organization’s (WHO) safe air standards. This severe pollution is primarily due to rapid industrialization, urbanization, and deforestation, which have increased the amount of harmful pollutants in the air, particularly fine particulate matter known as PM2.5. Seasonal agricultural burning, a common practice in the region, also contributes significantly by releasing large quantities of smoke and particulate matter into the air. The rapid pace of urbanization has led to increased vehicle emissions and construction activities, further degrading air quality.
To address this issue, SERVIR Southeast Asia (SEA), a joint initiative of the U.S. Agency for International Development, the National Aeronautics and Space Administration (NASA), and the Asian Disaster Preparedness Center (ADPC) — its implementing partner, has developed the "SERVIR Southeast Asia Air Quality Explorer" to monitor air pollution and health impacts using satellite data and atmospheric modeling. The application uses advanced data visualization techniques to present complex datasets in an accessible manner. By harnessing the power of satellite data and predictive models, we hope that the SERVIR Southeast Asia Air Quality Explorer (SEA AQE) serves as a valuable resource for policymakers, researchers, and the general public, empowering them to make informed decisions to mitigate the adverse effects of air pollution.
The Air Quality Explorer features a user-friendly interface accessible on both desktop and mobile devices, allowing users to monitor real-time air pollution levels, including three-day forecasts of PM2.5 with a 5 km resolution and NO2 from Geostationary Environment Monitoring Spectrometer (GEMS). The application also features a fire hotspot map, helping users anticipate changes in air quality. It ranks cities over the Southeast Asia regions based on their PM2.5 levels and integrates PM2.5 data with a health index to translate the data into actionable health recommendations. Additionally, the tool includes six-hourly forecast wind data from NOAA and ground station data from Thailand’s Pollution Control Department (PCD), offering a comprehensive view of air quality dynamics across the region.
This project highlights the potential of combining satellite technology and forecast modeling with web-based platforms to improve environmental monitoring and decision-making in Southeast Asia. SERVIR SEA and collaborators will continue to enhance the tool with new data, such as higher-resolution PM2.5 forecasts, fire risk, and fire emission inventory products, enabling users to link and analyze these alongside air pollution indicators. Additionally, large language models (LLMs) will be applied to the tool, allowing users to input queries in natural language; this feature will translate user input into data retrieval commands, making the tool even more accessible and user-friendly. Furthermore, we will develop a “SERVIR SEA AQ API” service to provide air pollution satellite imagery and JSON-format data for integration into other platforms.
“SERVIR Southeast Asia Biophysical M&E Dashboard: A Tool to Support Landscape Monitoring”
MD KAMAL HOSEN;
General Track
Environmental degradation in Southeast Asia, particularly in Cambodia, is an alarming issue that poses significant threats to both ecosystems and human well-being. The region is experiencing rapid deforestation driven by agricultural expansion and urban development, leading to substantial loss of forest cover. This deforestation disrupts ecological balance and biological environments, contributing to habitat fragmentation, biodiversity loss, and alterations in local microclimates. Concurrently, changes in land use and land cover exacerbate these problems, further fragmenting habitats and impacting species distribution. The increasing risk of forest fires, fueled by climate variability, land use changes, and agricultural burning, adds another layer of concern, contributing to air pollution and posing risks to both natural and human systems.
In response to these growing challenges, a comprehensive monitoring tool is essential for systematically observing and analyzing environmental parameters. Such a tool would provide critical data necessary to address these issues, inform policy decisions, and support sustainable land management practices. Recognizing the need for a robust solution, SERVIR Southeast Asia (SEA)—a collaborative initiative of the U.S. Agency for International Development (USAID), the National Aeronautics and Space Administration (NASA), and the Asian Disaster Preparedness Center (ADPC)—has developed the “Biophysical Monitoring and Evaluation (M&E) Dashboard” for Cambodia. This open-access tool is designed to offer comprehensive, near-real-time insights into critical environmental parameters, leveraging advanced technologies to support environmental protection and sustainable management.
The Biophysical M&E Dashboard utilizes a range of cutting-edge technologies to analyze and visualize environmental data. It harnesses the power of Google Earth Engine (GEE), a cloud computing platform capable of processing vast amounts of satellite imagery. GEE is employed to analyze large-scale, open satellite data to map key indicators such as the Enhanced Vegetation Index (EVI), land use and land cover, forest cover, forest fire occurrences, and crop monitoring. The tool integrates these insights with data from GeoServer, an open-source geospatial data publisher, and PostgreSQL, a powerful open-source relational database system. Modern web technologies, including React with NextJS and the Python-based Django framework, are used to develop the user interface and ensure seamless functionality.
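One such indicator can be sketched in the Earth Engine Python API as a MODIS EVI time series averaged over a boundary; the boundary asset is a placeholder.

```python
# Sketch: mean MODIS EVI per image over an area of interest.
import ee
ee.Initialize()

aoi = ee.FeatureCollection("users/example/cambodia_boundary")   # hypothetical
evi = (ee.ImageCollection("MODIS/061/MOD13Q1")
         .filterDate("2023-01-01", "2024-01-01")
         .select("EVI"))

def mean_evi(img):
    stat = img.reduceRegion(ee.Reducer.mean(), aoi.geometry(), scale=250)
    return ee.Feature(None, {"date": img.date().format("YYYY-MM-dd"),
                             "evi": stat.get("EVI")})

series = ee.FeatureCollection(evi.map(mean_evi))
print(series.limit(3).getInfo())
```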
The M&E Dashboard is designed to integrate multiple data sources, providing a holistic view of landscape dynamics. It visualizes and analyzes critical aspects such as forest cover changes, land use transformations, vegetation health, and fire hotspots. By offering detailed analytical information at various levels—country, province, district, and designated protected areas—the tool enables users to gain a nuanced understanding of environmental trends. The dashboard’s current capabilities include monitoring forest gain and loss, assessing rice cropping fields, and tracking deforestation and fire hotspots. Future plans include expanding its functionality to support user-defined area levels and the incorporation of socio-economic and vulnerability indicators, further enhancing its adaptability and utility.
In addition to land use and land cover monitoring, the M&E Dashboard incorporates weather and climate information to support sustainable agriculture practices. This integration provides valuable insights into the impacts of climatic conditions on crop health and agricultural productivity, and also a drought assessment framework, aiding in the development of strategies to mitigate adverse effects and enhance resilience.
This paper presents a detailed overview of the architecture, design, and functionality of the Biophysical M&E Dashboard. It outlines how the tool addresses critical environmental challenges in Cambodia, including deforestation, habitat fragmentation, and fire risks. By offering a comprehensive suite of analytical features and visualizations, the dashboard supports informed decision-making and strategic planning for sustainable landscape management. Through its integration of advanced technologies and multi-source data, the Biophysical M&E Dashboard stands as a vital resource for protecting Cambodia’s natural resources and promoting ecological resilience in the face of ongoing environmental pressures.
“Social Media Data analysis in a Restaurant Context : A Case Study of TikTok”
Asamaporn Sitthi;
General Track
This study explores the integration of Natural Language Processing (NLP) and Geographic Information Systems (GIS) to analyze the spatial distribution and sentiment of restaurants based on TikTok data. Data was collected from TikTok using primary and secondary hashtags related to restaurant reviews in Bangkok. The resulting database enabled a detailed analysis of restaurant locations and customer sentiment, using Logistic Regression for sentiment analysis. The findings indicate that negative reviews were predicted with the highest accuracy (84%), followed by positive (78%) and neutral (76%) reviews. The spatial analysis identified a dense concentration of restaurants in the inner districts of Bangkok. This integration of NLP and GIS not only mapped the popularity of restaurants as mentioned on TikTok but also provided significant insights into consumer behavior and preferences. The study demonstrates the effectiveness of combining NLP and GIS for geospatial analysis, offering a powerful tool for understanding social media trends and their impact on local businesses. The results underscore the potential of leveraging social media data to inform urban planning and business strategies, particularly in the food and hospitality industries.
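The sentiment step can be sketched with scikit-learn as a TF-IDF plus multinomial logistic regression pipeline; the example texts below are placeholders for the hashtag-collected TikTok reviews.

```python
# Sketch: three-class sentiment classification with logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["อร่อยมาก ต้องกลับมาอีก",      # "delicious, will come back"
         "รอนานมาก บริการแย่",          # "long wait, bad service"
         "ราคากลางๆ ธรรมดา"]            # "average price, ordinary"
labels = ["positive", "negative", "neutral"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["รสชาติดี บรรยากาศเยี่ยม"]))   # expected: positive
```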
“Spatio-Temporal Drought Monitoring in the Chi River Basin from 2001–2020 Using MODIS Time Series and Google Earth Engine”
Jaturong Som-ard;
Academic Track (Oral)
Drought is a recurring issue in Southeast Asia caused by extreme climate events, posing ongoing challenges for food management, sustainable agricultural practices, and livelihoods, especially in frequently affected areas. Earth Observation (EO) data provide valuable information for long-term drought monitoring across wide regions. However, there remains a need to map and monitor spatial drought events over large regions and extended periods, particularly in the Chi River Basin. In this region, droughts have occurred with increasing frequency, leading to low water-holding capacity and adversely affecting agricultural production and productivity.
In this context, this study aimed: i) to identify spatial drought from 2001 to 2020 over the Chi River Basin, Thailand, using MODIS image time series via Google Earth Engine (GEE); ii) to analyse the correlation between the Temperature Vegetation Dryness Index (TVDI), the Standardized Precipitation Index (SPI), and the Streamflow Drought Index (SDI); and iii) to compare severe drought areas with land use maps provided by the Land Development Department (LDD).
In this study, we collected MODIS data using the MYD09Q1 (250m spatial resolution) and MOD11A1 products (1000m pixel size) across the h27v07 and h28v07 tiles. Image pre-processing was implemented, consisting of image resampling, data compositing, and image mosaicking through the GEE platform. Subsequently, the TVDI index was generated for both dry and wet seasons to map and monitor the spatial distribution of drought events over 21 years. The spatial drought based on TVDI and meteorological datasets (SPI and SDI) was determined to identify their relationship. Additionally, this study compared the spatiotemporal distribution of drought to land use groups.
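The TVDI itself is a simple per-pixel normalization of land surface temperature between the wet and dry edges of the LST–NDVI scatter, TVDI = (Ts − Ts_min) / (Ts_max − Ts_min), with the dry edge Ts_max = a + b·NDVI; the edge coefficients in the sketch below are illustrative.

```python
# Sketch: per-pixel TVDI from LST and NDVI with fitted dry/wet edges.
import numpy as np

def tvdi(lst, ndvi, a, b, ts_min):
    ts_max = a + b * ndvi               # dry edge as a linear function of NDVI
    return np.clip((lst - ts_min) / (ts_max - ts_min), 0.0, 1.0)

lst = np.array([310.0, 300.0])          # LST in kelvin (e.g. from MOD11A1)
ndvi = np.array([0.2, 0.6])             # e.g. from MYD09Q1 red/NIR bands
print(tvdi(lst, ndvi, a=320.0, b=-15.0, ts_min=295.0))
```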
Historical droughts were most frequent during the dry seasons of 2005 (82%), 2013 (80%), and 2004 (78%), and appeared in the wet seasons of 2019 (41%), 2017 (41%), and 2009 (38%). The TVDI drought map had a slightly low coefficient of determination (R²) for SPI and SDI, ranging from 0.12 to 0.22. However, these findings showed similar trends of drought across all study years, with drought events predominantly occurring in the central and northeast parts of the region. In comparison, the spatial drought map in 2021 showed severe droughts, mostly impacting cassava and rice fields during the dry season and urban areas during the wet season. Our proposed workflow is reliable and robust, providing spatial drought areas with confidence in the accuracy and validity of our results.
This study produced spatial drought maps using MODIS image time series datasets. The mapped results were well smoothed and effectively distributed drought areas across large regions. The highly severe spatial droughts in 2005 align with Thailand's extreme drought due to the El Niño event, demonstrating high severity compared to other years. This confirms that the TVDI index provides excellent and efficient map results for mapping the spatial distribution of droughts in cloudy regions and complex landscapes of the Chi River Basin. The proposed workflow can generate drought maps in cloudy regions and complex landscapes over large or national regions, particularly in zones like Thailand. Our findings can be used to manage future droughts and serve as a significant tool for drought mitigation planning and management, as well as for warning systems, providing an integrated model under climate change conditions.
Keywords: Drought; earth observation; Temperature Vegetation Dryness Index (TVDI); Land use; Google Earth Engine
“STAC: Driving Innovation in Geospatial Applications”
Siriya Saenkhom-or;
General Track
The SpatioTemporal Asset Catalog (STAC) revolutionizes geospatial applications by providing a standardized framework for cataloging spatiotemporal data. Developed in 2017 through a collaborative effort among various organizations, STAC streamlines the discovery and retrieval of geospatial assets, making it easier for users to access satellite imagery and other spatial data. This open-source specification, which aligns with FAIR principles—Findable, Accessible, Interoperable, and Reusable—promotes interoperability among various data providers and applications, fostering innovation in the geospatial community.
STAC's design allows for automated data retrieval through the STAC API, making it especially useful for applications in environmental monitoring, disaster management, and urban planning. Its JSON-based structure enhances user accessibility, allowing developers to quickly integrate geospatial data into their workflows. Furthermore, STAC's extensibility ensures it can adapt to a wide range of geospatial data types, from remote sensing to 3D point clouds.
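Automated discovery through a STAC API can be sketched with the pystac-client library; the endpoint shown is a public Earth Search catalog used here purely for illustration.

```python
# Sketch: searching a STAC API for recent, low-cloud Sentinel-2 scenes.
from pystac_client import Client

catalog = Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[100.3, 13.5, 100.9, 14.2],            # around Bangkok
    datetime="2024-01-01/2024-03-31",
    query={"eo:cloud_cover": {"lt": 20}},
)
for item in search.items():
    print(item.id, item.assets["visual"].href)
```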
The benefits of STAC go beyond theoretical applications. In Thailand, STAC is applied to the GISTDA Decision Support System for Disaster Management Platform. On this platform, STAC catalogs vector data related to flooding areas, thermal activities, and drought indices. As a result, the implemented application can efficiently browse and retrieve data from the STAC catalog, enhancing data retrieval speed and user experience.
As the geospatial landscape continues to evolve, STAC stands out as a remarkable tool for driving innovation, enabling seamless data sharing, and empowering users to harness the full potential of geospatial technologies in addressing complex global challenges.
“State of mago3DTiler, an Open Source Based OGC 3D Tiles Creator”
Sanghee Shin;
General Track
In this session, I will introduce mago3DTiler (https://github.com/Gaia3D/mago-3d-tiler), an open-source OGC 3D Tiles creator that has gained global popularity thanks to its robust features, high performance, and user-friendly interface. Initially unveiled at FOSS4G-Asia 2023 in Seoul, mago3DTiler supports over ten different 3D data formats, including 3DS, OBJ, FBX, glTF, Collada DAE, BIM (IFC), LAS, LAZ, and SHP. One of its standout features is on-the-fly Coordinate Reference System (CRS) conversion during the 3D Tiles creation process. Additionally, it allows users to convert 2D data with height attributes into extruded 3D Tiles.
During this session, I will also demonstrate how to create a digital twin using mago3DTiler in just a few minutes. This tool makes complex geospatial tasks more manageable, especially for users looking to integrate diverse data formats seamlessly into 3D projects.
“Thailand Dialogue on Open Data Governance, Privacy, and Legality”
Assist.Prof.Prapaporn Rojsiriruch;
Keynote Talk
My presentation will explore the landscape of open data in Thailand, focusing on laws, regulations, policies, and the fundamental right to access information. It will assess how open data initiatives can foster transparency, innovation, and public engagement, while addressing challenges and proposing solutions. Special emphasis will be placed on how open data can play a crucial role in reducing inequality by empowering citizens with greater access to information, enabling more equitable participation in decision-making processes, and driving inclusive social development.
“The Application of Google Earth Engine for PM2.5 Estimation to Analyze PM2.5 Distribution in Saraburi Province”
Pattara;
Academic Track (Oral)
This research aims to estimate PM2.5 concentrations from Aerosol Optical Depth (AOD) and meteorological data and to study the spatial distribution patterns of PM2.5 in Saraburi Province. PM2.5 levels are estimated from AOD combined with meteorological data using Multiple Linear Regression (MLR), and the estimates are then used to analyze distribution patterns. The study found the following monthly average PM2.5 ranges and high-value clusters (hot spots): in 2018, 0 to 74.1 μg/m³, with hot spots covering approximately 421.43 km² (12.04% of the provincial area); in 2019, 0 to 41.4 μg/m³ and 509.29 km² (14.55%); in 2020, 0 to 50.0 μg/m³ and 648.37 km² (18.53%); in 2021, 0 to 55.3 μg/m³ and 562.93 km² (16.09%); and in 2022, 0 to 57.3 μg/m³ and 615.97 km² (18%). Most of the high-value clusters were in the western part of the province, where prevalent agricultural activities contribute to higher PM2.5 levels. In contrast, low-value clusters (cold spots) were primarily found in the largely forested eastern part of the province.
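The estimation step can be sketched with scikit-learn; the table and column names below are illustrative stand-ins for the station-matched AOD and meteorological records.

```python
# Sketch: multiple linear regression of station PM2.5 on AOD and weather.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("saraburi_monthly.csv")        # hypothetical matched table
X = df[["aod_550nm", "temperature_c", "rel_humidity", "wind_speed_ms"]]
y = df["pm25_station_ugm3"]

mlr = LinearRegression().fit(X, y)
print(dict(zip(X.columns, mlr.coef_.round(3))), round(mlr.intercept_, 2))

df["pm25_estimated"] = mlr.predict(X)           # then mapped per grid cell
```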
“The Challenges of Reproducibility for Research Based on Geodata Web Services”
Massimiliano Cannata;
Academic Track (Oral)
Modern research applies the Open Science approach that fosters the production and sharing of Open Data according to the FAIR (Findable, Accessible, Interoperable, Reusable) principles. In the geospatial context, this is generally achieved through the setup of OGC Web services that implement open standards satisfying the FAIR requirements.
Nevertheless, the requirement of Findability is not fully satisfied by these services, since they do not use persistent identifiers and there is no guarantee that the same dataset used for a study can be immutably accessed at a later time. This hinders the replicability of research, particularly in recent years, when data-driven research and technological advances have encouraged frequent dataset updates.
Here, we review needs and practices, supported by some real case examples, on frequent data or metadata updates in geo-datasets of different data types. Additionally, we assess the currently available tools that support data versioning for databases, files, and log-structured tables.
Finally, we discuss challenges and opportunities to enable geospatial web services that are fully FAIR. Achieving this would provide, due to the massive use and increasing availability of geospatial data, a significant push toward open science compliance, ultimately impacting science transparency and credibility.
“The Current State of Collaboration between Digital Twin and OSM in Japan: A Case Study of Project PLATEAU”
Taichi Furuhashi;
General Track
In recent years, 3D city models have become crucial for urban planning and research. Japan's Project PLATEAU has led the development of open 3D city models and point cloud data, with over 100 cities releasing Digital Twin data in CityGML format by February 2023. This talk explores the collaboration between Japan's Digital Twin initiatives and the global OpenStreetMap community. Since 2022, Japan's Digital Twin data, following the ODbL license, has been integrated with OpenStreetMap using specially developed tools. This integration aims to promote the global adoption of 3D city models, enhancing urban development through the synergy of Digital Twin technologies and OpenStreetMap.
“The MAGDA Project: Integration of GNSS, Sentinel, Meteodrone, and In-Situ Observations for Weather Warnings and Irrigation Advisories in Agriculture”
Eugenio Realini;
General Track
The Meteorological Assimilation from Galileo and Drones for Agriculture (MAGDA) project, funded by EUSPA in the framework of the Horizon Europe program, aims to develop a comprehensive toolchain for atmosphere monitoring, weather forecasting, and advisory services related to severe weather, irrigation, and crop monitoring. By integrating GNSS, Copernicus Sentinel, Meteodrone, ground-based weather radar, and in-situ weather and soil observations into open source weather and hydrological models, MAGDA seeks to provide valuable information to agricultural operators. Measured data, model results, and warnings/advisories are delivered to farmers through a dedicated dashboard or by interfacing with existing Farm Management Systems. The technical and methodological components developed within MAGDA will form the basis for services supporting agricultural operations.
The project is based on the concept that continuous monitoring, combined with advanced prediction models, is essential for effective resource management. As extreme weather events such as droughts and heatwaves become more frequent due to climate change, farmers need to leverage technology to mitigate disasters, conserve resources, and enhance productivity. A system that can automatically collect and process measurements of key parameters significantly reduces economic losses, and when these data are presented clearly and usefully to end users, they can greatly enhance agricultural efficiency.
MAGDA unites seven partners from seven European countries (Austria, France, Italy, Romania, Spain, The Netherlands, and Switzerland) and is inherently interdisciplinary, drawing on expertise from various sectors to develop a system tailored to agricultural meteorological and hydrological forecasts.
The selected demonstrator areas in Italy (Cuneo), France (Burgundy), and Romania (Braila) target different crops and allow different user needs and feedback to be gathered through direct interactions with farmers. Deployment includes nine low-cost, dual-frequency GNSS stations, fifteen low-cost in-situ sensor stations, and three Meteobases from which meteodrones are flown.
Severe weather cases were identified to test the open source WRF meteorological model's performance: in Italy, the focus was on rainfall events, while in France and Romania, hail events were prioritized. Water balance simulations were conducted to support an operational irrigation advisory service, using the open source SPHY hydrological model across the pilot areas in France, Italy, and Romania.
All data used in the MAGDA project are open for research applications, and the GNSS processing software utilized in this project leverages the goGPS open source software. The MAGDA dashboard for result visualization uses Leaflet as a web mapping tool and OpenStreetMap data as a background layer. The results presented here are derived from the currently ongoing MAGDA demonstrators, showcasing the project's impact on weather forecasting and water management for agricultural operations.
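The dashboard itself is a Leaflet application in JavaScript; purely as an illustration of the Leaflet-plus-OpenStreetMap setup mentioned above, this Python sketch builds an equivalent map with the folium wrapper. The coordinates approximate the Cuneo demonstrator area and the station marker is hypothetical.

```python
# Illustrative Python analog (via folium) of a Leaflet map with an
# OpenStreetMap background layer, as used by the MAGDA dashboard.
import folium

m = folium.Map(location=[44.39, 7.55], zoom_start=9, tiles="OpenStreetMap")
folium.Marker(
    [44.39, 7.55],
    tooltip="Hypothetical GNSS / in-situ station near Cuneo",
).add_to(m)
m.save("magda_dashboard_sketch.html")  # open the HTML file in a browser
```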
“The QGIS Shredder Plugin Inspired by Banksy’s Shredder: Sustainable shredder with no waste”
Naoya Nishibayashi;
General Track
In October 2018, the art world witnessed an unprecedented event when Banksy’s “Girl with Balloon” partially shredded itself immediately after being auctioned at Sotheby’s. This bold act challenged traditional perceptions of art, value, and the role of the artist. Drawing inspiration from this event, I developed a unique QGIS plugin that shreds layers.
This plugin shreds the input data into pieces and is implemented with simple pyQGIS API calls.
It works with either vector or raster layers, and the fineness of the shredding can be configured.
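A minimal sketch of how such shredding could be implemented with pyQGIS is shown below, assuming a polygon vector layer and execution inside QGIS; this is not the plugin's actual source, and the strip-slicing approach is an assumption.

```python
# Hypothetical pyQGIS sketch: "shred" a polygon layer by clipping its
# features to vertical strips. Not the plugin's actual implementation.
from qgis.core import QgsFeature, QgsGeometry, QgsRectangle, QgsVectorLayer

def shred_layer(layer: QgsVectorLayer, n_strips: int = 20) -> QgsVectorLayer:
    ext = layer.extent()
    strip_w = ext.width() / n_strips  # the "fineness" of the shredding
    out = QgsVectorLayer(f"Polygon?crs={layer.crs().authid()}", "shreds", "memory")
    out.dataProvider().addAttributes(layer.fields().toList())
    out.updateFields()
    pieces = []
    for i in range(n_strips):
        strip = QgsGeometry.fromRect(QgsRectangle(
            ext.xMinimum() + i * strip_w, ext.yMinimum(),
            ext.xMinimum() + (i + 1) * strip_w, ext.yMaximum()))
        for feat in layer.getFeatures():
            clipped = feat.geometry().intersection(strip)
            if not clipped.isEmpty():
                piece = QgsFeature(layer.fields())
                piece.setAttributes(feat.attributes())
                piece.setGeometry(clipped)
                pieces.append(piece)
    out.dataProvider().addFeatures(pieces)
    return out
```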
This plugin may seem useless at first glance, but it can be used when you want to mess up your data, when you have created data that you cannot show to others, when you just want to relieve stress, or when you want to feel like Banksy.
Most importantly, while shredding physical documents produces waste, shredding data creates no physical waste, making it a truly sustainable practice.
“The Relationship between PM2.5 and Solar Cell Electricity Generation Using Aerosol Optical Depth (AOD)”
Sunattha Lalaeng;
Academic Track (Oral)
This study aims to analyze the relationship between PM2.5 concentrations, derived from Aerosol Optical Depth (AOD), and solar power generation at a solar farm owned by A. Co., Ltd. (alias) in Samut Prakan Province, Thailand. The research utilizes PM2.5 data from pollution monitoring stations of the Pollution Control Department, AOD data from the MCD19A2.061 product, and 2022 solar power generation data from the Electricity Generating Authority of Thailand. The results indicate a negative correlation between PM2.5 concentrations and solar power generation in the summer season (r = -0.7): as PM2.5 levels increase, solar power generation decreases. A regression equation used for power prediction achieved an accuracy of R² = 0.97. In contrast, a positive correlation (r = 0.6) is observed during the winter season, indicating that solar power generation increases with PM2.5 levels, with a prediction accuracy of R² = 0.93. No significant correlation is found during the rainy season, possibly because other factors dominate. Predictions of solar power generation in other areas should account for the physical factors unique to each location.
Keywords: PM2.5, Aerosol Optical Depth, Solar Power, Solar Farm
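Purely as an illustration of the seasonal analysis described above, the following Python sketch computes per-season Pearson correlations and simple predictive regressions; the file and column names are hypothetical, not the study's actual data.

```python
# Illustrative per-season correlation and regression between PM2.5 and
# solar generation. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("pm25_solar_2022.csv")  # columns: date, season, pm25, gen_kwh
for season, grp in df.groupby("season"):
    r, p = stats.pearsonr(grp["pm25"], grp["gen_kwh"])
    fit = stats.linregress(grp["pm25"], grp["gen_kwh"])  # gen_kwh ~ a*pm25 + b
    print(f"{season}: r = {r:.2f} (p = {p:.3f}), prediction R^2 = {fit.rvalue**2:.2f}")
```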
“USING FOSS4G TOOLS WITH RDNDVI TECHNIQUES TO ANALYZE FLOOD HAZARD IN TROPICAL SE ASIA AREA AT WANG THONG RIVER BASIN, PHITSANULOK, THAILAND.”
Kittituch Naksri | Chaiwiwat Vansarochana;
General Track
This study evaluates the use of HazMapper, a free disaster mapping application developed on Google Earth Engine. The tool allows users to create maps and GIS products from Sentinel or Landsat datasets without the high time and cost usually required for traditional analysis.
The initial design of the HazMapper program used indicators based on the Normalized Difference Vegetation Index (NDVI). Specifically, it introduced the relative difference NDVI (rdNDVI) to identify areas where vegetation was removed after natural disasters. Because these indicators rely on vegetation, HazMapper is unsuitable for desert or polar regions but is well suited to tropical areas.
The rdNDVI indicator was applied to the same area for different years, and the mean absolute error (MAE) of the results was compared to test the effectiveness of the HazMapper model when applied to flooded areas.
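A hedged numpy sketch of the rdNDVI idea follows; the exact formula HazMapper uses may differ, so the relative-difference form below, (NDVI_pre − NDVI_post)/NDVI_pre × 100, is an assumption, as are the synthetic reflectance values.

```python
# Sketch of NDVI and a relative-difference NDVI (rdNDVI) between two dates.
# The rdNDVI formula here is an assumed common form, not HazMapper's exact one.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    # Standard NDVI; the small epsilon guards against division by zero.
    return (nir - red) / (nir + red + 1e-9)

def rdndvi(ndvi_pre: np.ndarray, ndvi_post: np.ndarray) -> np.ndarray:
    # Positive values flag vegetation loss after the event (e.g., flooding).
    return (ndvi_pre - ndvi_post) / np.clip(ndvi_pre, 1e-9, None) * 100.0

# Synthetic pre-/post-event NIR and red reflectances for a 2x2 scene:
pre = ndvi(np.array([[0.5, 0.6], [0.55, 0.4]]), np.array([[0.1, 0.1], [0.12, 0.2]]))
post = ndvi(np.array([[0.2, 0.6], [0.25, 0.4]]), np.array([[0.15, 0.1], [0.2, 0.2]]))
print(rdndvi(pre, post))  # large positive cells = likely flood-affected
```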
“Using Open Source 3D Geospatial in Large-Scale Chemical Incident Assessment”
Hakjoon Kim, Geonhee Jo, Jinhun Kim;
Poster Presentations
We introduce a research case in which a large-scale chemical accident, which cannot be realistically staged or tested in the offline real world, was simulated and evaluated in a three-dimensional virtual space.
“Vector tiles cartography for Asia”
Nicolas Bozon;
General Track
Vector tiles are changing the way we create maps. Client-side rendering offers endless possibilities to the cartographer and has introduced new map design tools and techniques. Let’s explore an innovative approach to modern cartography based on simplicity and a comprehensive vector tiles schema. Take a visual tour of vector tiles cartography, and learn how map design can be adapted for an Asian audience.
“Visualizing and Managing Smart Grids with Geospatial Big Data: The SEMS Approach”
Venkata Satya Rama Rao Bandreddi;
General Track
Geospatial big data plays a pivotal role in the context of smart grids, revolutionizing the way modern electrical grids are monitored, managed, and optimized. Smart grids integrate advanced sensing, communication, and control technologies to enhance the efficiency, reliability, and sustainability of electricity distribution. While the locational information of smart meters is pivotal, consumption patterns combined with other information such as consumer type, land use, and local weather conditions can substantially enhance the assessment of energy requirements and usage, leading to a Spatial Energy Management System (SEMS). This significantly enriches location-aware decision-making, real-time monitoring, and predictive analysis, improving energy resource optimization and supporting the goal of SDG-7. The SEMS web application represents a critical advancement, facilitating dynamic visualization of temporal clusters across energy sources and their evolution over time. By overlaying clusters across different time periods, utility personnel gain insights into customer categorization versus actual utilization, enabling understanding of demand fluctuations, outages and faults, fault localization within the electric network, and demand-generation assessment of renewable energy.
The primary objective is the development of a dynamic web-based SEMS application capable of visualizing temporal changes and consumption patterns, while also providing alerts to facilitate proactive management.
The initial phase of the methodology focuses on the Jeedimetla region in Hyderabad, leveraging real-world data encompassing 6,000 households categorized into residential, commercial, and industrial segments. Quantum Geographic Information System (QGIS) software is employed to establish the electric network. To store the spatial and temporal consumption data, the data storage infrastructure employs PostgreSQL, enhanced with the PostGIS extension. This combination is selected due to its robust capabilities in accommodating and managing complex datasets. Additionally, a comprehensive and refined data model has been established to serve as the framework for storing Advanced Metering Infrastructure (AMI) data. Spatial data retrieved from the PostgreSQL database is exposed through a Web GIS Server (GeoServer) as a Web Feature Service, facilitating its integration into applications. This spatial information is utilized within the OpenLayers Software Development Kit (SDK) to visually render data within a JavaScript-based application environment. Non-spatial data is accessed through Application Programming Interfaces (APIs) integrated within a Node server, operating as REST endpoints. Concurrently, the React JavaScript library is employed to present this non-spatial information to end-users in an interactive format.
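As a hedged sketch of the service layer described above, the following Python snippet requests features from a GeoServer Web Feature Service as GeoJSON, as the OpenLayers client would; the endpoint URL and layer name are hypothetical.

```python
# Minimal sketch of pulling spatial features from a GeoServer WFS endpoint.
# The URL and the "sems:smart_meters" layer name are illustrative assumptions.
import requests

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "sems:smart_meters",   # hypothetical published layer
    "outputFormat": "application/json",
    "count": 100,
}
resp = requests.get("http://localhost:8080/geoserver/wfs", params=params, timeout=30)
resp.raise_for_status()
features = resp.json()["features"]      # GeoJSON features for the map view
print(len(features), "meters fetched")
```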
The SEMS Geospatial Visualization Engine comprises three key components. The first component features two primary views: the Basic View, which presents hourly usage consumption data, and the Combined View, integrating map and graph interfaces. This combined view allows users to select specific time periods, customer locations, transformers, or feeders, with the graph view displaying corresponding hourly consumption data. Both views facilitate the visualization of consumption patterns for customer classes or types, which are pre-defined in the system.
The second component of the SEMS Visualization Engine aggregates data at the daily level, classifying users into low, medium, and high consumption categories. These classifications are visually represented on the map view, providing insights into consumption trends across the study area.
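A brief, hypothetical pandas sketch of this daily aggregation and low/medium/high classification follows; the file and column names are assumptions, and a production system might use fixed utility thresholds rather than the terciles used here.

```python
# Sketch of daily aggregation of AMI readings and low/medium/high
# classification. File, table, and column names are assumptions.
import pandas as pd

readings = pd.read_csv("ami_hourly.csv", parse_dates=["ts"])  # hypothetical export
daily = (readings
         .groupby(["meter_id", readings["ts"].dt.date])["kwh"]
         .sum()
         .reset_index(name="daily_kwh"))
# Tercile-based classes; fixed thresholds could be substituted.
daily["class"] = pd.qcut(daily["daily_kwh"], 3, labels=["low", "medium", "high"])
print(daily.head())
```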
The third component integrates weather data sourced from an open-source Weather API with customer energy consumption data. Weather information is stored in the PostgreSQL database, and the combined view enables the graphical representation of weather data changes alongside energy consumption data. This integration enhances understanding of how weather fluctuations impact customer energy usage patterns.
By leveraging advanced sensing, communication, and control technologies, the three components of the SEMS Geospatial Visualization Engine framework provide a comprehensive platform for understanding and visualizing energy consumption and demand. The spatial view aids in developing spatial clusters, which assist electric utility personnel in comprehending consumption patterns and energy demand requirements for specific areas, facilitating efficient management of power shortage scenarios.
The SEMS visualization engine supports various utility use cases, including energy loss identification, detection of energy overuse, and dynamic reclassification of users based on current consumption data. As a prospective avenue for future research, integrating this Geospatial Visualization Engine framework with machine learning-based prediction models holds promise for forecasting energy consumption and dynamically classifying users. Such advancements are anticipated to further empower utility personnel in decision-making, real-time monitoring, and predictive analytics within smart grid environments.
“ZOO-Project - OGC API - Processes - Introduction”
Gérald Fenoy;
Workshop Proposals
The ZOO-Project will first be presented, along with details about OGC API - Processes Part 1: Core. Participants will then learn how to set up the ZOO-Kernel and get an OGC API - Processes server running in a few simple steps. Some basic services will be presented to the attendees so that they can reuse them later in their own applications. They will then learn how to develop a simple service in the Python language through simple programming exercises. A ready-to-use client will be used to interact with the available OGC API - Processes services and the one to be developed. Participants will finally learn how to chain the existing services using the server-side JavaScript ZOO-API.
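As a taste of the exercises, here is a minimal Python service of the kind developed in the workshop, following the (conf, inputs, outputs) convention of the ZOO-API; the service name and its input/output identifiers are illustrative, and the matching .zcfg metadata file is omitted.

```python
# Minimal ZOO-Project Python service sketch. The ZOO-Kernel calls the function
# with three dicts (main configuration, inputs, outputs); the identifiers used
# here ("name", "result") are illustrative and must match the service's .zcfg.
import zoo  # module provided by the ZOO-Kernel at runtime

def Hello(conf, inputs, outputs):
    name = inputs["name"]["value"]
    outputs["result"]["value"] = f"Hello, {name}!"
    return zoo.SERVICE_SUCCEEDED
```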
“ZOO-Project: news about the Open Source Generic Processing Engine”
Gérald Fenoy;
General Track
The ZOO-Project is an open-source processing platform released under the MIT/X11 Licence. It provides the polyglot ZOO-Kernel, a server implementation of the Web Processing Service (WPS) standard (versions 1.0.0 and 2.0.0) and of the OGC API - Processes standard published by the OGC. It includes the ZOO-Services, a minimal set of ready-to-use services that can serve as a base for creating more useful ones. It provides the ZOO-API, initially available only for services implemented in JavaScript, which exposes ZOO-Kernel variables and functions to the language used to implement a service. It also includes the ZOO-Client, a JavaScript API that can be used from a client application to interact with a WPS server.