“Accident SmartBot : An AI Chatbot-Powered Smart Map for Accident Reporting System”
Kampanart Piyathamrongchai;
Academic Track (Oral)
The research project “Accident SmartBot: An Intelligent Map System for Reporting Road Accidents via an AI Chatbot” develops a web-based analytical platform designed to improve access to road accident information in Thailand. The system integrates artificial intelligence (AI), geographic information systems (GIS), and natural language processing (NLP) to allow users to interact with accident data through everyday language. A dataset covering six years and totaling 134,500 records was collected from national accident reports and stored in MongoDB using the GeoJSON format to support spatial queries. The backend employs n8n workflow automation in combination with Google’s Gemini AI to interpret user queries, extract search parameters, and generate meaningful responses. The frontend uses Leaflet.js to present interactive maps that visualize accident locations with markers and clustering techniques, enabling users to explore spatial patterns intuitively.
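The abstract does not include code; purely as an illustration of how GeoJSON accident records might be queried spatially and summarized in MongoDB, a minimal sketch is given below. The collection, field names, coordinates, and values are assumptions, not the project’s actual schema.

```python
# Hypothetical sketch: spatial query and per-cause summary over GeoJSON accident records.
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")
accidents = client["smartbot"]["accidents"]

# A 2dsphere index enables spatial operators on the GeoJSON "location" field.
accidents.create_index([("location", GEOSPHERE)])

# Accidents within ~5 km of a point of interest (radius given in radians).
nearby = accidents.find({
    "location": {"$geoWithin": {"$centerSphere": [[100.26, 16.82], 5 / 6378.1]}}
})

# Summary statistics by cause for one province and year, as a chatbot query might request.
pipeline = [
    {"$match": {"province": "Phitsanulok", "year": 2023}},
    {"$group": {"_id": "$cause", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in accidents.aggregate(pipeline):
    print(row["_id"], row["count"])
```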
Users can retrieve information by specifying provinces, time periods, or causes of accidents, and the system provides both summary statistics and detailed spatial distributions. Additionally, the chatbot can interpret follow-up questions, making the interaction conversational rather than menu-driven. This enhances accessibility for general users, government agencies, and researchers who require quick situational awareness without advanced GIS expertise. The study demonstrates that combining AI-driven dialogue with GIS-based visualization can significantly improve the usability of accident analysis systems. Accident SmartBot reflects an innovative approach to leveraging emerging technologies to support traffic safety management, accident prevention planning, and public awareness in Thailand.
“Advanced PyQGIS Scripting and Automation”
Vigna Purohit;
Workshop Proposals
PyQGIS combines the accessibility of QGIS with the power of Python scripting, opening possibilities that extend far beyond point-and-click operations. Whether processing hundreds of datasets with consistent parameters, building custom analytical tools for repeated use, or generating automated reports with maps and statistics, PyQGIS transforms hours of manual work into minutes of automated execution.
This workshop focuses on building production-ready automation systems for operational GIS workflows. It demonstrates how to architect complete automation pipelines that integrate multiple analytical components, generate professional reports, and handle real-world complexity. The workshop progresses from fundamentals to advanced automation. Participants begin with PyQGIS essentials like accessing layers, filtering features, and manipulating geometries programmatically. We then advance to practical automation scenarios including batch processing of multiple datasets, standardizing diverse data sources, and applying consistent transformations across large file collections. Participants will learn to build reusable class-based tools enabling them to create maintainable automation solutions for their organizations.
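As a flavour of the fundamentals listed above, a minimal PyQGIS sketch is shown below (run inside the QGIS Python console); layer names, the filter expression, and file paths are illustrative assumptions rather than workshop materials.

```python
# Minimal PyQGIS sketch: access a layer, filter features by expression, batch-run a tool.
import glob
from qgis.core import QgsProject, QgsFeatureRequest
import processing

layer = QgsProject.instance().mapLayersByName("parcels")[0]

# Filter features programmatically instead of clicking through the attribute table.
request = QgsFeatureRequest().setFilterExpression('"landuse" = \'residential\'')
for feature in layer.getFeatures(request):
    print(feature.id(), feature.geometry().area())

# Apply the same buffer to every dataset in a folder (simple batch processing).
for path in glob.glob("/data/input/*.gpkg"):
    processing.run("native:buffer", {
        "INPUT": path,
        "DISTANCE": 100,
        "OUTPUT": path.replace("input", "output"),
    })
```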
This approach is invaluable for GIS professionals handling repetitive analytical tasks, researchers requiring reproducible workflows, government agencies generating periodic reports, and organizations processing large volumes of spatial data. Participants learn not just to write scripts, but to architect complete automation solutions that save time, ensure consistency, and enable analyses that would be impractical through manual methods.
“Advancing Flood Prediction Lead Time: Automated Parallel Data Assimilation, Modelling, and Visualization (Open-Source Framework)”
Girishchandra Y;
Academic Track (Oral)
Floods are among the most frequent and devastating natural disasters, posing significant challenges to communities, economies, and ecosystems worldwide. With the growing intensity of extreme rainfall events due to climate change, enhancing the accuracy and lead time of flood prediction has become an urgent scientific and societal priority. Traditional flood forecasting frameworks are often constrained by sequential data processing, manual data assimilation, and limited computational capacity—factors that hinder real-time decision-making and timely early warning dissemination.
This project proposes an open-source, automated, and parallelized framework designed to revolutionize flood forecasting by integrating automated data assimilation, high-performance modelling, and advanced visualization into a unified system. The proposed solution emphasizes transparency, scalability, and accessibility, leveraging the power of open data and open technologies to advance flood prediction science.
The first component, Automated Data Assimilation, will integrate multi-source datasets—such as satellite precipitation, radar rainfall estimates, river discharge, and soil moisture—into a unified data stream. Open-source tools like GDAL will be used to automate ingestion and preprocessing. This ensures continuous and up-to-date inputs to the modeling system with minimal human intervention.
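A minimal sketch of what such GDAL-based ingestion could look like is shown below; the paths, grid spacing, and EPSG code are assumptions chosen only to illustrate reprojecting incoming rasters onto a common grid.

```python
# Illustrative sketch: harmonize incoming rasters (e.g., satellite precipitation grids)
# onto a common grid with GDAL before assimilation.
import glob
from osgeo import gdal

for src_path in glob.glob("/data/incoming/*.tif"):
    dst_path = src_path.replace("incoming", "assimilation_ready")
    gdal.Warp(
        dst_path,
        src_path,
        dstSRS="EPSG:4326",      # common reference system for the data stream
        xRes=0.01, yRes=0.01,    # harmonized grid spacing (~1 km)
        resampleAlg="bilinear",
        format="GTiff",
    )
```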
The second component, Parallel Hydrodynamic Modelling, will utilize high-performance computing (HPC) architectures to execute large-scale flood simulations efficiently. Open-source solvers such as ANUGA Hydro will be employed for physics-based flood modeling, supported by distributed computing frameworks like MPI4Py for parallel execution. This parallelization approach will substantially reduce computation time and extend the spatial-temporal coverage of forecasts, improving prediction lead time.
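As a sketch of the parallel-execution idea, the snippet below distributes many simulation runs across MPI ranks with mpi4py; the scenario names and the solver wrapper are placeholders, not part of the proposed framework.

```python
# Sketch: static distribution of flood-simulation runs across MPI ranks with mpi4py.
from mpi4py import MPI

def run_simulation(name):
    # Placeholder for a call into the chosen hydrodynamic solver (e.g., an ANUGA script).
    return {"scenario": name, "status": "done"}

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

scenarios = [f"subcatchment_{i}" for i in range(48)]
my_scenarios = scenarios[rank::size]          # each rank takes every size-th scenario
local_results = [run_simulation(s) for s in my_scenarios]

# Gather summaries on rank 0 for post-processing and visualization.
all_results = comm.gather(local_results, root=0)
if rank == 0:
    print("completed", sum(len(r) for r in all_results), "simulations")
```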
The third component, Visualization and Dissemination, will focus on transforming model outputs into actionable insights through open-source GIS and web technologies. Tools such as OpenLayers, GeoServer, and PostGIS will enable real-time visualization of inundation maps, hydrographs, and flood risk zones through a user-friendly web dashboard accessible to decision-makers and the public.
The framework will operate as a fully automated pipeline, ensuring daily data assimilation, model execution, and visualization updates.
By integrating automation, parallelism, and open-source technologies, this project aims to enhance flood prediction lead times, reduce dependency on proprietary systems, and foster reproducible research. Ultimately, the proposed framework will empower agencies, researchers, and communities to make faster, data-driven decisions in flood risk management and resilience planning.
“AI Guided SAR Remote Sensing: From Theory to Intelligent Applications”
Yogesh Regmi, Ashok Thakulla;
Workshop Proposals
Synthetic Aperture Radar (SAR) remote sensing provides powerful capabilities for monitoring Earth’s surface under all weather conditions. However, its data complexity and interpretation challenges often hinder large-scale operational use. This workshop introduces an open-source framework for AI-Guided SAR Remote Sensing, demonstrating how Artificial Intelligence and Large Language Models (LLMs) can accelerate SAR data processing, enhance feature detection, and find applications in sectors such as agriculture, forestry, and disaster risk reduction.
Participants will learn to integrate Python-based tools such as GDAL, Rasterio, GeoPandas, NumPy, and PyTorch to automate SAR workflows for applications like flood mapping, landslide monitoring, and deformation analysis. Emphasizing both theory and practical implementation, the workshop empowers participants to build intelligent, reproducible pipelines that combine open geospatial software and machine learning for next-generation SAR analytics.
The workshop will begin with an introduction to remote sensing (optical and microwave) and the properties of SAR data, followed by SAR preprocessing (multilooking, image co-registration, filtering, and geocoding). We will then move into Python and deep learning and, drawing on this SAR background, generate results using Python code obtained from LLMs (ChatGPT and DeepSeek).
By the end of the session, participants will:
• Understand how AI and LLMs can be used to generate Python code for SAR data processing.
• Gain hands-on experience in reading and visualizing SAR imagery and in using metadata to preprocess and further process SAR data into results.
• Build and test a simple AI-driven SAR feature extraction workflow.
• Access ready-to-use Python notebooks and an open GitHub repository for further experimentation.
• Learn reproducible, scalable methods for integrating AI with open geospatial technologies.
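As a taste of the hands-on portion described in the bullets above, a minimal sketch of reading and visualizing a pre-processed SAR backscatter raster with Rasterio is given below; the file name and display range are assumptions for illustration only.

```python
# Minimal sketch: read a geocoded SAR backscatter GeoTIFF and display it in decibels.
import numpy as np
import rasterio
import matplotlib.pyplot as plt

with rasterio.open("sar_vv_geocoded.tif") as src:
    sigma0 = src.read(1).astype("float32")
    profile = src.profile  # metadata: CRS, transform, nodata, etc.

# Convert linear backscatter to dB, masking non-positive values first.
sigma0_db = 10 * np.log10(np.where(sigma0 > 0, sigma0, np.nan))

plt.imshow(sigma0_db, cmap="gray", vmin=-25, vmax=0)
plt.title("SAR VV backscatter (dB)")
plt.colorbar(label="sigma0 [dB]")
plt.show()
```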
“AI-Driven Cloud-Native Decision Support System for Near-Real-Time Deforestation Monitoring in Manipur Using Sentinel-2 and Attention U-Net”
Nilay Nishant;
Academic Track (Oral)
Deforestation is a critical environmental challenge across the globe, particularly in ecologically fragile regions where forest ecosystems sustain biodiversity, regulate climate dynamics, and support socio-economic livelihoods. The North-Eastern region of India, and specifically the state of Manipur, is one such hotspot where steep mountainous terrain, dispersed settlements, shifting cultivation practices, timber extraction, and anthropogenic pressures contribute to rapid and spatially fragmented forest loss. Despite the availability of continuous satellite Earth-observation missions such as the Sentinel-2 and Landsat series, converting raw multi-temporal satellite imagery into actionable deforestation intelligence remains a non-trivial task due to persistent cloud cover, sparse field-validated datasets, and delayed dissemination of monitoring outputs to decision-makers. Traditional vegetation change detection approaches rely heavily on annual land-cover products or handcrafted indices that lack the sensitivity to detect subtle and small-patch disturbances, which often characterize early-stage forest degradation. In response to these operational bottlenecks, this study proposes a comprehensive AI-driven, cloud-native decision support framework for automated deforestation detection, model inference, spatio-temporal analysis, and near-real-time alert dissemination tailored to Manipur’s unique forest landscape.
The proposed framework integrates weakly-supervised deep learning, cloud-optimized satellite data pipelines, and open-source WebGIS technologies to create an end-to-end system capable of monitoring deforestation with high temporal frequency and operational scalability. Multi-temporal Harmonized Sentinel-2 Level-2A imagery (2019-2024) was used to create NDVI-based weak labels by differencing annual median composites and applying a threshold of >0.25 NDVI decline to identify candidate forest-loss regions. Morphological spatial filtering and minimum area constraints (≥0.1 ha) were employed to eliminate noise and artifacts, generating cost-effective yet informative training masks in the absence of extensive ground truth. Spatial patches of 128 × 128 pixels extracted from multispectral and index-enhanced stacks were used to train two convolutional neural network architectures: a standard U-Net and an Attention-U-Net variant. The models were trained using BCE-Dice loss and evaluated using accuracy, F1-score, IoU, and precision-recall metrics. Attention-U-Net, which incorporates feature-refinement attention gates, demonstrated superior ability to delineate boundaries of disturbed patches, achieving an accuracy of 92% and F1-score of 0.89, outperforming the baseline U-Net.
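The weak-labelling step described above can be sketched roughly as follows; the file names and the 10 m pixel size are assumptions, and the thresholds simply restate the abstract’s >0.25 NDVI decline and ≥0.1 ha minimum area.

```python
# Sketch: difference annual median NDVI composites, flag >0.25 declines, drop patches < 0.1 ha.
import numpy as np
import rasterio
from scipy import ndimage

with rasterio.open("ndvi_median_2019.tif") as a, rasterio.open("ndvi_median_2024.tif") as b:
    ndvi_before = a.read(1).astype("float32")
    ndvi_after = b.read(1).astype("float32")
    profile = a.profile

candidate = (ndvi_before - ndvi_after) > 0.25   # candidate forest-loss pixels

# Minimum mapping unit: 0.1 ha = 10 pixels at an assumed 10 m resolution.
labels, n = ndimage.label(candidate)
sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))
keep = np.isin(labels, np.flatnonzero(sizes >= 10) + 1)

profile.update(dtype="uint8", count=1)
with rasterio.open("weak_labels.tif", "w", **profile) as dst:
    dst.write(keep.astype("uint8"), 1)
```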
Operational deployment of the best-performing model was achieved through a containerized, cloud-native inference pipeline capable of automatic querying of Spatio-Temporal Asset Catalog (STAC) endpoints hosted on Google Earth Engine and Microsoft Planetary Computer. The pipeline ingests newly acquired Sentinel-2 scenes, performs spectral index generation, normalizes spatial layers, tiles raster imagery, executes inference, mosaics prediction outputs, and clips them to reserve forest boundaries supplied by the Manipur Forest Department. Output rasters are stored as Cloud-Optimized GeoTIFFs and indexed in a PostgreSQL/PostGIS spatial database, ensuring compatibility with visualization services and analytics tools. The end-to-end latency from satellite overpass to alert dissemination is <4 hours, demonstrating operational readiness for rapid-response patrol planning. During a three-month pilot, no pipeline failures were recorded; in cases of temporary network issues, automated retry mechanisms ensured uninterrupted data flow, confirming the system’s robustness and fault tolerance.
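A minimal sketch of the automatic STAC querying step is shown below using pystac-client against the Planetary Computer catalog; the bounding box, dates, and cloud-cover filter are examples, and asset-URL signing (via the planetary-computer package) is omitted for brevity.

```python
# Illustrative STAC search for new Sentinel-2 L2A scenes over the area of interest.
from pystac_client import Client

catalog = Client.open("https://planetarycomputer.microsoft.com/api/stac/v1")

search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[93.0, 23.8, 94.8, 25.7],          # approximate extent of Manipur (example)
    datetime="2024-01-01/2024-03-31",
    query={"eo:cloud_cover": {"lt": 20}},
)

for item in search.items():
    print(item.id, item.properties["eo:cloud_cover"])
```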
To bridge the last mile between geospatial analytics and field-level governance, a WebGIS-based Decision Support System (DSS) and a companion React-Native mobile application were developed. A multi-tier architecture integrating GeoServer, PHP-based REST APIs, PostgreSQL/PostGIS, and React ensures interactive visualization, download capabilities, authentication, and monitoring of user activities. Real-time Firebase push notifications enable jurisdiction-specific alerts to reach Divisional Forest Officers instantly, while analytical dashboards summarize trends, hotspots, and proximity patterns relative to roads and settlements. Field teams can verify alerts through geotagged incident reporting modules even in offline environments, closing the feedback loop essential for continuous model improvement. Spatio-temporal analysis of the detected forest-loss patterns revealed peak disturbances during 2021–2022, with cumulative loss exceeding 220,000 ha (~6% of Manipur’s total forest area). The most severely affected districts include Tamenglong, Churachandpur, Pherzawl, and Kamjong, collectively contributing more than 70% of annual loss, with seasonal pulses observed from January to March.
This study demonstrates that a fully open-source and cloud-native pipeline, combining weakly supervised CNN architectures and operational WebGIS dissemination, can effectively deliver near-real-time alerts with high accuracy, thus strengthening forest governance in data-scarce regions. The results confirm the feasibility of integrating AI-based remote sensing with administrative workflows to support rapid patrol deployment, transparency, and evidence-based policy intervention. Future work will explore Sentinel-1 SAR fusion for monsoon-season monitoring, automated continual learning incorporating field feedback, and expansion from binary loss detection to driver-level attribution (logging, shifting cultivation, wildfire).
“An Open-Source Intelligence-Based Geospatial Approach to Predictive Modelling of Avian Influenza”
Mehak Jindal;
Academic Track (Oral)
Emerging infectious diseases pose growing challenges to public health, agriculture, and biodiversity. Avian Influenza (H5N1) has caused recurrent outbreaks across Europe in recent years, affecting both wild birds and poultry populations. These outbreaks highlight the urgent need for an innovative approach to predictive modelling that can integrate ecological, climatic, and socio-geographic factors to anticipate where and when risks are most likely to occur. Traditional surveillance methods often rely on retrospective case reporting and descriptive analyses, which provide limited capacity for early warning. In contrast, geospatial approaches enable the integration of diverse environmental and demographic drivers within models that capture the spatial heterogeneity and temporal dynamics of outbreaks.
The primary aim of this study is to develop and demonstrate a reproducible geospatial modelling workflow for predicting avian influenza outbreaks based on data between 2021 and 2025. The objectives are twofold: (1) to evaluate the relative influence of ecological and demographic variables on outbreak risk, and (2) to illustrate how open geospatial data and software can advance transparent, collaborative disease surveillance.
To achieve this, a multi-source dataset was assembled by combining weekly outbreak records from international surveillance systems with environmental and socio-geographic covariates. Potential predictors included remotely sensed indicators such as snow cover and land surface temperature, poultry density, wetland and waterway distributions, and human settlement intensity. All datasets were processed to a consistent spatial and temporal resolution, aggregated using open-source tools.
Exploratory spatial data analysis was conducted to identify clustering and spatial autocorrelation using neighbourhood-based statistics. Spatio-temporal models were then fitted using the endemic–epidemic framework implemented in the surveillance R package, which simultaneously models endemic risk, autoregressive behaviour, and spatio-temporal spread. To refine the predictor set, correlation screening and Variance Inflation Factor (VIF) analysis were applied to remove multicollinearity, followed by stepwise selection using Akaike Information Criterion (AIC) to balance model fit and parsimony. Random forest variable-importance measures were additionally used to identify robust and influential covariates across multiple resampling runs.
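The modelling itself was carried out with the endemic–epidemic framework in the R surveillance package; purely to illustrate the multicollinearity-screening step in generic terms, a Python sketch using statsmodels is shown below, with hypothetical covariate names.

```python
# Illustrative VIF screening: iteratively drop the most collinear covariate above a cutoff.
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

covariates = pd.read_csv("outbreak_covariates.csv")[
    ["snow_cover", "lst", "poultry_density", "wetland_share", "settlement_intensity"]
]

def drop_high_vif(df, threshold=5.0):
    cols = list(df.columns)
    while True:
        vifs = [variance_inflation_factor(df[cols].values, i) for i in range(len(cols))]
        worst = max(range(len(cols)), key=lambda i: vifs[i])
        if vifs[worst] < threshold:
            return df[cols]
        cols.pop(worst)  # remove the most collinear covariate and re-check

selected = drop_high_vif(covariates)
print(selected.columns.tolist())
```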
Preliminary findings indicate a distinct seasonal and geographic pattern in avian influenza outbreaks, with elevated risk in northern and coastal regions during colder months. Environmental conditions such as proximity to wetlands and increased snow cover were consistently associated with outbreak occurrence, while poultry density remained a dominant driver in domestic settings. These results underscore the importance of integrating environmental and demographic factors within predictive frameworks to enhance situational awareness and preparedness.
This study contributes to the growing field of health geographics by integrating open data, spatio-temporal modelling, and machine-learning-based feature evaluation within a unified and reproducible workflow. By focusing on avian influenza as a case study, the research demonstrates how open geospatial analytics can inform early-warning systems and guide targeted surveillance strategies. The proposed approach contributes to improving surveillance and preparedness for avian influenza, while offering transferable insights for predictive analytics of other zoonotic diseases influenced by environmental changes.
“Application of Machine Learning techniques for Landslide Susceptibility Mapping in Northern Vietnam”
Lam Tran Tung;
Academic Track (Oral)
Landslides are ubiquitous in terrestrial environments with slopes, frequently resulting from factors like heavy rainfall or tectonic activity. Landslide hazard assessment is crucial for managing and mitigating associated risks. Landslide Susceptibility Mapping (LSM) provides a practical and cost-effective tool for zoning areas prone to slope failure. This study aims to provide a reliable framework for LSM using Machine Learning (ML) models to generate susceptibility maps in Northern Vietnam’s mountainous regions. The methodology approaches landslide prediction as a binary classification task (landslide/no-landslide).
Van Yen (VY) district was selected as the study area and training area for the ML models, while Mu Cang Chai (MCC) district was subsequently used to validate the models’ generalizability. Landslide points were located through field surveys and optical satellite imagery, and no-landslide points were randomly sampled where failures were not observed. The dataset incorporated 17 contributing factors encompassing topographic, geologic, hydrologic, anthropogenic, and vegetation characteristics.
A Digital Elevation Model (DEM) was used to calculate the Stream Power Index (SPI) and topographic factors including Elevation, Slope, Aspect, Curvature (profile and planform), Terrain Ruggedness Index (TRI), Roughness, and Topographic Wetness Index (TWI). The Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) were calculated from Sentinel-2 images. Lithology, Land Use (LUCL), and buffer maps, including Lithologic Boundary, Faults, Roads, and Rivers, were extracted from the government geologic database. Data points were combined with the influencing factors to create a balanced training dataset of 308 landslide and 308 no-landslide points for the study area.
The ML classification was performed in the Python GeoInformatics Lab Environment-Plus (PyGILE-Plus), which brings together geospatial algorithms from major Geographic Information System (GIS) platforms and key Python machine learning libraries such as scikit-learn. The Probability–Frequency Ratio (FR) method was used to standardize and evaluate the landslide occurrence factors based on their spatial relationship with historical landslide distributions. The feature selection process began with Principal Component Analysis (PCA), a technique used to analyze inter-correlated variables and better visualize their relationships through principal components. Subsequently, a correlation matrix was computed using the Pearson correlation coefficient to identify redundant factors; factors exhibiting a pairwise correlation exceeding 0.75 were excluded. This process ensures model performance is maintained while dataset complexity is reduced. Feature importance analysis, a built-in capability of the Random Forest model, was then used to evaluate the predictive impact of the remaining factors, so that correlated and low-importance factors could be removed without decreasing ML performance.
Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), and Extreme Gradient Boosting (XGBoost) were trained on the VY dataset. Model optimization was achieved through Grid Search hyperparameter tuning combined with 10-fold cross-validation. Model performance was then assessed using accuracy metrics derived from the Confusion Matrix, specifically the Accuracy Score and Kappa Score, which compare ground truth against predicted classifications. The Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC) further measured the models’ capacity to distinguish between the two classes.
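A minimal scikit-learn sketch of this tuning and validation setup is shown below; the synthetic data and the parameter grid are stand-ins for the actual VY/MCC datasets and search space.

```python
# Sketch: Grid Search with 10-fold CV for an RF classifier, then accuracy/Kappa/AUC checks.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=616, n_features=12, random_state=42)  # stand-in data
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.25, random_state=42)

param_grid = {"n_estimators": [200, 500], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, cv=10, scoring="roc_auc", n_jobs=-1)
search.fit(X_train, y_train)

best_rf = search.best_estimator_
y_pred = best_rf.predict(X_valid)
print("Accuracy:", accuracy_score(y_valid, y_pred))
print("Kappa:", cohen_kappa_score(y_valid, y_pred))
print("AUC:", roc_auc_score(y_valid, best_rf.predict_proba(X_valid)[:, 1]))
```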
The results showed that RF and XGBoost demonstrated the highest and most consistent performance across both the training (VY) and validation (MCC) phases. On the training data, SVM scored the highest accuracy of 0.88, followed by XGBoost at 0.84, RF at 0.83, and LR at 0.81. During the validation step on the MCC area, however, XGBoost had the highest accuracy at 0.74, followed by RF and LR both at 0.67, while SVM remained at the bottom with 0.52, potentially due to its limitations in handling the complex, non-linear relationships present in the data.
To validate the learning capability of the ML models, convergence rates of Efficient Global Optimization (EGO), a subset of Bayesian optimization, were examined. The convergence rate indicates how quickly EGO approaches the best possible solution as the number of iterations increases. RF and XGBoost again proved the most capable models, converging at approximately 50 and 150 iterations, respectively.
The Roads factor had the highest value in the feature importance analysis, aligning with field observations that road construction and residential zoning contribute to slope failures. Geological factors, however, showed low influence, suggesting that future inventories will need to be updated with more geological data.
The resulting Landslide Susceptibility Maps (LSMs) visualize the spatial probability of slope failure, with per-pixel probability estimates categorized into five risk levels. These LSMs can be used to reduce accidents and guide urban planning in the area. The workflow will be published and can be applied to other classification tasks. Future research will focus on improved landslide inventory techniques, including Interferometric Synthetic Aperture Radar (InSAR), and will add climate factors such as precipitation, wind, and humidity. It will also explore different global optimization algorithms to better validate the learning capability of ML models.
“Artificial Intelligence and Machine Learning for Agriculture Analytics”
Anil Kumar;
Workshop Proposals
The proposed workshop is designed to address some of the most persistent and technically challenging issues in agricultural remote sensing—particularly the problem of spectral overlap among different crop types, variations in crop growth patterns, and intra-field heterogeneity that complicates the accurate extraction of crop information. With advancements in satellite imaging, AI, and machine learning, there is a growing opportunity to develop robust, scalable solutions for timely and precise agricultural monitoring. Therefore, this tutorial will focus on three key objectives that collectively aim to strengthen the application of AI-augmented remote sensing for agricultural intelligence.
1. Developing an AI-based framework for monitoring crop sowing and harvesting stages using multi-temporal satellite imagery.
Understanding the exact timing of sowing and harvesting is crucial for crop management, seasonal forecasting, and food security planning. However, varying environmental conditions, diverse cropping systems, and inconsistent spectral signatures often make this task complex. The tutorial will introduce a comprehensive AI-driven workflow that utilizes time-series satellite datasets such as Sentinel-2, Landsat-8/9, and commercial high-resolution imagery. By detecting temporal trends and phenological transitions, the framework will enable automated identification of sowing windows, early growth phases, and harvesting events. Special emphasis will be placed on contextual modeling, smoothing noisy time-series signals, and handling cloud contamination.
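To make the smoothing and transition-detection idea concrete, a small sketch with synthetic NDVI values is given below; the idealized curve, noise level, and smoothing parameters are illustrative assumptions, not the tutorial’s calibrated settings.

```python
# Sketch: smooth a noisy NDVI time series and flag green-up/senescence transitions.
import numpy as np
from scipy.signal import savgol_filter

dates = np.arange(0, 240, 10)                               # day of season, 10-day composites
ndvi = 0.2 + 0.5 * np.exp(-((dates - 120) ** 2) / 2500)     # idealized crop curve
ndvi_noisy = ndvi + np.random.normal(0, 0.05, ndvi.size)    # cloud/atmospheric noise

ndvi_smooth = savgol_filter(ndvi_noisy, window_length=7, polyorder=2)

# Simple rule: steepest increase ~ sowing/green-up, steepest decrease ~ harvest.
slope = np.gradient(ndvi_smooth, dates)
print("estimated green-up DOY:", dates[np.argmax(slope)])
print("estimated senescence DOY:", dates[np.argmin(slope)])
```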
2. Implementing an AI-powered machine learning classification approach for accurate estimation of crop acreage using multi-sensor temporal data.
Crop acreage estimation is a critical component of agricultural planning, crop insurance, market forecasting, and policymaking. Conventional pixel-based or rule-based classification techniques often fail when crops exhibit similar spectral reflectance or when fields are highly heterogeneous. The tutorial will demonstrate how advanced machine learning algorithms can leverage multi-temporal and multi-sensor data to improve classification accuracy. Strategies for incorporating SAR data (e.g., Sentinel-1) with optical datasets will be discussed to achieve greater robustness under cloudy or monsoon conditions. Participants will learn how to preprocess multi-source data, extract temporal features, and evaluate classification accuracy for acreage estimation.
3. Integrating machine and deep learning techniques for crop yield estimation using satellite-derived growth metrics and ground-based data.
Yield estimation remains one of the most challenging tasks due to the influence of climate variability, soil properties, and management practices. This section will present methods for integrating satellite-derived vegetation indices and growth metrics with ground-based data for yield modelling. Emphasis will be given to feature engineering, uncertainty quantification, and the development of scalable yield prediction models.
Overall, this tutorial aims to provide participants with a comprehensive understanding of how AI and remote sensing can jointly overcome traditional barriers in crop monitoring, acreage mapping, and yield estimation, ultimately contributing to more accurate and timely agricultural decision-making.
“AutoICE: A Stand-Alone Automated Tool for Glacier Ice-Thickness, Bed Topography, and Volume Estimation”
Navinkumar P J;
Academic Track (Oral)
Glaciers in mountain environments store freshwater on Earth in the form of ice. They remain key contributors to the ongoing sea-level rise, despite representing only a small fraction of the global ice. Accurate information on glacier ice thickness and total ice volume is fundamental to understanding glacier dynamics, predicting their future evolution, and assessing downstream water availability. These datasets also serve as essential inputs for hydrological modelling, hazard assessments, and long-term climate projections. However, producing reliable ice thickness estimates remains challenging, particularly in remote mountain regions where field measurements are scarce and difficult to obtain. Current approaches typically require the integration of diverse remote sensing products, complex glacier models, and extensive manual processing, which limits their reproducibility and scalability.
In practice, existing tools for ice thickness estimation depend on commercial software, lack automation or comprehensive end-to-end solutions, or require advanced programming skills. Such limitations restrict their accessibility and make it difficult for researchers and practitioners to apply them at regional or basin scales. Therefore, a fully automated, open, and user-friendly software framework capable of processing multiple glaciers is required. The main objective of this study was to address this gap. To this end, AutoICE, an open, stand-alone, automated software tool, was developed and designed specifically for mountain glacier ice thickness estimation.
AutoICE implements the newly developed Velocity-based ICe thickness estimation (VoICE) model, which combines remote-sensing-derived surface velocities, glacier-specific shape factors, temperature-dependent creep parameters, and basal sliding ratios to estimate ice thickness without requiring field observations. These parameters are computed directly from satellite-based datasets, allowing the model to operate efficiently in data-scarce regions. The backend algorithm is written in R for numerical processing, whereas the graphical user interface is developed using PyQt5, offering an intuitive workflow suitable for both scientific users and non-specialists. The tool accepts standard geospatial inputs, including DEMs, glacier outlines, cross sections, surface velocities, and land surface temperature.
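The exact VoICE formulation is not given in the abstract; for orientation only, velocity-based approaches of this kind commonly build on the classical laminar-flow relation sketched below, which uses the same ingredients the abstract lists (surface velocity, shape factor, creep parameter, basal sliding ratio). The VoICE model itself may differ in detail.

```latex
% Classical velocity-based thickness relation (illustrative, not necessarily VoICE):
%   u_s : surface velocity, u_b = k u_s : basal sliding with ratio k,
%   A   : temperature-dependent creep parameter, n = 3 (Glen's flow law),
%   f   : valley shape factor, \rho g \sin\alpha : driving-stress term.
u_s - u_b = \frac{2A}{n+1}\,(f\,\rho\,g\,\sin\alpha)^{n}\,H^{n+1}
\quad\Rightarrow\quad
H = \left[\frac{(1-k)\,u_s\,(n+1)}{2A\,(f\,\rho\,g\,\sin\alpha)^{n}}\right]^{\frac{1}{n+1}}
```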
A key feature of AutoICE is its ability to generate comprehensive outputs. The tool produces spatially distributed ice thickness estimates, pixel-wise uncertainty estimates, and bed topography derived from the computed ice thickness. In addition, AutoICE automatically generates individual glacier cartographic maps summarizing the spatial patterns of thickness and bed elevation. The tool also extracts non-spatial glacier metrics, including glacier area, mean thickness, total ice volume, and glacier-specific model parameters used in the calculations. These outputs are stored in standardized Excel formats to facilitate comparisons across glaciers and the reproducibility of the modelling workflow.
To evaluate its performance, AutoICE was applied to the Miyar Basin in the western Himalaya using remote-sensing data from 2023. The tool successfully generated ice thickness and bed topography maps for 54 glaciers, estimating a total basin-wide ice volume of 14.20 ± 4.13 km³. Processing times varied from 30 min to a few hours per glacier on a standard computing system, highlighting the computational efficiency and suitability of the tool for large-scale applications. Validation against available in-situ measurements and published thickness values for four glaciers in High Mountain Asia confirmed that the model provides physically consistent and realistic estimates.
The contribution of this study lies in delivering an integrated, automated, and openly accessible framework that brings together multiple remote-sensing datasets, glacier-specific parameterization, and uncertainty evaluation into a single software environment. AutoICE ensures full reproducibility and enables users to replicate or adapt workflows for other regions. Its independence from commercial GIS software, combined with its automated processing chain and comprehensive output suite, significantly reduces the technical barriers for glaciologists, climate researchers, and practitioners working in high-mountain environments.
In summary, AutoICE advances the field of glacier ice thickness modelling by offering a scalable, transparent, and user-friendly tool capable of supporting glacier assessments within a fully automated framework.
“Bridging Municipal Road Data and OSM: The OSM-Validate Validation Tool”
Nishon Tandukar;
Academic Track (Oral)
OpenStreetMap (OSM) is a critical foundational dataset for urban modeling, but its "fitness-for-use" in rapidly urbanizing nations like Nepal is often an unanswered question. As municipalities produce their own high-precision road data, they discover significant gaps in OSM's completeness and accuracy. However, they lack a simple tool to quantify these discrepancies and guide improvement efforts.
Our foundational research in Nepal highlighted the need for such a tool. We found that data discrepancies are often not random, but are clustered in complex spatial patterns or "hotspots." Identifying these priority areas requires a complex spatial analysis that is a major bottleneck for local governments and planners who are not data scientists.
This talk presents OSM-Validate, a new FOSS tool we developed to solve this problem by operationalizing this complex analysis into a practical utility. It empowers any non-expert municipal planner to replicate the study in minutes. A user simply uploads their authoritative road data, and the tool automatically runs the validation, generating an actionable report that identifies clear areas of focus for a mapathon to correct the data. This report includes a "Missing Segments" map (the "hotspots") and an "Error-Prone Segments" map. This session is a walkthrough of the tool, demonstrating how it provides a practical, data-driven bridge between local governments and the OSM community.
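The kind of check OSM-Validate automates can be sketched in a few lines of GeoPandas, as below; the file names, buffer distance, coverage threshold, and CRS are illustrative assumptions, not the tool’s internals.

```python
# Sketch: flag authoritative road segments that fall largely outside a buffer around OSM roads.
import geopandas as gpd
from shapely.ops import unary_union

municipal = gpd.read_file("municipal_roads.gpkg").to_crs(32645)  # metric CRS (UTM 45N, Nepal)
osm = gpd.read_file("osm_roads.gpkg").to_crs(32645)

osm_zone = unary_union(osm.geometry.buffer(15))  # 15 m tolerance around OSM geometry

def covered_fraction(line):
    return line.intersection(osm_zone).length / line.length if line.length else 1.0

municipal["osm_coverage"] = municipal.geometry.apply(covered_fraction)
missing = municipal[municipal["osm_coverage"] < 0.5]  # mostly unmapped in OSM
missing.to_file("missing_segments.gpkg")
print(len(missing), "candidate missing segments for the mapathon report")
```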
“Building Modern Cloud-Native Geospatial Web Applications using Django, Leaflet, and Cloud-Optimized GeoTIFFs (COGs)”
Nilay Nishant;
Workshop Proposals
Introduction
This hands-on workshop introduces participants to building cloud-native geospatial web applications using modern open-source technologies. Participants will learn to develop a complete WebGIS solution that leverages Django as the backend framework, Leaflet for interactive client-side mapping, and Cloud-Optimized GeoTIFFs (COGs) for efficient raster data delivery. The workshop addresses the growing need for scalable, performant geospatial applications that can handle real-time data processing and visualization without relying on traditional, heavyweight GIS servers.
Through practical exercises, attendees will build a functional application capable of loading base maps, overlaying vector and raster layers, performing spatial queries, and presenting results through responsive dashboards. The workshop emphasizes real-world applications in environmental monitoring, disaster management, forestry, agriculture, and smart governance where near real-time geospatial decisions are critical.
Workshop Outline
Part 1: Foundations and Backend Development (90 minutes)
• Introduction to cloud-native geospatial architecture and COG fundamentals
• Setting up Django project structure with GeoDjango extensions
• Building REST APIs with Django REST Framework for geospatial data endpoints
• Introduction to geospatial cloud frameworks (GeoTIFF.js, COG endpoints, TiTiler STAC)
• Streaming COG tiles directly from cloud storage
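To illustrate the COG streaming point at the end of Part 1, the sketch below reads only a small window of a Cloud-Optimized GeoTIFF over HTTP with Rasterio; the URL and bounding box are hypothetical examples.

```python
# Sketch: windowed read from a COG over HTTP — only the requested bytes are fetched.
import rasterio
from rasterio.windows import from_bounds

cog_url = "https://example.com/data/ndvi_2024_cog.tif"  # hypothetical COG location

with rasterio.open(cog_url) as src:
    print(src.profile["crs"], src.overviews(1))  # COGs ship internal overviews
    # Read just a small bounding box (expressed in the dataset's CRS).
    window = from_bounds(93.8, 24.5, 94.0, 24.7, transform=src.transform)
    chunk = src.read(1, window=window)
    print("fetched array of shape", chunk.shape)
```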
Part 2: Frontend Development and Visualization (90 minutes)
• Creating interactive maps with Leaflet.js
• Implementing base map layers and custom tile layers
• Building interactive dashboards for data analysis and visualization
• Implementing user interaction features (drawing, querying, filtering)
• Connecting frontend and backend through REST APIs
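As a sketch of the frontend-backend connection covered in Part 2, the snippet below shows a minimal Django view serialising a GeoDjango model to GeoJSON that a Leaflet layer could fetch; the model and field names are assumptions, not workshop code.

```python
# Sketch: a GeoDjango-backed endpoint returning GeoJSON for the Leaflet frontend.
from django.core.serializers import serialize
from django.http import HttpResponse
from .models import MonitoringSite  # hypothetical model with a PointField named "geom"

def sites_geojson(request):
    qs = MonitoringSite.objects.all()
    if "district" in request.GET:                      # simple attribute filter
        qs = qs.filter(district=request.GET["district"])
    data = serialize("geojson", qs, geometry_field="geom", fields=["name", "district"])
    return HttpResponse(data, content_type="application/geo+json")
```

On the client side, the returned GeoJSON can be added to the map with Leaflet’s standard GeoJSON layer after a fetch request to this endpoint.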
Pre-requisite Knowledge for Attendees
Participants should have:
• Basic to intermediate Python programming experience
• Familiarity with web development concepts (HTML, CSS, JavaScript)
• Understanding of HTTP requests and REST APIs
• Basic knowledge of relational databases (SQL fundamentals)
• Introductory understanding of GIS concepts (layers, coordinates, projections)
• Experience with command-line interfaces and package managers
No prior experience with Django, Leaflet, or COGs is required, but familiarity with any web framework (Flask, Express, Rails) will be beneficial.
Material Required from Participants
Participants must bring:
• Laptop with administrative privileges for software installation
• Pre-installed software (detailed instructions will be provided one week before):
o Python 3.9 or higher
o Code editor (VS Code recommended)
o Git client
• Stable internet connection for accessing cloud resources and downloading sample datasets
• GitHub account for accessing workshop repository and materials
Sample datasets and starter code will be provided through a GitHub repository accessible before the workshop.
Duration
3 hours (with a 15-minute break midway)
Additional Information
Expected Outcomes: By the end of the workshop, participants will have built a working geospatial web application that they can extend for their own projects. They will understand modern cloud-native geospatial architectures and be equipped to implement scalable solutions using open-source technologies.
Target Audience: GIS professionals, web developers, researchers, and students interested in building modern geospatial applications. The workshop is suitable for those transitioning from traditional desktop GIS to cloud-native web solutions.
Maximum Participants: 30 (to ensure adequate individual attention and support)
“Business Intelligence Meets GIS: Advancing Smart Spatial Analytics with Layers”
Sanket Gondaliya, Rahulkumar K Kanani;
General Track
Business Intelligence (BI) enhances the effectiveness of Geographic Information Systems (GIS) by transforming spatial data into actionable insights. While GIS focuses on mapping and spatial relationships, BI adds analytical power to reveal trends, patterns, and performance indicators. Integrating BI with GIS enables organizations to visualize complex spatial data, understand location-based relationships, and make data-driven decisions across domains such as urban planning, infrastructure, public health, and resource management.
Layers, a Geo-Decision Support System (GeoDSS) developed by Nascent Infotechnologies Pvt. Ltd., brings this integration to practice by empowering users in sectors like urban governance, utilities, and infrastructure to make faster and more informed decisions. To enhance analytical capabilities, Layers is integrated with Apache Superset, an open-source BI platform that allows both technical and non-technical users to create interactive dashboards and visualizations without coding.
This integration operates through a unified workflow where spatial and non-spatial datasets are managed within a shared PostGIS backend, supported by a common user authentication system that allows single sign-on (SSO) access between Layers and Superset. The combined system provides a seamless user experience, allowing the creation of unlimited dashboards and the conversion of raw geospatial data into meaningful business intelligence. It delivers real-time insights, flexibility, and scalability while advancing the use of open-source technologies.
Together, BI and GIS in Layers make geospatial analysis more accessible, intuitive, and impactful—driving smarter governance and sustainable digital transformation.
Keywords:
Business Intelligence (BI), Geographic Information System (GIS), Geo-Decision Support System (GeoDSS), Apache Superset, Spatial Analytics, Data Visualization, Open-Source Integration, Smart Governance.
“Cloud-based Remote Sensing with QGIS and Google Earth Engine Workshop”
Ujaval Gandhi;
Workshop Proposals
This workshop will give you hands-on experience using the new Google Earth Engine Plugin for QGIS to combine your desktop-based geospatial workflows with cloud-based datasets.
Google Earth Engine is a cloud-based platform that enables working with large-scale earth observation datasets effectively. The new Google Earth Engine Plugin for QGIS brings this power to the desktop and enables QGIS users to combine their geospatial workflows with cloud-based datasets. The workshop will cover the following topics:
- Installing and setting up the Google Earth Engine Plugin for QGIS
- Exploring the Google Earth Engine data catalog
- Downloading images from GEE
- Creating a Processing Model to use data from GEE Data Catalog.
- Creating Maps with QGIS Print Layout
Pre-requisites:
- This workshop requires a Google Earth Engine account. Please follow our step-by-step guide to obtain a free account.
Installation and Setting up the Environment
* Install QGIS: This workshop requires QGIS LTR version 3.40, but any recent version will be fine.
* Sign-up for Google Earth Engine: If you already have a Google Earth Engine account, you can skip this step.
* Install the Google Earth Engine Plugin for QGIS: This workshop requires the Google Earth Engine Plugin for QGIS. The plugin can be installed via the Plugin Manager from the official QGIS plugin repository and involves a few extra steps to authenticate with your Google Earth Engine account and set the Google Cloud project. Visit the QGIS Earth Engine Plugin Installation Guide for step-by-step instructions.
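Once the plugin is installed and authenticated, workflows of the following kind become possible from the QGIS Python console; the dataset, dates, location, and visualization parameters below are examples only, and the exact plugin API may vary between versions.

```python
# Sketch (QGIS Python console, plugin authenticated): add a Sentinel-2 median composite
# from the GEE catalog as a QGIS layer via the plugin's Map object.
import ee
from ee_plugin import Map  # authentication/initialization is handled by the plugin

composite = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterDate("2024-01-01", "2024-03-31")
    .filterBounds(ee.Geometry.Point(85.32, 27.70))          # Kathmandu, as an example
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
    .median()
)

Map.addLayer(composite, {"bands": ["B4", "B3", "B2"], "min": 0, "max": 3000}, "S2 median")
```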
“Cloud-Native Geospatial Analysis with DuckDB Spatial”
Vigna Purohit;
Workshop Proposals
This hands-on workshop introduces DuckDB Spatial, a modern in-process analytical database that transforms how we approach geospatial analysis. Built on a columnar architecture optimized for analytical workloads, DuckDB enables fast spatial operations on datasets ranging from thousands to millions of features, all without traditional database setup, configuration, or data import processes.
DuckDB Spatial brings several game-changing capabilities to geospatial workflows. It reads directly from cloud-native formats like GeoParquet, GeoJSON, and Shapefiles. The columnar storage engine delivers exceptional performance for analytical queries, especially spatial aggregations and joins that form the backbone of most geospatial analysis tasks.
The workshop is structured around progressive learning and hands-on practice. Participants begin with fundamentals: installing DuckDB, loading spatial data from multiple formats, and executing basic spatial queries. We then advance to complex operations including spatial joins and spatial aggregations. Each concept is reinforced through practical exercises using real-world datasets representing administrative boundaries and other openly available datasets.
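A minimal sketch of these basics via the DuckDB Python API is shown below: loading the spatial extension, reading files in place, and running a spatial join and aggregation in SQL. File names and column names are illustrative.

```python
# Sketch: in-process spatial SQL with DuckDB Spatial, no import or server setup.
import duckdb

con = duckdb.connect()
con.execute("INSTALL spatial")
con.execute("LOAD spatial")

con.execute("CREATE TABLE districts AS SELECT * FROM ST_Read('admin_boundaries.geojson')")
con.execute("CREATE TABLE facilities AS SELECT * FROM ST_Read('health_facilities.shp')")

# Count facilities per district with a spatial join, straight from the source files.
result = con.sql("""
    SELECT d.name, COUNT(*) AS n_facilities
    FROM districts d
    JOIN facilities f ON ST_Contains(d.geom, f.geom)
    GROUP BY d.name
    ORDER BY n_facilities DESC
""").df()
print(result.head())
```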
The session will develop a use case where participants build an end-to-end analytical pipeline, demonstrating how DuckDB integrates seamlessly into modern data science workflows. Attendees will learn to combine spatial operations with statistical analysis, handle large-scale datasets efficiently, and export results in various formats for further use in GIS applications or visualization tools.
This approach is particularly valuable for data scientists incorporating spatial analysis into broader analytical workflows, GIS professionals seeking faster exploratory tools, researchers working with heterogeneous data sources, and developers building scalable geospatial data pipelines. DuckDB's lightweight footprint and Python integration make it an excellent complement to existing GIS tools rather than a replacement, fitting naturally into Jupyter notebooks, automated workflows, and cloud environments.
“Data for Survival: Securing Human Life in Asia with the Open-Source Geospatial Shield”
Brazil Singh;
General Track
Asia is currently on the leading edge of global systemic risk, facing a unique convergence of humanitarian challenges: the rapid escalation of climate-driven disasters, accelerated and often informal urbanization, and the continuous threat of large-scale health crises. The efficacy of regional governance, humanitarian aid, and national resilience strategies is fundamentally dependent on one factor: access to timely, accurate, and granular geospatial data.
In times of crisis, proprietary mapping solutions frequently fail. They are often outdated, lack the street-level detail critical for "last-mile" aid delivery in informal settlements, and are governed by licensing restrictions that prohibit the rapid, collaborative data sharing required between government agencies, NGOs, and local communities. This reliance on closed data models poses an unacceptable strategic vulnerability to national security and human well-being across the continent.
This presentation introduces the OpenStreetMap (OSM) Foundation’s ecosystem not merely as a technical project, but as a proven, decentralized, and highly effective Open-Source Geospatial Shield against these escalating threats. OSM is the world's largest, most detailed, and continuously updated map, built by a global community and local experts, making it uniquely suited to the dynamic and diverse environments of Asia. Our argument is a strategic one: an investment in OpenStreetMap is the most cost-effective and ethically sound measure available for proactive regional security and humanitarian response.
The Three Pillars of the Geospatial Shield
- The Preventative Layer: Data for Proactive Mitigation
To manage risk, one must first map it. Commercial data often stops at major roads, but the greatest vulnerability in Asia resides in its unmapped urban pockets, coastal floodplains, and remote islands. OSM’s community-driven methodology ensures the mapping of essential, high-vulnerability details: the precise boundaries of informal settlements, specific road widths necessary for emergency vehicle access, the location of drainage systems, and the true footprint of critical infrastructure like hospitals and schools.
This granular data is indispensable for Climate Adaptation Modeling. It allows urban planners to conduct high-resolution flood risk analyses based on actual building elevations, rather than generalized models. It enables public health officials to accurately track disease vectors and plan equitable resource allocation based on true population density, securing preparedness against the next pandemic. OSM provides the essential, high-fidelity input needed to transition from reactive crisis management to proactive, data-informed mitigation.
- The Response Layer: Velocity for Life-Saving Operations
When minutes count, the velocity of open data is a life-saving advantage. Following a major disaster, humanitarian organizations, aid agencies, and military logistics units require immediate, actionable data to perform complex Zero-Hour Logistics. The OSM ecosystem, powered by the Humanitarian OpenStreetMap Team (HOT), activates thousands of remote volunteers to map damage from satellite imagery within hours, creating the only up-to-the-minute map of passable routes, blocked access points, and damaged infrastructure.
This real-time digital intelligence shortens response times, maximizes the efficiency of aid distribution, and ensures aid workers are not relying on a map that is 18 months out of date. Furthermore, because the data is open, it eliminates bureaucratic bottlenecks and licensing disputes, allowing every organization, from national civil defense to local community groups, to operate from a single, unified, and trusted source of truth.
- The Strategic Layer: Open Data Sovereignty and Trust
Ultimately, the utilization of OpenStreetMap serves a profound strategic purpose: achieving Geospatial Data Sovereignty. By fostering strong national and regional OSM communities, Asian nations can reduce their reliance on foreign commercial entities for mission-critical infrastructure data. This guarantees that life-saving information is never subject to geopolitical restrictions, commercial blackouts, or exorbitant licensing fees.
Investing in OSM is investing in a shared digital public good. It drives regional collaboration by establishing a common data language for cross-border challenges, such as tracking transboundary haze, coordinating responses to shared coastlines, and ensuring seamless transport logistics. We urge strategic leaders to recognize the OpenStreetMap Foundation as the vital, secure, scalable, and morally imperative partner for building the continent's permanent, open-source geospatial shield.
This abstract serves as a critical call to action, framing the adoption of open geospatial standards as a fundamental step toward securing a resilient and stable human future across Asia.
“Decision support tool for Impact Investing: Use-case/pilot with Department of Agriculture, Government of Goa leveraging FOSS & integration potential with Geospatial data”
Rishika Jerath;
Academic Track (Oral)
Many decision-makers seek to invest in innovations that create positive social and environmental outcomes. However, there is currently no reliable, standardized method for evaluating the processes or impacts of such investments. As a result, impact investments often rely on subjective assumptions and struggle to capture the complexity of the variables involved. This challenge affects approximately USD 1.57 trillion in investments (Global Impact Investing Network [GIIN], 2024) made in the impact sector that lack adequate measurement systems. Applications leveraging Free and Open Source Software for Geospatial (FOSS4G), integrated datasets, and data stacking offer significant potential to address this fragmented decision-making landscape, particularly in the fields of agriculture and energy.
Across the agricultural value chain – from seed and soil conditions to final crop yields – numerous variables influence productivity. Machine learning (ML) tools can help identify the primary factors affecting output, leading to improved advisories to farmers and enabling more targeted implementation of schemes (for example, interventions in flood-prone areas). To accurately track the outcomes of funding in agriculture and energy, it is essential to capture information across the entire value chain and develop frameworks that clarify the core drivers of success. Data stacking that integrates ML insights with soil and topographical data can transform the quality of information available at both the farm and policy levels.
When investing in sustainable agriculture programs, decisions must be made about which areas or jurisdictions to expand the program to next. The success criteria of the program are mapped according to the goals, for example, the triple bottom line. A framework is needed to map variables and data to answer questions on which jurisdiction area is viable for expansion to achieve the triple bottom line. Across Asia, diversity is woven into its geographies, no matter the scale of the test-bed in question. Another application of the ML feature is leveraging Principal Component Analysis to build these decision-making frameworks and aid the data collection and collation process. There are several instances of conflicting data from distinct public sources, which only adds to the complexity of these exercises. The feature would help both collate the data and offer the framework and analysis to aid the decision-making process.
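Purely as an illustration of the PCA step mentioned above, the sketch below reduces a set of correlated jurisdiction-level indicators to a few components before framework building; the indicator names and file are hypothetical, not GRANULENS internals.

```python
# Sketch: PCA over standardized jurisdiction indicators to surface dominant drivers.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

indicators = pd.read_csv("jurisdiction_indicators.csv")  # rows = candidate jurisdictions
X = StandardScaler().fit_transform(
    indicators[["yield", "irrigation", "soil_carbon", "credit_access", "rainfall"]]
)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("variance explained:", pca.explained_variance_ratio_)
print("component loadings:\n", pca.components_)
```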
Decision-making in impact investing is inefficient, often relying on a lengthy, iterative process grounded in unverified assumptions. GRANULENS addresses this challenge by offering an online tool that moves directly from data to insight. Through a concise three-step process, it distills complex datasets into clear, visualized insights that highlight the factors driving defined measures of success or risk. The platform provides customizable analytical features for agriculture and energy and is designed to adapt to the user’s level of expertise, rather than requiring users to adapt to the tool. Its central aim is to deliver predictive analytics and actionable insights in an intuitive manner.
As data becomes increasingly central to decision-making, the volume available to leaders can be overwhelming. To navigate this landscape effectively, decision-makers require tools that help interpret data, identify implementation pathways, and clarify where progress is being made. Data holds value when it generates insight, and GRANULENS provides that insight along with direction. It helps determine whether the variables at play are genuinely informing strategic objectives and whether efforts are meaningfully shifting key outcomes.
Optimizing data systems to understand the drivers behind critical metrics is essential for guiding organizational resources and efforts. Emerging digital tools are being designed to scale beyond highly technical users, enabling non-experts to interact with and benefit from advanced models. GRANULENS leverages domain expertise and sector-specific knowledge to translate data into decisions that continually improve over time.
These questions mirror the challenges faced by policymakers, analysts, and organizational leaders more broadly. For example, the Directorate of Agriculture of the Government of Goa collects crop data annually, yet continues to ask: What is driving rice productivity in the state? They seek insights without the burden of costly software or tools that require extensive technical training.
We have developed a Minimum Viable Product (MVP) incorporating the machine learning feature, which is currently being tested with the Department of Agriculture in Goa. Although the department collects crop data each season, the factors driving rice and crop productivity remain unclear. We contributed analytical insights and enhancements to their existing data framework to explore this question for the current rice cultivation season. The ML component was developed and tested using datasets from agricultural research conducted at the International Maize and Wheat Improvement Center (CIMMYT), baseline studies completed at the Environmental Defense Fund, and data from a fintech institution assessing a failed loan program. Development of the MVP was supported by a stipend from Georgetown Entrepreneurship’s Summer Launch Incubator.
“Deep Learning Based Modelling of Wheat Yield of Selected Districts of Haryana with the Synergistic Use of Optical and SAR Data”
Sanjivani Srivastava, Hari Shanker Srivastava, Durba Das, Barnana Das;
Academic Track (Oral)
India is the world’s second-largest wheat producer. The stability of wheat yield is inextricably tied to achieving SDG 2 (Zero Hunger) and SDG 1 (No Poverty) by ensuring access to affordable food and protecting rural livelihoods. Thus, variations in wheat production caused by environmental factors influence farm revenues and the national trade balance, with significant implications for both national and global food security.
Early and accurate crop yield estimation is crucial for planning food procurement, controlling inflation, and safeguarding farmers’ welfare. Satellite data is vital for data-driven policymaking and climate risk management, as it helps measure crop phenology, biomass accumulation, and plant stress. When combined with deep learning algorithms, satellite data can provide reliable early-season yield predictions by identifying complex, nonlinear relationships between spectral/radar indices and harvested biomass.
Therefore, the study aimed to develop robust deep learning models that leverage remote sensing derived features to provide district-level yield estimates. To achieve this, the first challenge was creating an effective mask for wheat-growing regions within the study area to enhance classification accuracy. The next task was to understand how temporal trends in key factors, such as vegetation health, land surface temperature, rainfall, and urban expansion, influenced crop area availability and yield. Lastly, a study of long-term trends in wheat production was required to gain insight into phenological and productivity patterns across the districts, driven by climatic variability.
Five districts of Haryana, namely Fatehabad, Jind, Kaithal, Kurukshetra, and Sirsa, were selected as the study area. The research utilized multi-sensor, multi-satellite temporal data to derive various remote sensing indices and gather environmental information. For instance, Sentinel-1 C-band (VV, VH) SAR data were used to generate the Radar Vegetation Index (RVI). Data from multiple spectral bands of Sentinel-2, namely Band 3 (Green), Band 4 (Red), Band 8 (NIR), and Band 11 (SWIR), were employed to produce FCCs, an agriculture RGB composite, NDVI, and NDBI. Land Surface Temperature and Emissivity data were obtained from NASA’s MODIS MOD11A2 Version 6 product, while precipitation data were sourced from CHIRPS. Additionally, Directorate of Economics and Statistics records were used to obtain ground-truth yield data.
The analysis was conducted for a period ranging from 2017 to 2023. Google Earth Engine served as the primary platform for acquiring, processing, and analysing satellite data. QGIS was instrumental for manual verification, raster inspection, and generation of spatial masks. For deep learning-based yield prediction, Python was the core environment. Libraries such as GDAL, Rasterio, and NumPy were used for batch processing of TIFF layers, spatial alignment, format conversion, and array operations. Matplotlib and Seaborn were utilized for result visualization, including district-wise yield comparison plots and model error charts.
First, satellite-derived geospatial indicators were extracted for the entire wheat-growing season (November to April). Decision rules were applied to create a binary wheat mask by thresholding monthly RVI and NDVI composites across key growth stages: tillering, stem elongation, booting, anthesis, grain filling, and maturity. NDBI was used to exclude urban areas. Image stacks of satellite bands, each representing a variable or month, were prepared and stacked annually to form multi-channel images. The resulting images served as inputs for the CNN model. The model, trained with backpropagation and the Adam optimizer, performed well on training data but struggled to generalize across years. Therefore, to improve robustness, cross-validation and data augmentation techniques, including spectral noise injection and spatial shifting, were employed.
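To illustrate the masking and stacking steps described above, the sketch below uses NumPy and Keras; the thresholds, month indices, channel count, and network architecture are illustrative placeholders, not the study's calibrated rules or model.

```python
import numpy as np
import tensorflow as tf

def wheat_mask(ndvi_monthly, rvi_monthly, ndbi, ndvi_min=0.3, rvi_min=0.2):
    """Illustrative decision rule: a pixel counts as wheat if NDVI and RVI stay
    above thresholds during the peak growth months and it is not built-up (NDBI)."""
    peak = slice(2, 5)  # hypothetical stem elongation to grain filling months
    veg_ok = (ndvi_monthly[peak].min(axis=0) > ndvi_min) & \
             (rvi_monthly[peak].min(axis=0) > rvi_min)
    return (veg_ok & (ndbi < 0.0)).astype(np.uint8)

def stack_season(monthly_layers):
    """Stack per-month/per-variable rasters into one multi-channel image
    (rows, cols, channels) suitable as CNN input."""
    return np.stack(monthly_layers, axis=-1)

# A very small CNN of the kind described (Adam optimizer, MSE loss);
# patch size and layer widths are illustrative only.
n_channels = 6
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, n_channels)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),  # regress yield (ton/ha)
])
model.compile(optimizer="adam", loss="mse")
```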
The findings revealed that the temporal NDVI signature permitted intra-seasonal variability quantification, facilitating early detection of yield anomalies resulting from climatic stressors. Spatial prediction maps generated by the CNN model, when overlaid on administrative boundaries, enabled clear visualization of inter- and intra-district yield variations. Using gradient-based colour legends, these maps were classified into low-, medium-, and high-yield zones, providing actionable insights for regional yield monitoring. Fatehabad and Kurukshetra recorded the highest yield estimates (~5.0–5.2 ton/ha) and showed minimal variation against archival data. In contrast, Sirsa had the lowest predicted yield. Overall, model predictions closely aligned with historical records, yielding an R² of 0.989, an MSE of 0.00182, and an RMSE of 0.05266, indicating the robustness of the CNN model in capturing spatiotemporal features and patterns in satellite imagery.
The study highlights that (i) integrating spectral and radar indices with agro-environmental layers significantly enhances the accuracy of yield prediction; (ii) optical satellite data and vegetation indices act as reliable proxy indicators for understanding phenological changes and forecasting crop yield across extensive agricultural landscapes; (iii) deep learning techniques are well-suited for district-level yield estimation, indicating the potential to combine multisource remote sensing data with advanced machine learning algorithms to enhance the accuracy and timeliness of wheat yield estimates in Indian agriculture.
“Development of an IoT and GIS-Based Waste Collection Vehicle Tracking System for Supporting Route Planning and Operational Efficiency”
Sittichai Choosumrong;
General Track
This study aims to design and develop a real-time waste collection vehicle tracking system using Geo-IoT technology to support route planning and enhance operational efficiency. The system integrates GPS with internet-based communication and WebGIS to continuously display the location, route, speed, and operational status of garbage trucks. It also records the duration of stops at each collection point to estimate relative waste volume along each route. These data do not automatically optimize the route but serve as a foundation for better planning and improvement of future collection strategies.
In addition, the system records vehicle operating time throughout the working day and includes a speed alert function to notify municipal officers when a truck exceeds the designated speed limit, which may lead to waste spillage or unsafe driving behavior. All data are stored in a spatial database and visualized through an online WebGIS platform, enabling clear historical route review, performance monitoring, and transparent decision-making.
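A minimal pandas sketch of the stop-duration and speed-alert logic described above; the column names, thresholds, and ping values are hypothetical and do not reflect the deployed system's schema.

```python
import pandas as pd

# Hypothetical ping log: one row per GPS fix from a garbage truck.
pings = pd.DataFrame({
    "truck_id": ["T1"] * 6,
    "time": pd.to_datetime([
        "2024-01-05 08:00", "2024-01-05 08:01", "2024-01-05 08:02",
        "2024-01-05 08:06", "2024-01-05 08:07", "2024-01-05 08:08"]),
    "speed_kmh": [22, 3, 2, 65, 40, 4],
})

SPEED_LIMIT = 50  # km/h, illustrative municipal limit
STOP_SPEED = 5    # below this the truck is treated as stopped

# Speed alert: flag any ping above the designated limit.
alerts = pings[pings["speed_kmh"] > SPEED_LIMIT]

# Stop duration: group consecutive "stopped" pings and sum their spans,
# a rough proxy for waste volume handled at each collection point.
pings = pings.sort_values("time")
pings["stopped"] = pings["speed_kmh"] < STOP_SPEED
pings["block"] = (pings["stopped"] != pings["stopped"].shift()).cumsum()
stops = (pings[pings["stopped"]]
         .groupby(["truck_id", "block"])["time"]
         .agg(["min", "max"]))
stops["duration_min"] = (stops["max"] - stops["min"]).dt.total_seconds() / 60

print(alerts[["truck_id", "time", "speed_kmh"]])
print(stops)
```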
The developed system helps reduce manual recording tasks, supports data-driven route planning, improves service transparency, and increases accountability in municipal waste management. Although route optimization algorithms were not implemented in this study, the system establishes essential baseline data and provides a scalable framework for future integration of intelligent routing, fuel consumption analysis, workload distribution, and predictive waste collection models.
“Drone Data Processing and GIS Integration using Open-Source Tools Workshop”
Kiran Bhamblani;
Workshop Proposals
Elevate Your Skills: The 2-Day Drone & Open-Source GIS Mapping Workshop
Join us for the Drone and GIS Mapping Workshop, a comprehensive 2-day program meticulously crafted to transition participants from foundational concepts to real-world geospatial mastery. This isn't just a course; it's a launchpad into the thriving and sustainable world of free and open-source GIS (FOSGIS), specifically tailored for the booming drone industry in India.
Building a Community, Not Just Connections
This workshop is designed to cultivate a vibrant community of like-minded individuals—from freshers eager to break into the industry to seasoned professionals looking to future-proof their skills. Our goal is to collectively prosper and prepare for the opportunities in the FOSGIS sector.
Sadly, despite the immense growth of the drone industry in India, its potential often remains untapped, largely confined to photography and entertainment. We are dedicated to broadening the scope of UAV surveys, demonstrating powerful, professional applications that drive real-world impact. By focusing on open-source solutions, we aim to unlock better, more accessible opportunities for freelancers and organizations alike, fostering a new generation of skilled geospatial experts.
Mastering the Full Geospatial Workflow
Over the course of this intensive program, you will delve deep into the complete geospatial workflow:
Foundational Knowledge: The course begins with a strong grounding in GIS and remote sensing concepts, ensuring everyone has a solid theoretical base.
Hands-on Drone Data Processing: You will receive practical, hands-on training in processing drone data. Learn to transform raw aerial imagery into professional-grade deliverables, including orthomosaics and detailed terrain models, using powerful open-source tools like WebODM and CloudCompare.
Practical Application in QGIS: Participants will engage in extensive practical exercises using QGIS, the world's leading open-source desktop GIS software, culminating in the completion of a full, practical geospatial project.
Accessibility, Sustainability, and Real-World Impact
The workshop emphasizes accessibility and sustainability by focusing exclusively on open-source technologies. This approach removes the barrier of expensive proprietary software, making high-level geospatial skills accessible to everyone and ensuring the knowledge gained is practical for immediate implementation.
The skills you acquire are highly transferable and essential for a variety of critical applications, including environmental monitoring, disaster management, precision agriculture, and infrastructure planning. By the end of the program, you will be equipped to confidently leverage drone technology and open-source GIS to solve complex challenges and drive innovation in the Indian market and beyond.
Join us to elevate your career and contribute to the growth of a truly impactful, professional drone and GIS ecosystem.
“Enhancing PyGILE Ecosystem for High-Performance Computing”
Bhuwan Awasthi, Sarawut Ninsawat, Venkatesh Raghavan;
Academic Track (Oral)
Python GeoInformatics Lab Environment (PyGILE) is a pre-configured conda environment, bundled with Jupyter notebooks, that offers researchers and students a ready-to-go way to sidestep installation complexity, cross-platform inconsistencies, and native library compilation hurdles. In addition, PyGILE has been validated as a preferred geospatial environment for applications that include, but are not limited to, slope stability, hazard analysis, and hydro-climatological studies.
PyGILE Plus is a containerized, multi-platform upgrade of PyGILE that combines SAGA GIS, GRASS GIS, Whitebox Tools, and Orfeo ToolBox with machine learning libraries and parallel computing frameworks in a single Docker container, accessible in Jupyter through both Python and command line interfaces (CLI) as an alternative to existing GUI-centric GIS frameworks.
PyGILE Plus is containerized and runs headless (without a graphical user interface), which addresses the universal issue of computational scalability in geospatial analysis. The richness of analysis enabled by JupyterLab integration allows it to be deployed on a variety of computational infrastructures, ranging from personal workstations to cloud servers, HPC clusters, and container orchestration platforms. Building on this PyGILE Plus framework, future work can interface directly with HPC architectures and parallelize the execution of geospatial workflows, whether by deploying the Singularity/Apptainer container to GPU-accelerated HPC clusters to perform deep learning inference on large satellite time series, running parallel geospatial processing in HPC job arrays, or operating a geospatial microservices architecture in a container orchestration setup such as Kubernetes or Docker Swarm on cloud-HPC hybrid infrastructure.
For deployment to HPC clusters, the available Docker images are converted into the Singularity/Apptainer format (.sif files), allowing the containers to run unprivileged, without root access. Apptainer directly pulls Docker images from container registries such as Docker Hub and converts them into HPC-compatible .sif (Singularity image) files. The ability to manage big data at regional and global scales, for tasks such as land cover classification, climate model downscaling, DEM fusion, and regional-scale hydrological modeling, is one critical frontier in geospatial analysis. A Singularity/Apptainer container on an HPC system brings many advantages over a traditional single-node environment: portable single-file container images (SIF format instead of the Docker format), HPC scheduler integration (SLURM, PBS), the ability to run as a batch job, MPI and native GPU passthrough for running on thousands of cores in parallel, and the ability to bind-mount HPC filesystems (/scratch, /projects) directly into the container environment without modification.
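As a concrete illustration of this conversion step (a sketch only; the image name is hypothetical, and Apptainer is assumed to be available on the login node):

```python
import subprocess

# Pull a published Docker image and convert it to a Singularity/Apptainer SIF
# file on an HPC login node (the image name below is hypothetical).
subprocess.run(
    ["apptainer", "pull", "pygile-plus.sif", "docker://example/pygile-plus:latest"],
    check=True,
)
# The resulting pygile-plus.sif can then be used inside a scheduler batch job,
# e.g. "apptainer exec pygile-plus.sif jupyter lab" within a SLURM script.
```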
In summary, the PyGILE environment aims to compress the time to operations, all while retaining the scientific reproducibility and computational transparency required to meet the ideals of open science.
Keywords: High-Performance Computing, Geospatial Analysis, Singularity/Apptainer Containerization, Docker Containerization
“Enhancing the Layers Platform with High-Performance 3D Visualization Using Mago3D and CesiumJS”
Shreya Bobde;
General Track
The rapid expansion of 3D geospatial data—driven by nationwide 3D mapping initiatives, high-resolution LiDAR, UAV photogrammetry, and BIM workflows—has created a strong need for high-performance, browser-based 3D platforms that can deliver real-time visualization and analysis. Sectors such as urban governance, infrastructure development, utilities, and digital twin programs increasingly depend on 3D environments that integrate smoothly with existing GIS systems while efficiently rendering large and complex datasets.
To meet this demand, Nascent Infotechnologies Pvt. Ltd. has enhanced its geospatial decision-support platform, Layers, with scalable and enterprise-ready 3D capabilities. Built on a solid OGC-compliant 2D foundation, Layers now incorporates Mago3D, an open-source WebGL 3D engine developed by Gaia3D, South Korea, enabling rapid ingestion and rendering of large 3D models, terrain data, and 3D Tiles. Through Mago3D, Layers supports a wide variety of 3D formats and transforms GIS, DEM, and LiDAR datasets into optimized 3D Tiles and high-resolution terrain for smooth, large-scale visualization.
A seamless CesiumJS-based 3D viewer further enhances the platform with intuitive navigation and essential analytical capabilities. These include Line-of-Sight and Viewshed Analysis for telecom and landscape planning, precise 3D measurements for engineering and land-use applications, dynamic Slice Analysis for inspecting complex 3D models, and an attribute-driven Query Module for rapid information retrieval during emergencies or environmental assessments.
By combining CesiumJS rendering with Mago3D’s efficient terrain and tiling engine, Layers delivers an interactive 3D Digital Twin environment that empowers data-driven decision-making at city and regional scale. This integrated 2D–3D framework strengthens the accessibility, interoperability, and readiness of advanced 3D GIS for smart city development and next-generation urban transformation.
Keywords
3D GIS, Digital Twin, Mago3D, CesiumJS, 3D Tiles, LiDAR, UAV Photogrammetry, WebGL, Urban Planning, Terrain Visualization, Smart Cities, Nascent Infotechnologies, Gaia3D
“EOxElements: Your building blocks for Geospatial UI Development (with OpenLayers)”
Srijit S Madhavan;
Workshop Proposals
In this hands-on workshop, participants will use the open source EOxElements (https://github.com/EOX-A/EOxElements) to rapidly build geospatial user interfaces. The session is designed to provide both theoretical understanding and practical experience, enabling attendees to build fully functional geospatial dashboards by the end of the workshop.
In particular, the objectives of the workshop are:
- To introduce the building blocks and features of EOxElements
- To provide hands-on experience in building your own geospatial dashboard by the end of the session (with minimal code)
- To showcase how EOxElements can be seamlessly integrated with multiple frameworks such as React, Vue, Svelte, and Vanilla JavaScript, demonstrating its flexibility and accessibility for developers across different ecosystems
- To give an update on the development of the EOxElements open-source library
- To enable participants to become contributors to the project
- To demonstrate the design and development of interactive user interfaces showcasing EO data, including maps, charts, filters, layer controls, etc.
The workshop is self-contained, and no preparation is required from the participants. Installation of a code editor and a Node.js environment is recommended but not required.
A basic level of knowledge is expected in the following fields:
- Earth Observation
- HTML, CSS & JavaScript
Overall, this workshop provides a practical gateway to building powerful, interactive EO data interfaces and engaging with an active open-source community.
“Evaluating Shoreline Prediction Models Using Satellite Imagery and Open-Source Tools”
Job Thomas;
Academic Track (Oral)
Shoreline change, driven by natural forces and human activities near coastal areas, significantly impacts coastal ecosystems, human settlements, and economic activities. Accurately predicting and monitoring shoreline dynamics is therefore essential for effective coastal management, disaster preparedness, and hazard mitigation. Traditional methods for calculating shoreline change are often time-consuming, expensive, and temporally unreliable. This study integrates the open-source tools CoastSat and AMBUR (Analyzing Moving Boundaries Using R) to streamline shoreline extraction, tidal correction, and future shoreline prediction. The study area is the Mangalore coast, India, where shorelines are extracted from satellite imagery, tidally corrected, and then extrapolated into the future. The data span 26 years, from 1989 to 2015, comprising 9 shorelines. These shorelines were extracted from optical satellite imagery; beach slope and shoreline elevation were calculated using a DEM, and tidal levels were predicted using the FES2022 global tide model. CoastSat extracts shorelines and performs tidal corrections, achieving around 10-meter accuracy. It processes multispectral images from Sentinel-2 and Landsat via Google Earth Engine, applying cloud masking and pan-sharpening. Using MNDWI and Otsu's threshold, it segments images into land and water pixels and extracts shorelines with the Marching Squares algorithm. The DEM used in this study is CARTOSAT-1 at 2.5 m resolution; vertical transformation and error correction were performed on the DEM before it was used to calculate beach slope and shoreline elevation. The AMBUR toolkit is then used to calculate shoreline change rates and predict future shorelines. It starts by defining baselines along the shorelines and then generates transects at regular intervals; for this study, five transect intervals of 1 m, 25 m, 50 m, 75 m, and 100 m are used along the shoreline. These transects help measure shoreline positions over time: positions are analysed by calculating intersection points and measuring distances from the baseline. Three statistical methods, End Point Rate (EPR), Linear Regression Rate (LRR), and Weighted Linear Regression (WLR), are used to calculate change rates, which, together with the transect azimuth, offshore correction values, the latest shoreline, and the forecast period, are used to predict future shoreline positions. The study focuses on long-term shoreline analysis from 1989 to 2015; a short-term analysis spanning 2015 to 2020 is currently ongoing and aims to refine predictions and capture recent dynamic changes.
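The land/water segmentation step can be sketched as follows; this is an illustrative reimplementation of the MNDWI, Otsu, and Marching Squares steps named above, not CoastSat's own code.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import find_contours

def extract_shoreline(green, swir):
    """Segment land and water and trace the waterline.
    `green` and `swir` are reflectance arrays of the same shape."""
    # Modified Normalized Difference Water Index
    mndwi = (green - swir) / (green + swir + 1e-10)
    # Otsu's method picks the threshold separating water from land pixels
    t = threshold_otsu(mndwi)
    # Marching Squares traces the iso-contour at that threshold,
    # i.e. the instantaneous waterline in image coordinates
    contours = find_contours(mndwi, level=t)
    return max(contours, key=len)  # keep the longest contour as the shoreline
```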
“Faster Maps, Smaller MapLibre Vector Tiles, and What’s Next”
Frank Elsinga;
General Track
MapLibre stands for open innovation in vector-based map rendering, enabling developers and organizations to build fast, flexible, and future-proof mapping applications. In this talk, Frank Elsinga, a member of the MapLibre Governing Board and a maintainer of maplibre/martin, presents how the MapLibre ecosystem continues to drive new ideas and technological progress across platforms.
We will give an accessible overview of the latest developments in both MapLibre GL JS and MapLibre Native, explore improvements in performance and usability, and highlight new tools that make style creation and map customization easier than ever. A special focus will be placed on the emerging MapLibre Tile Format, a next-generation alternative to the Mapbox Vector Tile format. We will explain, at a high level, why the community is developing this new standard, how it works, and what benefits it brings for implementers, data providers, and end users.
For more details on MLT, please see our more technical, lower-level talk.
The session is suitable for both newcomers and experienced practitioners. Attendees will gain a clear understanding of current trends in open mapping technology and an outlook on where the ecosystem is heading in the coming years.
We will also showcase real-world examples from recent community projects.
“Flood Simulation and Visualization using Open Source Geospatial Tools”
Girishchandra Y;
Workshop Proposals
Floods remain among the most devastating natural hazards, causing significant damage to infrastructure, ecosystems, and human lives. The growing availability of open geospatial datasets and computational tools offers new opportunities for scientists and practitioners to simulate, visualize, and communicate flood risks efficiently. However, building an end-to-end, automated system for flood modeling and visualization remains a challenge due to fragmented workflows, closed-source dependencies, and limited integration between hydrodynamic models and visualization tools.
This hands-on workshop bridges these gaps by presenting a complete open-source framework for Automating Flood Simulation and Visualization using Open Geospatial Tools. Participants will learn how to integrate and operationalize tools such as ANUGA Hydro, GDAL, QGIS, and OpenLayers within a unified Python-based workflow. The workshop is structured into four modules:
Data Preparation and Preprocessing:
The session begins with acquiring and processing freely available elevation and hydrological data using GDAL and QGIS. Participants will learn techniques for cleaning DEMs, defining catchment boundaries, and preparing input rasters for hydrodynamic simulation.
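A minimal example of the kind of DEM preparation covered in this module, clipping and reprojecting a raster with GDAL's Python bindings (file names and the EPSG code are placeholders):

```python
from osgeo import gdal

gdal.UseExceptions()

# Clip a DEM to a catchment boundary and reproject it to a metric CRS,
# producing an input raster suitable for hydrodynamic meshing.
gdal.Warp(
    "dem_catchment.tif",            # output
    "dem_raw.tif",                  # input DEM
    cutlineDSName="catchment_boundary.gpkg",
    cropToCutline=True,
    dstSRS="EPSG:32643",            # illustrative UTM zone
    xRes=30, yRes=30,
    dstNodata=-9999,
)
```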
Hydrodynamic Simulation using ANUGA:
Next, we introduce ANUGA Hydro, an open-source finite volume model for simulating shallow water flows. Participants will set up domain boundaries, assign boundary conditions, and run simulations to model flood propagation over complex terrain. The focus will be on automating these steps through Python scripts to eliminate manual intervention.
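A minimal scripted ANUGA run of the kind automated in this module; the domain, boundary values, and durations are illustrative test settings rather than a calibrated flood model.

```python
import anuga

# Build a small rectangular test domain (in practice the mesh would be
# generated from the DEM and catchment prepared in the previous module).
domain = anuga.rectangular_cross_domain(40, 20, len1=400.0, len2=200.0)
domain.set_name("flood_demo")                              # output SWW file name
domain.set_quantity("elevation", lambda x, y: -x / 100.0)  # gentle slope
domain.set_quantity("friction", 0.03)                      # Manning's n
domain.set_quantity("stage", expression="elevation")       # start dry

# Boundary conditions: fixed inflow stage on the left, reflective walls elsewhere.
Bi = anuga.Dirichlet_boundary([0.5, 0.0, 0.0])  # 0.5 m stage at the inflow
Br = anuga.Reflective_boundary(domain)
domain.set_boundary({"left": Bi, "right": Br, "top": Br, "bottom": Br})

# Evolve the shallow-water equations, reporting every 60 s of model time.
for t in domain.evolve(yieldstep=60, duration=1800):
    print(domain.timestepping_statistics())
```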
Post-processing and Data Assimilation:
Participants will explore how to process the model outputs—such as water depth, velocity, and inundation extent—into usable GeoTIFF or NetCDF layers. The session also demonstrates how to integrate satellite or sensor-based real-time data for calibration or assimilation.
Visualization and Web Deployment:
The final part of the workshop focuses on visualizing results interactively using OpenLayers and Django REST Framework. Participants will learn to publish dynamic maps that display flood extents and hydrographs, enabling intuitive communication of model outputs for decision-makers and local authorities.
Beyond the technical aspects, the workshop highlights the importance of open science, reproducibility, and transparency in environmental modeling. Participants will understand how combining open datasets (e.g., from Copernicus or NASA SRTM) with free tools enables scalable and adaptable workflows that can be implemented in various regions without licensing barriers.
The session will also touch upon scaling simulations using High-Performance Computing (HPC) environments or cloud resources to handle large domains and high-resolution datasets efficiently. Attendees will gain insights into parallelizing ANUGA workflows and managing job submissions using open-source schedulers.
By the end of this workshop, participants will:
- Understand the complete open-source pipeline for flood modeling.
- Learn how to preprocess DEMs, define flood domains, and simulate flow dynamics.
- Generate and visualize outputs such as inundation maps and hydrographs.
- Deploy a web-based flood visualization dashboard for public or institutional use.
- Gain knowledge about integrating HPC or cloud systems for scalable flood modeling.
This workshop is ideal for hydrologists, GIS professionals, environmental scientists, and developers aiming to build accessible, transparent, and reproducible flood forecasting systems. Participants will leave with working scripts, sample datasets, and a deployable prototype that can be adapted for real-world applications in flood monitoring, early warning, and decision support.
“FOSS4G Hiroshima 2026 Is All You Need!”
Nobusuke Iwasaki, Kenya Tamura;
General Track
FOSS4G is finally coming back to Asia!
Under the shared vision of "Empowering everyone with open source geospatial," FOSS4G has grown into the world's largest conference on open-source geospatial technology. For the first time in 11 years since FOSS4G Seoul 2015, the Global FOSS4G returns to Asia. The year 2026 will also mark a historic milestone — the 20th anniversary of the founding of the OSGeo Foundation. In this special year, the OSGeo and FOSS4G communities will gather once again — this time in Hiroshima, Japan.
This presentation will share all the highlights of FOSS4G Hiroshima 2026. First, we'll explore the multifaceted charm of Hiroshima — a city that continues to spread a message of peace to the world. Beyond its profound history, Hiroshima offers stunning natural beauty from the islands of the Seto Inland Sea to its verdant mountainous regions. Visitors can experience iconic cultural treasures like the UNESCO World Heritage site Itsukushima Shrine with its floating torii gate, witness traditional performing arts such as Kagura (sacred Shinto theatrical dance), savor world-renowned Hiroshima-style okonomiyaki, and enjoy the warm culture of hospitality. Hiroshima is the ideal place where cutting-edge technology and deep humanity meet in harmony.
Next, we'll reflect on the profound meaning of our theme: "Bridging Geospatial Technology and Humanity." Geospatial technology is an essential tool for addressing critical global challenges including urban planning, disaster risk reduction, climate change adaptation, environmental conservation, and humanitarian support. Discussing these technologies in Hiroshima — a city that has transformed tragedy into a powerful symbol of peace and resilience — gives our theme special resonance and urgency.
And of course, we'll highlight the many attractions of the conference itself: cutting-edge technical sessions featuring the latest innovations in open-source geospatial tools, hands-on workshops where you can develop new skills, inspiring keynotes from industry leaders, and above all, invaluable opportunities to connect with friends, collaborators, and mentors from around the world. New ideas will emerge, collaborations will begin, partnerships will form, and lifelong friendships will grow — that is the true spirit of FOSS4G.
With its 20th anniversary celebration, its long-awaited return to Asia, and its powerful message of peace and sustainability, FOSS4G Hiroshima 2026 promises to be a truly unforgettable event that will shape the future of our community.
We warmly invite you to join us — whether you want to learn the latest technologies, expand your global network, or start a new project, FOSS4G Hiroshima 2026 Is All You Need!
See you in Hiroshima, in 2026!
“From Maps to Meaning: AI-Driven Geospatial Insights with Layers”
Santosh Gaikwad, Bhautik Aghera;
General Track
The rapid advancement of Artificial Intelligence (AI) and Large Language Models (LLMs) is transforming the field of Geographic Information Systems (GIS), leading to a new era of intelligent, automated, and user-centric spatial analytics. In response to this global trend, Nascent has developed Layers – a Geospatial Decision Support System (Geo-DSS) that integrates AI to enhance geospatial understanding and decision-making. Layers enables users to execute spatial, non-spatial, and raster queries within a single, unified analytical environment.
While traditional GIS platforms focus on data processing and visualization, they often depend heavily on user expertise for interpretation. To address this challenge, Nascent has embedded Generative AI capabilities within Layers’ query module, utilizing multiple LLMs such as Llama-3 (open-source), OpenAI, and Gemini. This integration automates the interpretation of geospatial outputs and provides intelligent summaries of analytical results.
Additionally, GeoGenie, an AI-powered assistant, has been developed to understand user queries in natural language and generate context-aware insights from spatial datasets. This feature removes the need for manual query formulation, allowing users to interact with the system effortlessly through text or voice-based prompts.
The integration of AI and GIS in Layers represents a major step toward AI-augmented geospatial decision-making, where human intuition and machine intelligence work together to enhance situational awareness, uncover complex spatial relationships, and accelerate data-driven planning. This innovation underscores the growing importance of AI within the open-source geospatial ecosystem, advancing the future of intelligent and accessible geospatial computing.
Keywords: Artificial Intelligence, Generative AI, Large Language Models, GIS, Geo-DSS, Geospatial Analytics, Natural Language Interface, AI-Augmented Decision Support, FOSS4G, GeoGenie, Layers, Llama-3.
“Geo-Guru: Mobile-based Hands-on Geospatial Learning for School Students”
Sanjay Saifi;
General Track
The increasing integration of geospatial technologies into diverse sectors, including urban planning, environmental management, disaster response, and agriculture, demands a geospatially aware workforce. However, exposure to these technologies often begins only at higher education levels, leaving a critical gap in early conceptual understanding. Traditional school curricula in India rely on static textbook maps, limiting students’ ability to think spatially and relate geography to real-world applications. This gap creates a barrier to developing early geospatial skills and limits awareness of careers and applications in this growing field. To address this challenge, we developed Geo-Guru, a mobile-based learning application designed to introduce fundamental geospatial concepts to school students through simple, activity-based interaction on a mobile device.
Geo-Guru is a Mobile-based educational application designed to introduce school students to the fundamentals of Geospatial Technology through interactive learning. The app transforms traditional map-reading exercises into hands-on experiences where students create, analyze, and interpret spatial data. By allowing students to add points, draw lines and polygons, measure distances and areas, and assign thematic attributes, Geo-Guru enables active learning of core GIS concepts. The app also includes built-in learning modules on GIS and Remote Sensing, supported by practical exercises and an Activity Box for self-assessment. It aims to connect textbook learning with real-world spatial understanding, promoting geospatial awareness among students.
The app was developed using the Flutter framework, allowing it to run smoothly on Android devices commonly available in schools and households. The Google Maps API provides the base map for visualizing real geographic locations. Core functionalities include drawing and editing vector features, adding attributes such as name and type, visualizing raster maps, and exporting user-created maps. The interface is designed to be simple and student-friendly, requiring no prior GIS experience.
To evaluate the usability and learning impact of Geo-Guru, a pilot demonstration was conducted with Class 10 students in a school environment. Feedback from both students and teachers indicated increased interest in geospatial learning.
One notable aspect of Geo-Guru is that it represents an indigenous, Made-in-India effort to expand geospatial awareness at the foundational level. The app aligns with the National Geospatial Policy 2022, which highlights the importance of building geospatial skills, encouraging innovation, and promoting wider adoption of geospatial technologies. By providing a low-cost, mobile-based learning platform, Geo-Guru supports early exposure to spatial thinking and digital tools, helping prepare students for future engagement with more advanced open-source geospatial technologies.
Looking ahead, Geo-Guru has strong potential for scaling and integration. Future work includes adding regional language support, aligning exercises with national curriculum standards, expanding the content library, and creating a web-based version that can be used in computer labs. Collaboration with schools, educators, and geospatial communities can help refine the app, introduce new features, and encourage broader adoption.
“GIS-Enabled ERP for PCMC: Integrating Spatial Intelligence with Municipal Operations through CityLayers”
Prashant Persai;
General Track
Rapid urban expansion requires municipal administrations to adopt integrated digital systems capable of managing spatial and operational data with high efficiency. Addressing this need, Nascent Infotechnologies Pvt. Ltd. has implemented a GIS-enabled Enterprise Resource Planning (ERP) system for the Pimpri Chinchwad Municipal Corporation (PCMC), built on its geospatial decision-support platform, CityLayers. The solution merges map-based spatial information with core municipal functions, creating a unified environment for managing workflows, assets, and services with greater clarity and precision. By providing a geospatial representation of ERP data, the system enhances operational efficiency through improved asset management, streamlined processes, and more informed, location-aware decision-making.
A major component of the implementation is its deep integration across enterprise systems, including SAP-based core ERP modules, non-core municipal applications, and a centralized Document Management System (DMS). These integrations enable automated data exchange, reduce duplication, and ensure process continuity across departments. The platform also introduces Single Sign-On (SSO), offering secure and seamless access to all connected systems through a unified authentication layer, thereby improving both user experience and security.
CityLayers serves as the core geospatial engine, managing enterprise geospatial layers within a PostGIS-backed datastore. It provides advanced spatial capabilities such as topology validation, spatial joins, network tracing, and map-centric querying. Spatially enriched datasets are delivered through interactive dashboards and analytical modules built using FOSS technologies, ensuring flexibility and vendor independence. The platform supports automated event propagation, allowing processes such as asset condition updates, work-order routing, and service-request triaging to be triggered dynamically based on spatial rules and business logic. Its distributed architecture ensures high availability, modular scalability, and secure inter-service communication. Integration with SAP and legacy applications establishes a single source of truth while leveraging GIS for contextual intelligence.
By combining CityLayers' geospatial engine with ERP systems, DMS, and SSO, PCMC now operates on a unified, interoperable digital infrastructure. This architecture offers a scalable blueprint for other urban local bodies aiming to modernize governance through FOSS-based geospatial technologies.
Keywords:
Geospatial ERP, CityLayers, Municipal Governance, PostGIS, Spatial Data Integration, SAP Integration, Document Management System, Single Sign-On (SSO), Urban Digital Infrastructure, FOSS Technologies, Spatial Analytics, Smart City Solutions, Enterprise System Integration, Location-Aware Decision-Making, GIS-enabled Workflows, Municipal Asset Management.
“Hands-on usage of CoRE stack datasets and APIs for social-ecological planning”
CoRE stack, Kapil Dadheech, Aman Verma, Ankit Kumar, Nirzaree;
Workshop Proposals
The CoRE stack is a digital public infrastructure that has brought several geospatial datasets and pre-computed analytics together to build a comprehensive social-ecological understanding of a place. The datasets and outputs are hosted freely on the CoRE stack platform and accessible via APIs. Participants will be walked through how to use the APIs to work through several relevant use cases, a few of which are listed below:
- Find the most drought-sensitive villages in a tehsil: the CoRE stack APIs can be used to pull pre-computed, village- and watershed-level data for a tehsil, such as time series of cropping intensity since 2017, indices like NDVI, drought and non-drought years, etc. These dataframes can be quickly analyzed to find which villages or watersheds show the highest sensitivity to droughts. Socio-economic data for the villages, drawn from the census, can then be used to prioritize targeted drought protection programmes.
- Do an impact assessment of watershed development works: similarly, to evaluate how successful a watershed development programme has been, data pulled through the CoRE stack APIs can be used to identify counterfactual watersheds based on covariates such as terrain, soil, and drought occurrence, and to compute the difference-in-differences of outcome variables (cropping intensity, NDVI) between post-treatment and pre-treatment years in the treated and counterfactual watersheds (see the sketch after this list).
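A minimal sketch of the difference-in-differences calculation with pandas, using hypothetical watershed-level values in place of data pulled from the CoRE stack APIs:

```python
import pandas as pd

# Hypothetical table of mean cropping intensity per watershed and period
# (column names and values are illustrative only).
df = pd.DataFrame({
    "watershed": ["W1", "W1", "W2", "W2"],
    "group": ["treated", "treated", "counterfactual", "counterfactual"],
    "period": ["pre", "post", "pre", "post"],
    "cropping_intensity": [1.21, 1.48, 1.19, 1.27],
})

means = df.pivot_table(index="group", columns="period",
                       values="cropping_intensity", aggfunc="mean")

# Change in the treated watersheds minus change in the counterfactuals.
did = ((means.loc["treated", "post"] - means.loc["treated", "pre"])
       - (means.loc["counterfactual", "post"] - means.loc["counterfactual", "pre"]))
print(f"Difference-in-differences estimate: {did:.3f}")
```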
We will also help participants install a local instance of the CoRE stack backend on their machines and build simple pipelines that run locally installed geospatial libraries to create new data layers and analytics, or invoke Google Earth Engine APIs to trigger layer computations remotely, download the computed outputs, and add these layers to the backend.
Participants will thus gain a strong understanding of how to use the CoRE stack and contribute to it, as well as learn several geospatial programming techniques for working with GEE and other platforms.
“High-performance & large-scale geospatial visualisation using MapLibre and Deck.GL (with React)”
Srijit S Madhavan;
Workshop Proposals
This workshop is designed to teach you how to create highly performant, large-scale geospatial visualisations on the fly using Deck.GL and MapLibre as mapping libraries, in conjunction with React.
Participants will explore the different visualisation layers that can be generated using Deck.GL and understand the performance differences when rendering with and without Deck.GL.
The session will also cover how to leverage both GPU and CPU for geospatial data processing.
We will explore several key topics in this workshop, including:
- Setting up a local development environment and integrating Storybook with React.
- Installing essential packages such as Deck.GL and MapLibre.
- Understanding the necessity of using Deck.GL alongside MapLibre and demonstrating the performance impact of using Deck.GL.
- Engaging in hands-on coding to create various visualisation layers, such as:
  - Scatterplot
  - Heatmap
  - Hexagonal/Grid Screen
  - Animated Trip Layer
  - And more
- Using datasets ranging from 500K to 1.5 million data points to generate real-time visualisations.
- Utilising both GPU and CPU for dataset processing and comparing performance differences.
- Applying real-world datasets and examples to understand practical use cases.
Pre-requirements:
- Install VSCode and Node.js/NPM
Coding knowledge:
- Basic understanding of JavaScript, Node.js, GIS, and web technologies
“How open source enabled the publishing of the first georeferenced regional plan for Goa by citizen volunteers - amche.in”
Arun Ganesh;
General Track
Goa was the first and remains the only state in India to have prepared a state-level regional land use zoning map. The plan is the outcome of a participatory process involving the Goa Town and Country Planning (TCP) Department, urban and rural local bodies, and the general public.
The currently notified plan, the Regional Plan for Goa 2021 (RPG-2021), demarcates settlement zones for construction and eco-sensitive zones for conservation across the entire state. The plan indicates several important regulatory lines such as Coastal Regulation Zone (CRZ) limits, wildlife sanctuary boundaries, eco-sensitive buffer zones, and no-development slopes, all of which strictly regulate land development.
The Challenge of Accessibility
Such a critical plan, which controls real estate development and shapes the life of every citizen, is made available to the public only as panchayat-level PDF documents. This severely limits its accessibility and utility to the public. Consequently, locating a specific GPS position on the regional plan became an activity that required considerable spatial knowledge and skill.
The FOSS and AI Solution
To overcome this challenge, the citizens of Goa, in partnership with the open data community, utilized a novel stack of open-source tools and AI-assisted technologies. We georeferenced and pieced together over 200 individual PDF maps into a crowdsourced, statewide mosaic now available to every citizen at the click of a button.
The resulting map, along with several other regulatory data layers relevant to Goa's residents, has been published on a custom open-source geoportal – amche.in – using a federated system that operates at virtually no cost. This project aims to realize India's vision outlined in the National Geospatial Policy, 2022, by enabling easy access to valuable public geospatial data for every citizen through (CC-0 licensed) open-source software.
“Integrated Geospatial Property Survey and Tax Intelligence Solution for Smart Cities”
Narendra Makadiya;
General Track
Property tax is a major and dependable revenue source for municipal corporations, yet many local governments continue to struggle with outdated records, incomplete assessments, tax evasion, and inefficient manual surveys. To overcome these challenges, Nascent Infotechnologies Pvt. Ltd. has developed an open-source, technology-driven Property Survey Solution that streamlines the entire property assessment workflow. The system enables systematic data collection, verification, and analysis, improving accuracy, transparency, and operational efficiency in municipal revenue administration.
The solution includes an Android-based mobile survey application and a responsive web-based Quality Control (QC) portal. The mobile app works in both online and offline modes and allows field surveyors to capture geo-tagged property locations, record detailed building attributes, upload multimedia evidence, and validate data on-site. With intuitive GIS tools, built-in quality checks, and user-specific dashboards, surveyors can efficiently track tasks and progress. Its structured workflow—covering door status, property details, floor information, facilities, ownership data, attachments, and field verification—ensures standardized and reliable data collection across all property types.
The solution was built entirely using open-source technologies, including Native Android for the mobile application, PostgreSQL/PostGIS for spatial data management, GeoServer as the GIS server, and OpenLayers for interactive mapping. The backend is powered by Spring Boot, while Angular is used for the QC portal frontend, and OpenLDAP handles user authentication.
The QC portal enables validators to review, verify, and approve survey data from the office or field, ensuring a transparent and efficient validation process. It generates automated analytical reports on changes in property areas, variations in property types, and discrepancies identified during assessment. Real-time dashboards, audit trails, and performance indicators support administrators in monitoring survey progress, ensuring data accuracy, and making informed decisions.
A key strength of the system is its seamless integration with municipal ERP platforms, particularly property tax modules, enabling automated data exchange and unified revenue management. This integration supports identification of defaulters, analysis of defaulting trends, and detection of revenue leakage points. Built on open-source technologies, the solution is scalable, cost-effective, and adaptable for future enhancements.
Overall, this comprehensive framework helps municipal corporations modernize their property tax ecosystem, improve assessment accuracy, increase transparency, and enhance revenue generation to support smarter urban governance.
Keywords:
Property Survey, Open-Source GIS, Municipal Revenue, Property Tax Assessment, Mobile Data Collection, Offline Survey App, Quality Control Portal, ERP Integration, Geospatial Technology, Urban Governance, Revenue Enhancement, OpenLayers, GeoServer, PostGIS, Spring boot, Native Android, OpenLDAP, Angular
“Interoperability Challenges between GIS, CAD and CFD modelling in Open Source Geospatial Ecosystems”
Manavalan;
Academic Track (Oral)
Geospatial models that simulate real-time air or heat flow, as well as urban flooding scenarios, are receiving increasing attention in the context of adverse climate change. Computational Fluid Dynamics (CFD), well established for simulating flow models, takes its input in a 3D format, Stereolithography (STL) files, over which boundary conditions are fixed for flow extent predictions. To date, there is no tool in the FOSS geospatial domain that can convert a 2D shapefile into a 3D STL file with geometry of acceptable quality that can be read directly by CFD tools. In line with this, this article experiments with a real-time airflow case study over a small part of an urban region and attempts to execute the flow simulation by converting the input shapefile into a high-precision STL geometry file readable by a FOSS-based CFD tool. Across these GIS, CAD, and CFD experiments, many interoperability-related technical gaps were encountered throughout the process workflow; these are captured and defined here. Further work on these interoperability challenges will lead to a smooth, FOSS-based geospatial workflow that can take any urban area as input in shapefile format and convert it into a high-precision STL file ready for CFD simulations. A FOSS tool of this sort would be a boon to geospatial researchers working on various environment-related flow simulations.
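One possible way to script the 2D-to-3D step discussed here is to extrude footprint polygons with geopandas and trimesh, as in the sketch below; this is an assumption-laden illustration (placeholder file names, an assumed per-feature height attribute), not the workflow evaluated in the article.

```python
import geopandas as gpd
import trimesh

# Read building footprints and extrude each polygon to a prism, then export
# the combined mesh as STL for a CFD pre-processor. The file names and the
# 'height' attribute are assumptions for illustration.
buildings = gpd.read_file("urban_blocks.shp")

meshes = [
    trimesh.creation.extrude_polygon(row.geometry, height=row.height)
    for row in buildings.itertuples()
    if row.geometry.geom_type == "Polygon"
]

# Merge all prisms into one mesh and write it out as STL.
scene = trimesh.util.concatenate(meshes)
scene.export("urban_blocks.stl")
```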
“Introduction to 5m Open-Source Satellite Data by ISRO: Exploring Opportunities for Community Development”
Harshaditya Gaur;
Workshop Proposals
Access to high-resolution, open-source satellite data is reshaping the way communities, researchers, and policymakers address sustainability challenges. With the recent release of 5-meter spatial resolution datasets by the Indian Space Research Organisation (ISRO), India has taken a landmark step in democratizing geospatial intelligence. This initiative bridges a critical gap: on one end, medium-resolution datasets such as Landsat and Sentinel are widely used but often limited in detail; on the other, commercial high-resolution imagery provides finer insights but remains inaccessible to many due to high costs. ISRO’s open data offering now provides an unprecedented opportunity to place powerful, actionable geospatial insights into the hands of a much broader and more diverse community.
This workshop offers a dynamic and hands-on introduction to ISRO’s 5m open-source satellite imagery, focusing on its potential to drive community-led solutions across sectors such as agriculture, urban development, water resource management, and disaster response. The session will begin with an overview of the dataset’s key features—including spectral coverage, resolution, revisit frequency, and geographic scope—while highlighting why this initiative is a true game-changer for the open geospatial ecosystem.
The technical core of the workshop will guide participants through the complete data lifecycle. Starting with data access and download, the session will cover essential preprocessing workflows such as mosaicking, reprojection, and atmospheric correction. Participants will then explore advanced analysis using open-source platforms like QGIS and Python-based libraries including Rasterio, GeoPandas, GDAL, and xarray, gaining hands-on experience with real-world case studies on synthetic band generation, precision agriculture and irrigation, urban growth analysis, and more.
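To make the preprocessing stage concrete, the following sketch shows mosaicking and reprojection with Rasterio; file names and the target CRS are placeholders, and atmospheric correction is not shown.

```python
import rasterio
from rasterio.merge import merge
from rasterio.warp import calculate_default_transform, reproject, Resampling

# 1. Mosaic adjacent scenes (file names are placeholders).
srcs = [rasterio.open(p) for p in ["tile_a.tif", "tile_b.tif"]]
mosaic, transform = merge(srcs)
meta = srcs[0].meta.copy()
meta.update(height=mosaic.shape[1], width=mosaic.shape[2], transform=transform)
with rasterio.open("mosaic.tif", "w", **meta) as dst:
    dst.write(mosaic)

# 2. Reproject the mosaic to a common CRS for analysis.
dst_crs = "EPSG:32643"  # illustrative UTM zone
with rasterio.open("mosaic.tif") as src:
    t, w, h = calculate_default_transform(src.crs, dst_crs,
                                          src.width, src.height, *src.bounds)
    meta = src.meta.copy()
    meta.update(crs=dst_crs, transform=t, width=w, height=h)
    with rasterio.open("mosaic_utm.tif", "w", **meta) as dst:
        for band in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                src_transform=src.transform, src_crs=src.crs,
                dst_transform=t, dst_crs=dst_crs,
                resampling=Resampling.bilinear,
            )
```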
By the conclusion of the workshop, participants will not only have acquired technical expertise but also a practical vision of how open satellite data can be translated into actionable insights that foster resilience, inclusivity, and innovation at the community level. Designed to be interactive and application-driven, the workshop will encourage participants to think beyond technical processing and explore pathways for real-world implementation.
This workshop is especially relevant for students, researchers, civic tech innovators, NGOs, and open-source enthusiasts who are eager to transform free, high-resolution datasets into meaningful geospatial projects. Attendees will leave with reproducible workflows, hands-on experience, and the confidence to initiate data-driven solutions in their own local or regional contexts, thereby amplifying the impact of ISRO’s open-data revolution.
“Introduction to geospatial analysis using PyGILE”
Bhuwan Awasthi, Sarawut Ninsawat, Venkatesh Raghavan;
Workshop Proposals
In this hands-on interactive workshop, we will introduce PyGILE, a free-to-use, easy-to-learn toolkit to help learners and instructors explore geospatial analysis and programming with Python. PyGILE is an educational toolkit that provides pre-configured Jupyter notebook environments with all the relevant geospatial libraries pre-installed and ready to go, supporting learners and instructors of geospatial analysis with no or minimal setup steps. This is a key part of our approach to eliminating the barriers to entry that learners and new instructors often face with the FOSS4G software stack.
PyGILE is built on existing open-source geospatial libraries from the broader Python data science and Geospatial communities and extended by the Geoinformatics Lab with an interactive, in-depth tutorial and learning curriculum that brings together some of the best resources and packages from the FOSS4G ecosystem into a cohesive, easy to use, self-contained package. PyGILE supports Windows, macOS and Linux platforms and is built on conda-forge and Miniforge/Mamba for reliable and cross-platform environment creation and installation.
In this workshop, the participants will work through PyGILE’s modular, comprehensive curriculum and get hands-on experience with key aspects of programming in Python with Jupyter notebooks, including topics such as Python programming basics, spatial data and structures, coordinate reference systems and projections, advanced vector analysis operations, raster processing with Rasterio, external sources of spatial data, and remote sensing in Python. The course is designed as a step-wise curriculum that gets more sophisticated as learners move through modules.
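An example of the kind of exercise participants work through in the vector-analysis module (illustrative only, with placeholder file names; not taken from the PyGILE notebooks themselves):

```python
import geopandas as gpd

# Reproject two layers to a metric CRS, buffer one, and spatially join them.
schools = gpd.read_file("schools.gpkg").to_crs(epsg=32644)
roads = gpd.read_file("roads.gpkg").to_crs(epsg=32644)

roads_buf = roads.copy()
roads_buf["geometry"] = roads_buf.buffer(250)  # 250 m buffer around roads

near_roads = gpd.sjoin(schools, roads_buf, how="inner", predicate="intersects")
print(len(near_roads), "schools lie within 250 m of a road")
```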
Participants will learn by doing, directly coding with instructors in a classroom-style setting, with assistance to work through installation and common error/debugging issues, and working through notebooks and applied, hands-on exercises using real-world data. PyGILE’s pre-configured, one-click, ready-to-go environment is intentionally designed to reduce, and in many cases eliminate, the frustration and difficulty with trying to independently set up the Jupyter environment with the necessary dependencies, which is often a barrier to entry for many learners and instructors new to geospatial Python programming and learning.
Learning outcomes of this course will include, but are not limited to, familiarity and ability to use and work with multiple coordinate systems and map projections, spatial vector operations such as spatial joins, buffers, and overlays, processing and analysis of raster data, accessing spatial data from a range of online sources and providers, and implementing and understanding the principles of remote sensing using Python. Participants will also learn best practices for reproducible geospatial research and scholarship and how to extend PyGILE to their particular use cases.
This workshop is suitable for learners, instructors, researchers, and professionals interested in learning geospatial analysis with Python in a self-contained and supportive classroom environment. We aim to make this workshop as accessible as possible so no prior Python experience is required for the beginner modules, although a basic understanding of programming concepts is assumed and helpful. The primary objective is for learners and instructors to leave the workshop with their laptops fully configured with the PyGILE Jupyter environment, with the notebooks and solution notebooks, sample datasets to explore and practice, and new confidence to apply these skills to their own research and learning.
“LAKe Evaluation and Ranking Algorithm (LAKERA): A Framework for Lake Health and Resilience Assessment”
Nitish Kumar;
Academic Track (Oral)
Lakes worldwide are experiencing unprecedented ecological degradation driven by climate change and anthropogenic pressures, with major water bodies such as the Aral Sea and Lake Chad losing over 90% of their surface area (Pham-Duc et al., 2020; Su et al., 2021), urban lakes such as Bellandur Lake suffering catastrophic pollution from untreated sewage and industrial effluents (Jamwal et al., 2023; Mishra et al., 2024), and global lake surface temperatures rising at 0.34°C per decade, faster than the oceans, thereby intensifying thermal stratification and evaporation (O'Reilly et al., 2015; Woolway et al., 2020). Despite these mounting threats, existing monitoring approaches remain fragmented, focusing on individual lakes or isolated parameters without providing systematic frameworks for comparative assessment across diverse geographical and climatic contexts. This study addresses this critical gap by presenting LAKERA (Lake Evaluation and Ranking Algorithm), a comprehensive satellite-driven framework that systematically evaluates approximately 14,000 Indian lakes through integrated assessment of climatic (precipitation, air temperature, evaporation, lake water surface temperature), physical (lake surface area), terrestrial (vegetation, urban extent, barren land), water quality (clarity via the Forel-Ule Index), and socioeconomic (Human Development Index) parameters, processed from over 10 million Landsat images spanning 2001 to 2021 using Google Earth Engine cloud computing infrastructure. The methodology employs a structured three-component scoring system that incorporates present conditions through normalized parameter values, temporal trajectories via linear regression slopes, and statistical reliability through p-value transformation. These components are then integrated into factor scores through weighted aggregation to generate dynamic health scores that enable both spatial comparison across lakes and temporal assessment of ecosystem resilience trajectories.
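The three-component scoring can be sketched as follows; the weights, normalization, and p-value transformation shown are illustrative stand-ins, since the paper's exact formulation is not reproduced here.

```python
import numpy as np
from scipy.stats import linregress

def parameter_score(series, w_present=0.4, w_trend=0.4, w_reliability=0.2):
    """Illustrative three-component score for one parameter of one lake:
    present condition, temporal trajectory, and statistical reliability."""
    years = np.arange(len(series))
    # Present condition: latest value normalized against the observed range.
    present = (series[-1] - series.min()) / (series.max() - series.min() + 1e-12)
    fit = linregress(years, series)
    trend = np.tanh(fit.slope)       # bounded trajectory component
    reliability = 1.0 - fit.pvalue   # simple p-value transformation
    return w_present * present + w_trend * trend + w_reliability * reliability

# Example: annual mean NDVI around a lake, 2001-2021 (synthetic values).
ndvi = np.linspace(0.35, 0.42, 21) + np.random.default_rng(0).normal(0, 0.01, 21)
print(round(parameter_score(ndvi), 3))
```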
National-scale analyses across Köppen-Geiger climate zones reveal fundamentally distinct degradation profiles that demand zone-specific conservation priorities, with Cold Zone exhibiting water clarity dominance at 35% of zone degradation substantially exceeding evaporation (26%) and water temperature worsening at merely 1%, Temperate Zone displaying elevated socioeconomic influence with 22% HDI contribution combined with 16% air temperature representing the highest anthropogenic-thermal coupling nationally, Arid Zone demonstrating evaporation-morphometric coupling at 15% each driving physical lake shrinkage, and Tropical Zone experiencing distributed multi-stressor forcing where precipitation (14%), water temperature (13%), land cover (14%), and evaporation (13%) contribute nearly equivalently without single-parameter dominance. Health classification establishes that 40% of lakes exist in Critical Condition with 63% concentrated in Tropical Zone indicating systemic collapse, while Temperate Zone demonstrates the highest recovery capacity at 27% In Recovery compared to 9% in both Tropical and Arid zones, confirming that anthropogenic-dominated degradation responds more effectively to interventions than climate-driven tropical collapse where distributed forcing creates compounded degradation resistant to remediation. Morphometric analyses reveal size-neutral health distributions, with Critical Condition rates showing consistent patterns (40% for 0.05-0.5 km² lakes, 37% for 0.5-1 km², 35% for 1-5 km², 37% for >5 km²) across the entire size spectrum, establishing that overall health trajectories transcend morphometric controls.
These climate-zone vulnerabilities and resilience patterns are quantified through comprehensive health scores generated by integrating factor scores via weighted aggregation, enabling systematic classification of lakes into four distinct resilience categories: Resilient Lakes (high health with positive trends), Vulnerable Lakes (high health with declining trends), Recovering Lakes (poor health with improving trends), and Critical Lakes (poor health with declining trends). Validation across seven ecologically diverse lakes confirms framework robustness through systematic correspondence between computed health trajectories and documented field observations, with Sukhna Lake's catastrophic 2006 decline and subsequent recovery through 2013 aligning precisely with reported drying conditions and court-mandated restoration interventions (D.K. Singh & Singh, 2019), Bellandur Lake's prolonged 2012-2019 degradation matching widespread documentation of sewage discharge and surface fires followed by dramatic 2021 improvement corresponding with comprehensive treatment infrastructure completion (Jamwal et al., 2023), Hebbal Lake's consistent 2006-2019 improvement validating public-private partnership effectiveness through systematic sewage diversion and desilting operations (Mandal & Manasi, 2021), and Deepor Beel's catastrophic 2006 decline from dumping with gradual recovery from 2007 onwards corresponding temporally with the 2008 Guwahati Water Bodies Act enactment. The LAKERA web interface operationalizes these findings by enabling real-time visualization and dynamic parameter selection for stakeholders, providing interactive tools for evidence-based decision-making through health scores, temporal trajectory plots, radar charts displaying parameter contributions, and resilience classifications. This scalable framework supports integrated water resource management through cross-scale assessment from local to national levels and aligns with the National Geospatial Policy 2022 by establishing data-driven foundations for environmental monitoring, enabling lake-specific conservation strategies informed by dominant stressor identification, temporal trend analysis, and resilience classification, thereby advancing climate-resilient freshwater management and informing restoration priorities through systematic, evidence-based assessment of approximately 14,000 water bodies across diverse ecological contexts
“Managing and processing GeoParquet files using DuckDB”
Thana Wannasang;
General Track
DuckDB is a high-performance analytical database engine specifically designed for fast and efficient processing of large datasets. Its key advantage lies in the ability to execute complex analytical queries directly without relying on heavyweight database systems or distributed computing infrastructures. This allows data scientists and analysts to explore and analyze large-scale datasets interactively, significantly reducing the complexity often associated with traditional database platforms. When combined with modern storage formats such as Parquet and GeoParquet, DuckDB provides a robust and versatile solution for managing and analyzing geospatial data at scale.
Parquet is a columnar data format optimized for reading, writing, and compressing large amounts of structured data efficiently. GeoParquet builds upon this foundation by embedding geospatial geometries such as points, lines, and polygons directly into Parquet files. This combination allows both tabular and spatial data to be stored together in a single, lightweight, and portable format, enabling seamless sharing and distribution of datasets across platforms. By maintaining both geometric and attribute data in one file, GeoParquet ensures consistency and simplifies data management for geospatial workflows.
Integrating GeoParquet with DuckDB enables fast and highly flexible geospatial data workflows. Spatial datasets, which are often extremely large and fragmented, can be processed directly from local disks or cloud storage systems such as Amazon S3 without requiring format conversion or pre-processing. Users can perform spatial operations such as joins, intersections, bounding box filtering, and spatial aggregations directly in SQL. This eliminates the need for heavyweight GIS software and allows for interactive querying even on datasets containing millions of geometries. DuckDB’s in-process architecture further ensures that queries execute quickly while consuming minimal system resources, making it practical for large-scale geospatial projects.
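To make the SQL-only workflow above concrete, the following minimal sketch uses DuckDB's spatial extension to filter a GeoParquet file by a bounding box and count features per class. The file name, column names, and coordinates are hypothetical, and on older DuckDB releases the geometry column may need an explicit ST_GeomFromWKB() conversion.

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL spatial;")
con.execute("LOAD spatial;")
# For cloud storage (e.g. S3), the httpfs extension can be loaded the same way.

# Count features per land-use class inside a bounding box, reading the
# GeoParquet file directly (file and column names are hypothetical).
result = con.execute("""
    SELECT landuse, COUNT(*) AS n_features
    FROM read_parquet('buildings.parquet')
    WHERE ST_Intersects(
        geometry,
        ST_GeomFromText('POLYGON((77.45 12.85, 77.75 12.85,
                                  77.75 13.10, 77.45 13.10, 77.45 12.85))')
    )
    GROUP BY landuse
    ORDER BY n_features DESC
""").fetchall()
print(result)
```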
Another major advantage of DuckDB is its seamless integration with modern analytical ecosystems like Python. Users can combine SQL-based spatial processing with powerful geospatial libraries such as GeoPandas, Rasterio, or Shapely. This allows developers and data scientists to build hybrid workflows where DuckDB handles efficient data retrieval and aggregation, while other libraries focus on visualization, raster processing, or advanced geospatial computations. Spatial queries can be executed directly within DuckDB and results can be easily passed into Python notebooks or GIS dashboards, simplifying the workflow and maintaining performance.
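A small sketch of that hand-off, assuming a hypothetical roads GeoParquet file: DuckDB performs the filtering and returns geometries as WKT, which GeoPandas rebuilds into a GeoDataFrame for plotting or further analysis.

```python
import duckdb
import geopandas as gpd

con = duckdb.connect()
con.execute("INSTALL spatial;")
con.execute("LOAD spatial;")

# Filter in DuckDB, then return geometries as WKT for GeoPandas
# ('roads.parquet' and its columns are hypothetical).
df = con.execute("""
    SELECT highway, ST_AsText(geometry) AS geom_wkt
    FROM read_parquet('roads.parquet')
    WHERE highway IN ('primary', 'trunk')
""").df()

gdf = gpd.GeoDataFrame(
    df.drop(columns="geom_wkt"),
    geometry=gpd.GeoSeries.from_wkt(df["geom_wkt"]),
    crs="EPSG:4326",
)
gdf.plot()
```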
In real-world applications, DuckDB serves as a fast backend for spatial analytics services, cloud-based data catalogs, and geospatial data processing pipelines. It can efficiently index, filter, and aggregate millions of spatial features, making it ideal for national-scale datasets, satellite imagery metadata, or large land-use and environmental monitoring datasets. Analysts can use DuckDB to generate analytical layers, produce aggregated statistics, and integrate outputs into visualization systems without significant performance penalties.
This session will focus on practical demonstrations of DuckDB in GIS applications, including managing GeoParquet files before GIS processing, performing spatial computations using geospatial extensions, and benchmarking performance. A key highlight will be a comparison between GeoPandas and DuckDB using load testing to show the significant efficiency and scalability gains achievable with DuckDB. By the end of the session, participants will have a clear understanding of how DuckDB can accelerate geospatial analytics workflows while simplifying data management and enhancing performance for large datasets.
“Mixed-Use Matters: Revisiting Building Function Classification for Indian Cities”
Sourabh Barala;
Academic Track (Oral)
Accurate information about building usage (or function) is essential for applications such as disaster response and evacuation planning, urban growth monitoring and planning, and efficient building-energy management. However, in most Indian cities, such data is not available at the building level. Although regional governments do collect building-level information for property tax purposes, it is not available for public use, and in some cases the information is not geo-tagged. Platforms like OpenStreetMap (OSM) offer free access to community-generated geospatial data, such as building and road geometries, but they often lack detailed annotations on building usage. In some cases, such annotations are misleading. For instance, in an Indian city it is common to have commercial activities on the ground floor of residential buildings located along major roadways, yet in OSM data these are often tagged as commercial. Moreover, annotating every building in a city is a laborious and practically challenging process. To address these challenges, we proposed a Machine Learning (ML) based approach to classify buildings in Hyderabad, India, into residential and non-residential categories. We extracted three types of features: morphological features (structural characteristics of the buildings) such as building height and circular compactness; spatial features (spatial context of a building) such as adjacency (how adjacent buildings are in an area) and street alignment (how a building aligns with the street); and features derived from OSM tags. For extracting morphological features, we used building and road network geometries from OSM and building heights from Google’s Open Buildings 2.5D Temporal dataset. For spatial features, we utilised building and road network geometries along with population density extracted from the WorldPop data. As OSM tag features, we utilised the values of various OSM tags such as "emergency" and "healthcare". To obtain ground truth, we grouped all possible values of the "building" tag from OSM data into two classes: "residential" and "non-residential". Buildings that could not be assigned to either class were labelled "Unknown". We found that approximately 96% of the buildings were labelled unknown. Moreover, among the known buildings, 94% were grouped under the "residential" class, showing extreme class imbalance. As a preliminary analysis, we trained six ML models (Logistic Regression, Random Forest, Artificial Neural Network, XGBoost and Support Vector Classifier) to classify buildings as either residential or non-residential. To address the class imbalance during training, we tuned the classification threshold of the models using 5-fold cross-validation. Our benchmarking results achieve a macro F1-score of 0.85, with class-wise F1-scores of 0.98 for residential and 0.71 for non-residential buildings. The lower performance for the non-residential class is primarily due to the presence of mixed-use buildings and the limited representation of large residential structures in the training data. For example, a building classified as residential by the model had a non-residential ground truth value; on inspecting that building manually using Google Street View imagery, we found that it houses a medical shop on the ground floor, while the upper floors are used for residential purposes (hence the correct function type of the building is "mixed-use").
Moreover, a building classified as non-residential by the model had a residential ground truth value. We found that the building was a large apartment complex. Such large residential buildings were rare in the training data, leading to poor performance on the test data. Moreover, we hypothesised that a building is more likely to be used for multiple purposes if it is located close to a major road, such as a motorway or a trunk road. To test this, we randomly sampled over 100 buildings lying within 100 metres of major roads, from both residential and non-residential buildings; we found that approximately 48% of the buildings were mixed-use. Overall, this preliminary analysis of building usage classification highlights the fact that although OSM data is a great resource available in the public domain, it lacks accurate building usage information. Moreover, automatic building classification using ML approaches has its own challenges, such as data imbalance, mixed usage, and inaccurate tagging of building usage. In the future, we plan to incorporate a weakly supervised or semi-supervised learning paradigm along with probabilistic classification to address the above-mentioned challenges.
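The threshold-tuning step mentioned above can be sketched as follows. This is a minimal illustration with scikit-learn, assuming a feature matrix X and binary labels y (1 = non-residential) already exist; it is not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

def tune_threshold(X, y, thresholds=np.linspace(0.05, 0.95, 19)):
    """Pick the decision threshold that maximises macro F1 under 5-fold CV."""
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    scores = np.zeros_like(thresholds)
    for train_idx, test_idx in cv.split(X, y):
        clf = RandomForestClassifier(n_estimators=200,
                                     class_weight="balanced",
                                     random_state=42)
        clf.fit(X[train_idx], y[train_idx])
        proba = clf.predict_proba(X[test_idx])[:, 1]   # P(non-residential)
        for i, t in enumerate(thresholds):
            pred = (proba >= t).astype(int)
            scores[i] += f1_score(y[test_idx], pred, average="macro")
    best = thresholds[np.argmax(scores)]
    return best, scores / cv.get_n_splits()
```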
“N-ViewAR : Visualize what’s beneath and manage beyond”
Yash Doshi;
General Track
Efficient management and visualization of underground utilities remain one of the most pressing challenges in urban infrastructure development. Traditional 2D mapping often fails to convey the spatial complexity of subsurface networks, leading to accidental damages, unsafe excavations, and costly disruptions. To address this challenge, Nascent Info Technologies Pvt. Ltd. has developed N-ViewAR, an open-source, augmented reality (AR) mobile GIS application that enables users to visualize and interact with underground utilities in a realistic 3D environment.
Built on open-source geospatial technologies such as PostGIS, GeoServer, mago-3d-tiler, and CesiumJS, N-ViewAR transforms GIS data into 3D Tiles for AR-based visualization. The application allows field engineers, municipal staff, and planners to overlay virtual 3D representations of underground assets—such as water, sewer, and telecom lines—onto the real-world environment using a mobile device. Users can view detailed asset attributes including depth, diameter, material, and installation date, facilitating accurate decision-making without physical excavation.
N-ViewAR also features an automated workflow integrated with a PostgreSQL database, enabling seamless synchronization between attribute data and the 3D visualization. Any modification to asset attributes is automatically reflected in the corresponding 3D Tiles, ensuring that the visualization remains current and consistent with the backend data.
By integrating spatial data with AR, N-ViewAR bridges the gap between digital geospatial intelligence and on-site field operations, promoting safer and more efficient infrastructure management. The solution minimizes dependency on paper maps, reduces human error, and empowers stakeholders to manage underground utilities with confidence. N-ViewAR demonstrates the potential of open-source GIS and AR frameworks in addressing real-world urban challenges through innovation, accessibility, and scalability.
Keywords: Open-source GIS, 3D visualization, Underground utilities, Augmented Reality, Urban infrastructure, CesiumJS, 3D Tiles, PostgreSQL.
“Next generation tile server performance now”
Frank Elsinga;
General Track
Tile servers occupy a unique position in the map rendering pipeline: they can significantly accelerate a map’s performance - or silently become its biggest bottleneck.
While the industry has focused heavily on improving client-side rendering and data formats, many opportunities on the server side remain underexplored.
Features such as intelligent prefetching, data–style co-optimization, adaptive generalization, and selective data reduction are only beginning to show their potential. Yet, most existing tile servers and commercial products barely scratch the surface of what is possible.
In this talk, I will walk you through ongoing work at MapLibre that aims to push tile-server performance further than current systems allow.
Building on research into vector-tile data efficiency, spatial query optimization, and style-aware data pruning, we explore how tile servers can deliver smaller, faster, and more relevant tiles without sacrificing visual fidelity. This includes techniques that reduce network overhead, improve rendering times on low-power devices, and lower overall energy consumption.
You will gain insight into how these optimizations work and the tradeoffs they involve. I will go into their impact on real-world map styles and how they fit into MapLibre’s broader roadmap. Most importantly, you will learn how these innovations will soon make their way into the next generation of open-source tile servers—possibly one running your maps.
“OGC–OSGeo Collaboration: Advancing Interoperability through Open Standards”
Harsha Vardhan Madiraju;
General Track
The Open Geospatial Consortium (OGC) and the Open Source Geospatial Foundation (OSGeo) have maintained a close collaboration aimed at strengthening the global geospatial ecosystem through openness and interoperability. Formalized under a long-standing Memorandum of Understanding, this relationship enables the two organizations to complement each other—OGC through consensus-based standardization, and OSGeo through real-world implementation in open software projects.
This session will illustrate how key OSGeo projects—such as GeoServer, GDAL/OGR, pygeoapi, pycsw, and GeoNetwork—implement a wide range of OGC Standards. These include well-established web services like WMS, WFS, and CSW, as well as the modern OGC API family (Features, Tiles, Maps, Records, and EDR). Together, these tools and standards provide a common language for data sharing, discovery, and visualization across platforms, ensuring that geospatial information can move freely between systems and communities.
The presentation will connect technical practice to strategic purpose: explaining how OGC Standards are embedded in everyday OSGeo tools, what interoperability means in practical terms for developers, and how this collaboration ensures that open solutions remain robust, trusted, and aligned with international best practices.
Attendees will gain a clear understanding of how OGC and OSGeo work together to translate standards into solutions—bridging policy and practice, enabling innovation, and fostering a more connected and sustainable geospatial future.
“Open-Source Drought Dashboard for Marathwada (1981-2025)”
Narendra Shrikrushna Tayade;
Academic Track (Oral)
Marathwada, the central region of Maharashtra, faces recurrent drought due to high monsoon variability and water scarcity. Effective drought monitoring in this area requires long-term, high-resolution data that are scalable, transparent, and accessible. In this study, we assess drought using CHIRPS rainfall data and the Standardised Rainfall Anomaly (Z-score) method on the Google Earth Engine platform.
Traditional drought assessments rely on ground-based observations that are labour-intensive and often inconsistent. To overcome this limitation, the objective of this work is to compute 45 years (1981-2025) of rainfall anomalies using open-source datasets and cloud-based analysis. CHIRPS monthly precipitation data were processed in GEE to calculate long-term means, standard deviations, and annual Z-score anomalies. Drought was classified into normal, moderate, severe, and extreme categories. The resulting drought maps were published through an interactive dashboard built using VS Code, GitHub, and Render.
Keywords: Drought, Dashboard, Google Earth Engine, CHIRPS.
The analysis identifies several drought years, including 1982, 2002, 2015, and 2018, during which drought affected districts across Marathwada. The approach successfully captures long-term rainfall departures and reveals drought hotspots. This study shows how open-source geospatial tools and the cloud-based GEE platform can provide a reliable, scalable, and accessible framework for drought monitoring and climate change assessment in semi-arid areas.
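A minimal sketch of the Z-score computation described above, using the Earth Engine Python API; the aggregation period, region, and classification thresholds are illustrative assumptions rather than the study's exact settings.

```python
import ee
ee.Initialize()

chirps = ee.ImageCollection("UCSB-CHG/CHIRPS/PENTAD")

def annual_total(year):
    """Total annual rainfall image for a given year."""
    year = ee.Number(year)
    return (chirps.filter(ee.Filter.calendarRange(year, year, "year"))
            .sum().set("year", year))

years = ee.List.sequence(1981, 2024)
annual = ee.ImageCollection(years.map(annual_total))

# Long-term mean and standard deviation of annual rainfall.
mean = annual.mean()
std = annual.reduce(ee.Reducer.stdDev())

# Standardised rainfall anomaly (Z-score) for one year, e.g. 2015.
z_2015 = annual_total(2015).subtract(mean).divide(std)

# Illustrative classification: z < -1.5 extreme, -1.5..-1 severe, -1..-0.5 moderate.
drought_class = (z_2015.lt(-1.5).multiply(3)
                 .add(z_2015.gte(-1.5).And(z_2015.lt(-1)).multiply(2))
                 .add(z_2015.gte(-1).And(z_2015.lt(-0.5))))
```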
“Open-Source Geospatial Assessment of Land Use/Land Cover Dynamics and Land Surface Temperature Using Google Earth Engine: A Case Study of Delhi”
Mohammed Faizan;
Academic Track (Oral)
Climate variability and expanding anthropogenic activities continue to drive significant modifications to terrestrial ecosystems. Land Use and Land Cover (LULC) change, in particular, plays a crucial role in influencing regional climate patterns, surface energy balance, and environmental sustainability. As a result, the use of free and open-source geospatial platforms has become indispensable for consistent, large-scale assessment of landscape change. This study leverages Google Earth Engine (GEE) and open-access satellite imagery to evaluate two decades (2000–2020) of LULC dynamics and their impact on Land Surface Temperature (LST) in Delhi, India.
Using Landsat datasets at 10-year intervals, five LULC categories were mapped through a supervised Random Forest approach, yielding high classification accuracies of 92% in 2000, 89% in 2010, and 91% in 2020. The analysis reveals significant transitions, marked by declines in agricultural land, water bodies, and bare land, alongside substantial expansion of built-up areas and forest cover. LST assessment shows a pronounced warming trend, with summer temperatures rising from 34.86°C in 2000 to 51.54°C in 2020, while built-up regions consistently exhibited the highest thermal values. Correlation results further indicate negative relationships between LST and NDVI/NDWI, and a strong positive association with NDBI, highlighting the thermal impacts of urban expansion.
The study demonstrates the effectiveness of GEE and open-source geospatial tools in long-term environmental monitoring and provides valuable insights for urban planners, environmental professionals, and policymakers engaged in developing climate-responsive and sustainable land-use strategies.
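A compact sketch of this kind of GEE workflow, using the Earth Engine Python API to train a Random Forest classifier on a Landsat composite; the study area rectangle, band selection, and training collection are illustrative assumptions rather than the study's exact inputs.

```python
import ee
ee.Initialize()

# Illustrative study area and Landsat 8 surface-reflectance composite for 2020.
delhi = ee.Geometry.Rectangle([76.8, 28.4, 77.4, 28.9])
bands = ["SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B6", "SR_B7"]
composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
             .filterBounds(delhi)
             .filterDate("2020-01-01", "2020-12-31")
             .median()
             .select(bands))

# Hypothetical training points with a 'class' property (0-4 for five LULC classes).
training_points = ee.FeatureCollection("users/example/delhi_lulc_samples")
samples = composite.sampleRegions(collection=training_points,
                                  properties=["class"], scale=30)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=samples, classProperty="class", inputProperties=bands)

lulc_2020 = composite.classify(classifier)
```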
“Optimizing Cloud Optimized GeoTIFF (COG) for Faster Access and Better OGC EDR API Performance”
PEERANAT PRASONGSUK;
General Track
As cloud-native geospatial systems continue to evolve, the Cloud Optimized GeoTIFF (COG) format has become one of the most widely adopted standards for hosting and accessing raster data efficiently on cloud storage. While COG provides a structure that supports HTTP range requests and progressive access, its performance in real-world scenarios still depends heavily on how the files are created and configured. Small differences in compression type, internal tiling, and overview strategy can have a significant impact on read times, data transfer cost, and the responsiveness of downstream APIs such as the OGC Environmental Data Retrieval (EDR) service.
This presentation shares practical experience and lessons learned from optimizing the COG creation workflow in a production environment. The primary goals were to (1) reduce file size, (2) minimize read time, and (3) improve response time when delivering raster data through EDR APIs. Using open-source tools such as GDAL and rio-cogeo, several experiments were conducted to compare compression codecs (DEFLATE, ZSTD, and LZW), block sizes, and the use of internal versus external overviews. Each configuration was benchmarked for file size, I/O performance, and HTTP range request efficiency.
The findings revealed that using ZSTD compression with block sizes between 256–512 pixels, along with internal overviews, offered the best balance between compact storage and fast access. In test cases with Sentinel-2 imagery, optimized files were on average 10–15% smaller than standard DEFLATE-based COGs and achieved 30–40% faster read speeds when accessed through cloud storage. These improvements directly enhanced the performance of the EDR API, enabling faster and more efficient on-demand data delivery without requiring additional compute resources or complex caching mechanisms.
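The recommended configuration can be reproduced with GDAL's COG driver; the sketch below is a minimal example using the GDAL Python bindings with hypothetical file names (the equivalent can also be achieved through rio-cogeo profiles).

```python
from osgeo import gdal

gdal.UseExceptions()

# Write a Cloud Optimized GeoTIFF with ZSTD compression, 512-pixel internal
# tiles, and internally generated overviews (file names are hypothetical).
gdal.Translate(
    "sentinel2_optimized.tif",
    "sentinel2_input.tif",
    format="COG",
    creationOptions=[
        "COMPRESS=ZSTD",
        "BLOCKSIZE=512",
        "OVERVIEWS=IGNORE_EXISTING",   # build fresh internal overviews
        "NUM_THREADS=ALL_CPUS",
    ],
)
```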
Beyond the technical adjustments, this optimization effort also provided broader insights into designing more efficient cloud-native raster pipelines. It emphasized how careful control over COG parameters—such as tiling scheme, overview strategy, and compression—can substantially influence the user experience for web applications, analysis tools, and automated EDR workflows. The talk will also highlight challenges encountered during testing, including trade-offs between compression strength and read latency, as well as strategies for balancing cost efficiency with performance when hosting data on object storage platforms like AWS S3 or Google Cloud Storage.
Attendees will gain a clear understanding of which parameters matter most for their own use cases and how simple adjustments during the COG creation process can translate into major performance benefits downstream. The session aims to provide practical, reproducible guidelines for practitioners who want to make their raster data pipelines more efficient, responsive, and scalable within modern cloud environments.
“QGIS Automation with Python Actions”
Ujaval Gandhi;
Workshop Proposals
QGIS allows you to define custom Actions on map layers. Actions can launch commands or run Python code when the user clicks on a feature from the layer. This workshop will cover QGIS Actions in detail, along with use cases showing how you can harness their power to automate GIS workflows. We will focus on Python Actions and go through various examples of implementing new functionality and automating tasks with just a few lines of PyQGIS code.
Many QGIS customizations can be an Action instead of a plugin.
Actions provide built-in functionality to:
- Add a button with menu items in the toolbar, attribute table, or attribute form.
- Execute any Python code when the user picks an action and clicks on a feature.
- Distribute your customizations with a QGIS Project.
A minimal sketch of such a Python Action is included after the example list below.
- Extract a Feature from a Layer: We will create an action that takes a layer of all countries in the world and allows you to extract any country polygon by clicking on it.
- Automate Data Editing and Selection: In this section, we will work with a dataset of land parcels and learn how QGIS Actions can be used to speed up data selection and editing.
- Manage Imagery Collections: Actions also provide a simple and intuitive way to manage large imagery collections using QGIS. In this section, we will learn how to create a Tile Index and set up actions to interactively load and remove raster layers of interest.
- Select Features in a Buffer Zone: Another useful application of Actions is to select features from a layer within a buffer zone.
- Reversing Line Direction using a Processing Algorithm: The QGIS Processing Toolbox contains many useful algorithms. You can call any algorithm from an Action using Python. This example shows how to set up an action that runs a processing algorithm on a line feature to reverse its direction.
- Creating Isochrones using ORS Tools plugin: Similar to native algorithms, we can also call any third-party algorithms added from QGIS Plugins. This example shows how to use the ORS Tools → Isochrones → Isochrones from point algorithm to generate a walking-directions isochrone from a point layer.
- View Panorama from Mapillary: QGIS actions can also be used to query an external API and display the results. The example below shows how to use the Mapillary API to fetch street-level imagery and display them in QGIS.
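As referenced above, a minimal sketch of a Python Action of the kind built in the first exercise might look like the following. The code runs inside the action body, where QGIS substitutes the expression variables [%@layer_id%] and [% $id %] with the clicked layer and feature before execution; the name of the new layer is illustrative.

```python
# Python Action body: copy the clicked feature into a new memory layer.
# QGIS replaces [%@layer_id%] and [% $id %] before the code runs.
from qgis.core import QgsProject, QgsVectorLayer, QgsWkbTypes
from qgis.utils import iface

layer = QgsProject.instance().mapLayer('[%@layer_id%]')
feature = layer.getFeature([% $id %])

uri = '{}?crs={}'.format(QgsWkbTypes.displayString(layer.wkbType()),
                         layer.crs().authid())
mem = QgsVectorLayer(uri, 'extracted_feature', 'memory')
mem.dataProvider().addAttributes(list(layer.fields()))
mem.updateFields()
mem.dataProvider().addFeatures([feature])
mem.updateExtents()
QgsProject.instance().addMapLayer(mem)
iface.messageBar().pushMessage('Extracted feature {}'.format(feature.id()))
```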
“Road network characteristics and Land Surface Temperature in two South Indian cities: Bangalore and Chennai.”
Ellen Brock;
Academic Track (Oral)
In this study, the relationship between land surface temperature and several road network characteristics is investigated using data from OpenStreetMap and Landsat, for two South Indian cities: Bangalore and Chennai. Using zonal statistics, an analysis is done at the ward level and road network characteristic metrics such as road density, betweenness, straightness and road centrality are used. The results reveal that the correlation between road density and land surface temperature is positive and statistically significant. Preliminary analysis also shows a correlation between the three other road network characteristics and Land Surface Temperature.
The interaction between road characteristics and Land Surface Temperature (LST) is not yet fully understood in general, and in India in particular. Few studies for India address the interaction between road characteristics and LST. Exceptions are Mathew et al. (2022) for Ahmedabad, and Brock (2024) and Dasgupta and Kumar (2025) for Bangalore, who study the link between road density and LST. Mathew et al. (2022) use lower-resolution MODIS data for measuring LST, while Brock (2024) and Dasgupta and Kumar (2025) use higher-resolution Landsat data. Also, instead of digitising roads as in Mathew et al. (2022), which is time-consuming, readily available data from OpenStreetMap (OSM) are used in this study, as in Brock (2024) and Dasgupta and Kumar (2025).
The reason for studying the interaction between road network characteristics and LST is that the understanding of these patterns can assist in urban planning of road networks in order to mitigate any elevated LST.
Based on earlier work (Brock 2024) investigating the interaction between LST and road density, this study is done at the ward level. While Dasgupta and Kumar also look at other metrics at the ward level using a regression analysis (such as built-up area, vegetation, etc.), this study mainly focuses on the interaction between road network characteristics and LST (next to potentially incorporating NDVI), and does so for both Chennai and Bangalore. Regarding road density, we make a distinction between road density for all roads and for only the drivable roads. The reason is that drivable roads can carry more traffic, offer less shade, etc., and hence lead to higher LST.
Among others, we follow the work of Chenary et al. (2023) and Guo et al. (2024) and incorporate road network metrics such as betweenness, straightness and road centrality. However, these studies do not consider road density as a road network characteristic. Increased road density is expected to have a positive impact on LST: increased impervious surfaces (such as roads) in a city lead to higher absorption and storage of heat, and hence to elevated land surface temperatures (Naserikia et al. (2023)). Closeness is defined as the reciprocal of the mean path length between node i and every other node in the road network. Higher closeness indicates a denser street network, and hence we expect higher LST. Betweenness centrality of a road measures the fraction of all shortest paths in the network that pass through that road. A positive relation between betweenness centrality and LST is expected, as roads with increased betweenness potentially carry more traffic (and hence exhibit higher LST due to emissions).
The analysis is fully automated in Python, using packages such as osmnx, geemap and stats. The code will be made fully open source.
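A minimal sketch of how such ward-level metrics can be derived with osmnx and networkx; the place name and the simplifications (undirected node betweenness, density from total edge length) are illustrative assumptions, not the study's exact implementation.

```python
import osmnx as ox
import networkx as nx

# Drivable road network for one ward (place name is illustrative).
G = ox.graph_from_place("Shivajinagar, Bengaluru, India", network_type="drive")

# Total street length in km; dividing by the ward area (from the ward
# boundary layer, in km^2) gives road density.
edges = ox.graph_to_gdfs(G, nodes=False)
total_length_km = edges["length"].sum() / 1000.0

# Node betweenness centrality on an undirected, length-weighted copy.
bc = nx.betweenness_centrality(nx.Graph(G), weight="length")
mean_betweenness = sum(bc.values()) / len(bc)
print(total_length_km, mean_betweenness)
```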
A statistically significant and positive relationship is found between road density and LST. Initial results also reveal some links between LST and the other three road network characteristics.
References
Chenary, K., Soltani, A., & Sharifi, A. (2023). Street network patterns for mitigating urban heat islands in arid climates. International Journal of Digital Earth, 16(1), 3145–3161. https://doi.org/10.1080/17538947.2023.2243901
Guo, N., Liang, X., & Meng, L. (2024). Evaluation of thermal effects on urban road spatial structure: A case study of Xuzhou, China. Heliyon, 10(17).
Mathew, A., Sarwesh, P., & Khandelwal, S. (2022). Investigating the contrast diurnal relationship of land surface temperatures with various surface parameters represent vegetation, soil, water, and urbanization over Ahmedabad city in India. Energy Nexus, 5, 100044.
Naserikia, M., Hart, M. A., Nazarian, N., Bechtel, B., Lipson, M., & Nice, K. A. (2023). Land surface and air temperature dynamics: The role of urban form and seasonality. Science of The Total Environment, 905, 167306.
“Scaling FOSS-Driven Metric Addressing Across Five Nepalese Cities”
Nishon Tandukar, Hemant Mahatara;
Academic Track (Oral)
This talk explores the successful implementation, scaling, and customization of a UAV imagery-based metric addressing system across seven rapidly urbanizing cities in Nepal. These cities include the pilot city of Changu Narayan and its expansion to Janakpurdham, Birgunj, Tokha, Suryodaya, Kageshwori Manohara, and Bheemdatta. We will detail how this project provides a scalable, reliable, and precise solution to the significant challenges posed by non-existent or informal addressing, which has historically hampered essential municipal services.
In many Nepalese cities, the lack of a standardized system has led to practical delays in public and commercial service delivery, waste management, property tax collection, and utilities management. Our system directly addresses these issues. The absence of a reliable street naming and house addressing system is a common problem in developing countries, presenting a significant challenge for navigation, e-Governance, e-commerce, and efficient service delivery. This talk will detail our journey to solve this problem in Nepal, from a pioneering local initiative to a scalable model now implemented across seven cities.
Our work began by recognizing the limitations of existing methods. Systems like what3words and Google Plus codes present challenges with localization and cost, while satellite image-based approaches are not effective in the densely populated urban areas and narrow road networks typical of Nepal’s cities. In response, Changu Narayan Municipality, a rapidly urbanizing area in the Kathmandu Valley, became the site of an innovative project. This was the first of its kind in Nepal to identify, name, and number all of its road networks and households using advanced technologies.
At its core, our solution is a metric-based algorithm that assigns house numbers based on measured distance from a reference point along a road. This method offers a distinct advantage over traditional street-naming systems, especially in areas with complex road networks.
The entire implementation was powered by a suite of Free and Open Source Software (FOSS), making it both cost-effective and highly adaptable. The workflow integrates several key technologies: high-resolution imagery from drone mapping, household data collection via Open Data Kit (ODK) and participatory mapping, and AI-assisted digitization. The core address generation algorithm, developed in Python with libraries like GeoPandas and Shapely, processes this data to calculate addresses. Municipal officials now manage this system through an interactive GIS dashboard built with ReactJS and MapLibre. This FOSS-driven approach replaced time-consuming manual surveys, reducing address generation from a full day of fieldwork to just minutes, empowering local municipalities to manage sustainable urban growth effectively.
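The core idea of metric addressing, numbering by measured distance along the road, can be sketched as follows. This is a simplified illustration with Shapely and GeoPandas, assuming the road centreline and building points are already in a projected metric CRS, and using a common even/odd side convention that may differ from the deployed algorithm.

```python
import geopandas as gpd
from shapely.geometry import LineString, Point

# Road centreline and building points in a projected CRS (coordinates in
# metres, purely illustrative).
road = LineString([(0, 0), (250, 0), (500, 30)])
buildings = gpd.GeoDataFrame(
    {"id": [1, 2, 3]},
    geometry=[Point(40, 8), Point(120, -6), Point(300, 15)],
    crs="EPSG:32645",
)

def metric_address(pt, road_line):
    """House number = distance (m) along the road from its reference point,
    with odd numbers on the left side and even numbers on the right."""
    dist_along = road_line.project(pt)
    nearest = road_line.interpolate(dist_along)
    ahead = road_line.interpolate(min(dist_along + 1.0, road_line.length))
    cross = ((ahead.x - nearest.x) * (pt.y - nearest.y)
             - (ahead.y - nearest.y) * (pt.x - nearest.x))
    number = int(round(dist_along))
    if cross >= 0:  # left side -> odd
        return number if number % 2 == 1 else number + 1
    return number if number % 2 == 0 else number + 1  # right side -> even

buildings["address_no"] = buildings.geometry.apply(lambda p: metric_address(p, road))
print(buildings[["id", "address_no"]])
```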
“Seeing Beyond the Visible: Hyperspectral Analytics with Python & Deep Learning”
Harshaditya Gaur;
Workshop Proposals
What if you could see the world in hundreds of bands invisible to the human eye? 🌈 That’s the promise of hyperspectral remote sensing, a technology that captures subtle spectral variation. In applications like agriculture, forestry, mineral exploration, and water quality monitoring, hyperspectral data is transforming how we study the Earth, going beyond conventional multispectral imagery.
But here’s the challenge: hyperspectral cubes are voluminous, complex, and not always easy to process. This is where open-source tools and Python-powered deep learning come to the rescue.
In this hands-on workshop, we’ll explore the practical side of hyperspectral imaging using free, open, and accessible software. No expensive licenses, no black boxes, just Python-based open-source toolboxes: Spectral Python (SPy), Rasterio, GDAL, Scikit-Learn, EnMAP-Box, and PyTorch at your fingertips.
We start by demystifying hyperspectral data: how to open cube files and visualize them with different band combinations, using open hyperspectral datasets from DLR EnMAP, Wyvern, and NASA AVIRIS-NG. We then explore dimensionality reduction techniques (PCA, MNF, etc.) to reduce data complexity and improve the signal-to-noise ratio (SNR) for further analysis. Building on that foundation, we move to hands-on work with machine learning classifiers (RF, SVM, SAM) and deep learning architectures (CNNs, autoencoders) for applications like land cover mapping and environmental change detection. Lastly, a quick demo of EnMAP-Box, the QGIS plugin for hyperspectral data analysis, is followed by industrial case studies and a Q&A session.
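A minimal sketch of the first two steps (opening a cube and reducing its dimensionality), using Spectral Python and scikit-learn; the file name is hypothetical and PCA stands in here for the MNF transform that is also covered.

```python
import numpy as np
import spectral
from sklearn.decomposition import PCA

# Open an ENVI-format hyperspectral cube (the .hdr path is hypothetical).
img = spectral.open_image("enmap_scene.hdr")
cube = img.load()                       # shape: (rows, cols, bands)
rows, cols, bands = cube.shape

# Flatten to (pixels, bands) and keep the first 20 principal components.
pixels = np.asarray(cube).reshape(-1, bands)
pca = PCA(n_components=20)
reduced = pca.fit_transform(pixels)
reduced_cube = reduced.reshape(rows, cols, 20)

print("Explained variance of first 5 components:",
      pca.explained_variance_ratio_[:5])
```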
By the conclusion of the workshop, participants will have acquired not only technical skills in hyperspectral data analysis but also an understanding of the challenges and future directions in this domain, gaining practical experience with open-source workflows that support reproducibility and scalability, essential for addressing planetary-scale challenges.
The workshop is an interactive session: participants follow along with real datasets, code together, and walk away with ready-to-use workflows for their own projects. By emphasizing both methodological aspects and practical application, the session aims to bridge the gap between hyperspectral theory and operational practice, fostering broader adoption of open-source solutions in hyperspectral remote sensing.
Whether you’re an EO researcher, a developer curious about geospatial analysis, or a student eager to dive into hyperspectral analytics, this session will be a gateway to the next frontier of remote sensing.
So join and learn how to see beyond the visible—because the future of Earth observation lies in the invisible. 🌍✨
“SEMS Data Model: A Standards-Based Framework for Cross-Utility Geo-Spatial Big Data”
Venkata Satya Rama Rao Bandreddi;
Academic Track (Oral)
Modern utility systems are rapidly evolving into data-intensive ecosystems, driven by the growing deployment of smart sensors, real-time monitoring devices, and location-aware technologies. These systems increasingly require interoperable and scalable data architectures capable of handling massive, high-frequency sensor streams across electricity, water, and gas networks. In this context, the Spatial Energy Management System (SEMS) Data Model is introduced as a generic, extensible, and open-standards–driven framework designed to support the integration, storage, and visualization of multi-utility sensor and spatial data. The model is aligned with the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) standards and implemented through the SensorThings API, ensuring interoperability, semantic consistency, and efficient data exchange across heterogeneous systems.
A distinctive aspect of the SEMS data model lies in its capacity to manage Geo-Spatial Big Data, which emerges from the continuous generation of sensor observations with both spatial and temporal dimensions. Smart meter sensor data, captured at very short time intervals, exemplifies the characteristics of Big Data, exhibiting high volume and temporal density. Managing this inflow of data requires a robust system that not only ensures efficient storage and retrieval but also preserves the spatial relationships essential for contextual analytics. The spatial context plays a critical role, as localized conditions such as urban density, climatic variation, and topology directly influence consumption patterns. Therefore, the SEMS data model is designed to maintain spatial integrity and enable high-performance queries for spatio-temporal analysis, making it suitable for diverse smart utility applications.
The SEMS architecture is implemented using the FROST Server, which serves as the SensorThings API–compliant interface layer for managing sensor observations according to OGC SWE specifications. At the data persistence layer, SEMS employs a PostgreSQL database with the PostGIS extension, facilitating spatial indexing, geometric operations, and topological analysis. To further enhance data retrieval efficiency, the system introduces a Spatio-Temporal Aggregation Layer, a post-processing component that aggregates sensor data at multiple spatial and temporal resolutions. This layer acts as an intermediary between the raw data repository and the APIs that query it, performing pre-aggregation and summary computations. By generating multi-resolution data representations, the Spatio-Temporal Aggregation Layer significantly improves query response times and reduces computational overhead for high-frequency analytical workloads. The overall architecture supports two parallel data interaction mechanisms: (1) standardized SensorThings API endpoints that enable open, interoperable data exchange for external systems, and (2) direct REST endpoints that provide flexible access for analytics and visualization services.
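To make the SensorThings access path concrete, the sketch below queries a FROST Server for recent observations of one datastream and aggregates them hourly with pandas. The base URL and datastream id are hypothetical; the $filter, $orderby, and $top query options are part of the SensorThings API.

```python
import requests
import pandas as pd

# Hypothetical FROST Server endpoint and Datastream id.
BASE = "https://sems.example.org/FROST-Server/v1.1"
url = f"{BASE}/Datastreams(42)/Observations"
params = {
    "$filter": "phenomenonTime ge 2024-01-01T00:00:00Z",
    "$orderby": "phenomenonTime asc",
    "$top": 1000,
}

obs = requests.get(url, params=params, timeout=30).json()["value"]
df = pd.DataFrame({"time": [o["phenomenonTime"] for o in obs],
                   "kwh": [o["result"] for o in obs]})
df["time"] = pd.to_datetime(df["time"])

# Hourly aggregation, mirroring what the Spatio-Temporal Aggregation Layer
# pre-computes on the server side.
hourly = df.set_index("time")["kwh"].resample("1h").sum()
print(hourly.head())
```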
To evaluate its applicability, the SEMS data model is demonstrated through an energy utility case study focusing on the Jeedimetla region of Hyderabad, India, where smart meters have been deployed for a distribution network pilot project. The study utilizes simulated smart meter data modeled on observed consumption patterns of residential, commercial, and industrial consumers. Within the SEMS web application, the data is visualized through two major components: (1) a spatial visualization engine, which maps the electric network infrastructure and consumption clusters on an interactive map, and (2) a temporal analytics dashboard, which provides dynamic time-series visualization for smart meter data. These features collectively enable utility operators to explore energy usage variability, detect anomalies, and evaluate spatially driven consumption behavior.
The evaluation demonstrates that the OGC-compliant SEMS Data Model, enhanced with the Spatio-Temporal Aggregation Layer, offers efficient mechanisms for querying at multiple spatial levels and for visualizing spatio-temporal data across multiple utility domains. The study also highlights certain limitations of the SensorThings API, particularly in handling very high-frequency interval data, which can result in performance bottlenecks when managing large-scale smart meter datasets. However, the introduction of the Spatio-Temporal Aggregation Layer within the SEMS architecture effectively alleviates these challenges by pre-aggregating and optimizing sensor data, thereby enhancing query responsiveness and overall system scalability. The framework’s open and modular design further supports continuous optimization and scaling to accommodate future data growth and multi-utility integration.
In summary, the SEMS Data Model provides a robust and standards-aligned foundation for Geo-Spatial Big Data management across diverse utility sectors. By combining open geospatial standards, spatial databases, and web-based visualization frameworks, SEMS enables interoperable data access, rapid retrieval, and advanced analytics for real-time operational intelligence. The integration of the Spatio-Temporal Aggregation Layer further enhances scalability and responsiveness, positioning SEMS as a comprehensive solution for managing, visualizing, and analyzing Geo-Spatial Big Data in modern smart infrastructure systems.
“Shortest Route Selection for Medical Emergencies in Bengaluru city, Karnataka, India”
Adwait Priyadarshan;
Academic Track (Oral)
Adwait Priyadarshan and Manish Kumar Mishra*
Department of Computer Science & Engineering, IIIT-Bangalore-560100
Corresponding Author: Adwait.Priyadarshan@iiitb.ac.in
*Environmental Monitoring and Assessment Division, BARC, Mumbai-40085
Email: manishkm@barc.gov.in
The ‘Golden Hour’ rule of thumb emphasizes the importance of time management in cases of traumatic injury and medical emergency (Nyman, 2023). Hospital admission within less than an hour increases the survival quotient of the patient (Alagappan, 2025; Lerner and Moscati, 2001). The World Health Organization (WHO) has extended the concept to health emergencies, highlighting that rapid action saves lives and reduces morbidity. Congested city traffic often delays the transport of patients during medical emergencies. In the TomTom Traffic Index for 2024, a survey of 501 cities across 62 countries, Bengaluru ranked 3rd most congested worldwide; the average travel time to cover a distance of 10 km is 34 min 10 s, resulting in a loss of 117 hours per year (https://www.tomtom.com/traffic-index/ranking/). In the latest survey, carried out during Oct-Nov 2025, the city ranked 16th in the Global Traffic Congestion Index (https://trafficindex.org/?order=avg). Under such conditions, despite the availability of the desired medical care support system (Ramanayake et al., 2014), moving an injured or diseased patient to a hospital with a Critical Care Unit (CCU) requires careful route selection. The limitations of manual route selection during a medical emergency may lead to a worsening of the patient's condition, resulting in suffering for the individual concerned and their family. Delayed admission may occur due to congested traffic, poor road conditions, and the ambulance crew's judgmental approach. Automated geospatial route analysis can serve as a Decision Support System (DSS) for finding the shortest route for ambulances during an emergency. The shortest path is decided through an OD (Origin-Destination) cost-matrix analysis, taking into consideration the least-cost path with respect to the shortest time, the shortest distance, and/or the most optimal route from source to destination at a particular time of day (office hours cause major traffic delays). Further, the route selection should also ensure a minimum response time. This response time depends on the condition of the emergency response vehicle and possible obstructions or prohibitions along the road network. The study presented here describes a model for the shortest route in Bengaluru city (urban area) for an ambulance to reach the nearest hospital facility. In this paper, an open-source GIS platform is used as a geospatial network tool for working out the road network for ambulances in case of a medical emergency. The major hospitals of Bengaluru city were overlaid on the road layer, downloaded from OpenStreetMap (OSM), in QGIS (ver. 3.44 Solothurn); the investigation generates multiple sets of service areas, from 1 km (1000 m) to 5 km around healthcare facilities, to evaluate spatial coverage. For this geospatial network analysis, a set of fifteen (15) hospitals, both privately owned and government facilities with quality infrastructure, has been identified to represent the centroids for further accessibility modelling. The population coverage around these major hospitals is also estimated using Voronoi polygons and the Global Human Settlement Layer (GHSL).
The methodology applies spatial network analysis tools for visualizing and quantifying the accessibility of emergency medical care destinations, thereby offering a practical approach for ambulance fleet managers, emergency responders, social support systems, public health authorities, and policy planners. Although a convenient GIS-based approach is presented in this paper, the inclusion of a few more factors, such as barriers or blockages on the road, road width and quality, and weather conditions, could further refine the proposed model.
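A minimal sketch of the shortest-route step, using QGIS's built-in network analysis algorithm from the Processing toolbox inside the QGIS Python console. The layer path and coordinates are hypothetical, and the full workflow additionally uses OD cost matrices and service areas.

```python
# Run inside the QGIS Python console (layer path and points are hypothetical).
import processing

result = processing.run(
    "native:shortestpathpointtopoint",
    {
        "INPUT": "/data/bengaluru_osm_roads.gpkg|layername=roads",
        "STRATEGY": 0,                                   # 0 = shortest distance
        "START_POINT": "77.5946,12.9716 [EPSG:4326]",    # incident location
        "END_POINT": "77.6040,12.9352 [EPSG:4326]",      # nearest CCU hospital
        "TOLERANCE": 0,
        "OUTPUT": "memory:shortest_route",
    },
)
route_layer = result["OUTPUT"]
for f in route_layer.getFeatures():
    print("Route geometry length (layer CRS units):", f.geometry().length())
```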
References
1. Alagappan, Dhavapalani. “When Every Second Counts – How to Be Emergency Ready in Times of Health Crises.” Health. The Hindu, May 27, 2025. https://www.thehindu.com/sci-tech/health/when-every-second-counts-how-to-be-emergency-ready-in-times-of-health-crises/article69624516.ece.
2. Lerner, E. B., and R. M. Moscati. “The Golden Hour: Scientific Fact or Medical ‘Urban Legend’?” Academic Emergency Medicine: Official Journal of the Society for Academic Emergency Medicine 8, no. 7 (2001): 758–60. https://doi.org/10.1111/j.1553-2712.2001.tb00201.x.
3. Nyman, Par. “The Critical Importance of Early Emergency Response: Saving Lives in the First Hour.” Journal of Labor and Childbirth 6, no. 5 (2023): 135–36.
4. Ramanayake, R. P. J. C., Sudeshika Ranasingha, and Saumya Lakmini. “Management of Emergencies in General Practice: Role of General Practitioners.” Journal of Family Medicine and Primary Care 3, no. 4 (2014): 305–8. https://doi.org/10.4103/2249-4863.148089.
“Spatial Flood Susceptibility Assessment in Central Nepal Using GIS, AHP, Sensitivity Scenarios and OpenStreetMap Stream Data”
Prativa Thapa;
Academic Track (Oral)
Floods are among the most frequent and destructive natural hazards in Nepal, causing widespread damage to lives, infrastructure, agriculture, and ecosystems. The country’s rugged terrain, monsoon-driven climate, and rapidly expanding settlements make flood risk assessment a critical priority for sustainable development and disaster preparedness. This study presents a comprehensive GIS-based flood susceptibility mapping of the Sunkoshi River Basin and six adjoining districts—Kavrepalanchok, Sindhuli, Dolakha, Ramechhap, Sindhupalchok, and Okhaldhunga—using a Multi-Criteria Decision Analysis (MCDA) framework integrated with sensitivity analysis.
Nine flood-influencing parameters were selected based on their hydrological and geomorphological significance: slope, aspect, rainfall, drainage density, distance from river, Normalized Difference Vegetation Index (NDVI), Topographic Wetness Index (TWI), land use/land cover (LULC), and soil type. The Analytic Hierarchy Process (AHP) was employed to assign relative weights to each criterion, reflecting their contribution to flood risk. These weighted layers were then integrated using a weighted overlay technique within a GIS environment to generate a flood susceptibility map that classifies the region into four risk zones: very low, low, moderate, and high.
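For readers unfamiliar with AHP, the weight derivation can be sketched as follows: a pairwise comparison matrix on Saaty's 1-9 scale is built, the weights are the normalised principal eigenvector, and a consistency ratio checks the judgements. The 3x3 matrix below is purely illustrative, not the study's actual nine-parameter matrix.

```python
import numpy as np

# Illustrative pairwise comparison matrix (Saaty scale) for three criteria,
# e.g. rainfall vs. drainage density vs. distance from river.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Weights = normalised principal eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (random index RI = 0.58 for n = 3); CR < 0.1 is acceptable.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("weights:", weights.round(3), "CR:", round(cr, 3))
```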
A key innovation of this study lies in its incorporation of sensitivity analysis through four distinct weighting scenarios: equal weights, increased weight for drainage density, increased weight for rainfall, and increased weight for distance from river. This approach allowed for a robust evaluation of how individual parameters influence flood susceptibility outcomes, enhancing the reliability and adaptability of the model under varying environmental conditions.
OpenStreetMap (OSM) played a pivotal role in this research by providing freely accessible, community-curated spatial data. OSM was instrumental in sourcing and validating key layers such as drainage networks, settlement footprints, and road infrastructure. Its integration not only improved the granularity and coverage of the input datasets but also underscored the power of open data in disaster risk modeling. By leveraging OSM, the study promotes transparency, replicability, and local engagement—empowering mapping communities and youth-led initiatives to contribute directly to climate resilience and spatial planning.
The resulting flood susceptibility maps offer actionable insights for policymakers, urban planners, and disaster management authorities. They can guide infrastructure development, emergency response strategies, and long-term mitigation efforts in flood-prone regions of Nepal. Moreover, the methodology serves as a replicable model for other mountainous regions facing similar hydrological challenges, especially where open data and community mapping initiatives like OSM are actively growing.
This research not only advances technical understanding of flood risk but also demonstrates how open geospatial platforms and youth-led mapping movements can bridge scientific rigor with inclusive decision-making. It reflects the potential of integrating academic expertise with grassroots data contributions to build resilient communities in the face of climate uncertainty.
“Spatio-Temporal Analysis of Key Air Pollutants and Hotspot Identification for Environmental Justice: A Case Study of Kathmandu Valley”
Nishan Bhattarai, Pragati Dhakal, RESHMA SHRESTHA, Manash;
Academic Track (Oral)
Air pollution is one of the major risk factors for death and disability in many developing cities around the world. The seriousness of this problem is exacerbated by rapid urbanization and industrialization in these places. Within Nepal, the Kathmandu Valley is the most vulnerable place for pollution, covered during peak pollution episodes by a toxic haze of dust and smoke. This is due to its dense population, vehicular congestion, clustered industries, and bowl-shaped topography that restricts atmospheric dispersion. Studies of air pollution in Nepal have focused primarily on particulate matter, while the spatio-temporal variability and clustering patterns of gaseous pollutants remain less well understood. Therefore, properly understanding and addressing this gap is crucial not only for scientific assessment but also for advancing environmental justice, as pollution burdens often fall disproportionately on densely populated, low-income, or environmentally marginalized communities.
The main objective of this study is to assess the spatio-temporal dynamics of four key air pollutants (NO₂, SO₂, CO, and O₃) from 2019 to 2024 in the Kathmandu Valley and to identify statistically significant hotspot zones. Specifically, the study aims to (i) detect changes in pollutant levels across different seasons and years, and (ii) identify areas with consistently high or low concentrations of pollutants, revealing long-term pollution hotspots and their implications for urban environmental quality.
Google Earth Engine is used for the analysis, with Sentinel-5P TROPOMI satellite observations processed as multi-year time series and aggregated seasonally. For each pollutant, average concentration maps were created for both annual and seasonal periods, covering the four major climatic seasons: pre-monsoon, monsoon, post-monsoon, and winter. These maps were then analyzed in QGIS using the Hotspot Analysis tool to compute Getis-Ord Gi* statistics, identifying statistically significant clusters of unusually high (hotspots) and low (cold spots) pollution levels.
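A minimal sketch of the seasonal aggregation step in the Earth Engine Python API, shown for NO₂; the valley rectangle and the winter date window are illustrative assumptions.

```python
import ee
ee.Initialize()

# Approximate Kathmandu Valley bounding box (illustrative).
valley = ee.Geometry.Rectangle([85.2, 27.55, 85.55, 27.8])

no2 = (ee.ImageCollection("COPERNICUS/S5P/OFFL/L3_NO2")
       .select("tropospheric_NO2_column_number_density")
       .filterBounds(valley))

def winter_mean(year):
    """Mean tropospheric NO2 for one winter (Dec of `year` to Feb of `year`+1)."""
    start = ee.Date.fromYMD(year, 12, 1)
    end = ee.Date.fromYMD(year + 1, 3, 1)
    return no2.filterDate(start, end).mean().clip(valley).set("season",
                                                              f"winter_{year}")

winter_2023 = winter_mean(2023)
```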
The findings revealed that NO₂ and CO concentrations consistently remain high in the central and southern parts of the Kathmandu Valley, where urban density and traffic are at their peak. Their levels are highest during the winter and pre-monsoon seasons, when temperature changes and weak air circulation trap pollutants near the ground. The concentrations of these pollutants dropped noticeably during the COVID-19 lockdown but gradually increased as human activities resumed. Compared to NO₂, CO was spread more widely across the area, with occasional increases toward the western and southeastern edges, likely due to fuel burning and mixed land-use activities. SO₂ levels tended to cluster in the northwestern and outer parts of the valley, likely the result of nearby industrial and brick kiln emissions. There is also a slight decrease in SO₂ levels from 2019 to 2024, attributed to reduced industrial activities. In contrast, O₃ showed an opposite pattern to NO₂, with higher concentrations in the northern and eastern parts of the valley. Its peak occurred during the pre-monsoon season, which aligns with strong sunlight and the availability of precursor gases that promote ozone formation. However, O₃ shows relatively low variability between maximum and minimum values, both spatially and temporally, because it is a secondary pollutant formed photochemically in the atmosphere rather than emitted directly from point sources.
Hotspot analysis using Getis-Ord Gi* revealed that significant high-value clusters of NO₂ and CO were found consistently in the central part of the Kathmandu Valley, with slight expansion in winter. SO₂ hotspots, in contrast, appeared more scattered and localized. O₃, on the other hand, showed high-value clusters mainly in higher-elevation fringe areas, an opposite spatial pattern. Comparing seasonal hotspot maps from 2018 to 2024 showed that while the overall size of the NO₂ and CO hotspots decreased slightly, their main spatial patterns stayed largely the same, indicating persistent core emission zones with only minor seasonal and yearly shifts.
Gaseous pollution data are scarce. This study relies entirely on openly accessible satellite data and fully open-source analytical platforms, demonstrating the capability of FOSS4G tools for advanced air quality research even in times of data scarcity. The combined use of Google Earth Engine to track changes over time and Getis-Ord Gi* statistics to pinpoint pollution hotspots provides a flexible way to monitor air pollution. The findings show that winter and pre-monsoon remain the most critical pollution periods for the primary gases (NO₂, CO, SO₂), while O₃ follows a photochemical pattern, peaking in the pre-monsoon months. By highlighting communities that might be disproportionately exposed to dangerous air pollutants, identifying these persistent hotspot zones is crucial for environmental justice assessments. This study adds to the mounting evidence that FOSS4G tools can successfully connect policy applications, environmental equity concerns, and scientific research.
“Standards are Boring, but not Unimportant”
ark Arjun;
General Track
Geospatial standards are rarely headline material. They are viewed as rigid, technical, and often “boring” — until their absence turns data sharing from a convenience into a crisis. This talk argues that standards are not just compliance checkboxes: they are core infrastructure that enables open data to become genuinely reusable, powering everything from routing engines to real-time planning dashboards. Using lived examples from aviation and urban transit, the session shows how invisible standards underpin the systems we rely on. The talk highlights why interoperability, not just openness, is essential for geospatial data.
Central to the talk are the FAIR principles: Findable, Accessible, Interoperable, and Reusable. While often quoted in policy documents, FAIR becomes tangible when applied to everyday geodata: why can some city bus routes be instantly integrated into maps and journey planners, while others remain trapped in PDFs or proprietary formats? The session breaks down each element of FAIR using practical geospatial examples such as GTFS for public transport, OGC standards, and open licensing. Participants will see how metadata, identifiers, vocabularies, and licences determine whether data can be combined, automated, and scaled.
The presentation also addresses the hard parts: legacy systems, historical non-standard data, organisational resistance, costs of change, and the confusion created by multiple overlapping standards. Rather than treating these as purely technical issues, the talk frames them as socio-technical challenges involving incentives, governance, and community processes.
“State of the MapLibre Tile Format”
Frank Elsinga, Bart Louwers;
General Track
The MapLibre community is currently in the midst of developing the MapLibre Tile Format, a modern, open, and fully community-governed successor to the ubiquitous Mapbox Vector Tile (MVT) format. While MVT has served the mapping ecosystem well for over a decade, it also carries historical constraints that limit interoperability, formal specification quality, extensibility, and independence from proprietary platforms. As MapLibre continues to grow as the central open-source foundation for web-based map rendering, it has become increasingly clear that a future-proof, openly specified, and collaboratively designed tile format is essential.
This talk will offer a detailed look into why we initiated this engineering effort and what gaps the new format aims to close. I will explain the core design principles behind the specification—clarity, strictness where needed, optionality where useful, and full transparency throughout the process. Attendees will gain a technical understanding of how the format works, including its data model, feature encoding strategy, metadata approach, and compatibility considerations for existing infrastructure.
Beyond the current specification draft, I will outline the major areas still under active development. These include discussions around schema evolution, advanced geometry representations, compression strategies, and interoperability with raster, elevation, 3D, and non-geographic datasets. I will also provide insight into the collaborative workflow between maintainers, researchers, vendors, and the wider open-source community, highlighting where contributions and feedback are particularly welcome.
Finally, the talk will cover how the rollout is progressing in practice. This includes early tooling support, reference implementations, testing frameworks, and real-world trials by organizations exploring migration paths away from MVT. The session will present an honest, up-to-date snapshot of the project’s status and a forward-looking roadmap for the next stages of development, helping the community understand both what is ready today and what is still on the horizon.
“Stratigraphy Explorer: An Open-Source, Web-Based Stratigraphic Information System for India Leveraging Open Geospatial Data”
Damini Khairwar;
Academic Track (Oral)
The increasing availability of open geospatial data and rapid advancements in open-source geospatial technologies have created significant opportunities for developing interactive educational and analytical systems in the geosciences. Traditionally, geological and stratigraphic datasets have been disseminated through printed maps, field sheets, and static digital documents. While useful, these formats limit accessibility, lack dynamic interaction, and constrain the ability to visualize spatial relationships between geological formations and mineral occurrences. Recognizing these limitations, a Web-based Stratigraphic Information System called Stratigraphy Explorer was developed to support interactive learning and analysis of India’s geological formations and mineral resources through spatial visualization.
Stratigraphy, the study of rock layers and their spatial and chronological relationships, is fundamental to interpreting Earth’s geologic history. India’s stratigraphic framework is highly diverse, ranging from the ancient Archean cratons of the Peninsular Shield to the young Himalayan sequences. However, many students struggle to conceptualize stratigraphic relationships due to limitations of static maps, restricted field exposure, and difficulty interpreting three-dimensional structures using printed visuals. Access to updated geological maps is limited, and conventional learning materials lack interactivity, making it difficult to link stratigraphic units with real-world locations, topographic expression, and associated mineral occurrences. These challenges highlight the urgent need for dynamic GIS-enabled learning environments that integrate spatial and descriptive geological knowledge.
The system incorporates open geological and mineral datasets sourced from the Bhukosh portal of the Geological Survey of India, which provides standardized and interoperable datasets essential for academic research and classroom training. Bhukosh supplies nationwide layers including geological formations, lithological units, structural information, tectonic features, and mineral occurrence data. Availability of such authoritative datasets enables transparent, reproducible, and collaborative development of geospatial learning systems.
The development of Stratigraphy Explorer followed a comprehensive workflow built entirely using open-source geospatial technologies. Geological and mineral datasets were acquired from Bhukosh and preprocessed in QGIS through geometry correction, attribute field restructuring, clipping, and stratigraphic classification to ensure accuracy and consistency. The refined datasets were imported into PostgreSQL/PostGIS, where spatial indexing, topology validation, and query optimization ensured efficient storage and rapid retrieval. GeoServer was configured to publish datasets as OGC-compliant WMS and WFS layers, enabling seamless interoperability for web visualization. On the client side, OpenLayers was used to build a browser-based interactive interface with customizable layer controls, attribute filters, spatial querying tools, and dynamic mineral overlays. Custom JavaScript functions compute predominant formations and generate attribute summaries to support user-driven exploration. The interface, designed in HTML/CSS, provides a clean and responsive layout supported by a configurable base map including Google Maps and Google Satellite layers, enabling users to precisely locate geological units and observe their surface expressions.
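As a hedged illustration of the kind of query this stack supports, the sketch below uses Python to request features from a GeoServer WFS endpoint and tally the predominant formation, a server-side analogue of the client-side JavaScript summaries described above; the endpoint, layer name, and attribute names are placeholders, not the project's actual configuration.

```python
# Minimal sketch (not the project's code): query a GeoServer WFS layer and tally
# the predominant formation. "strat:geology", "formation_name", and the CQL filter
# are hypothetical names used only for illustration.
from collections import Counter
import requests

WFS_URL = "https://example.org/geoserver/wfs"  # placeholder endpoint

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "strat:geology",
    "outputFormat": "application/json",
    "cql_filter": "state = 'Madhya Pradesh'",
}

features = requests.get(WFS_URL, params=params, timeout=60).json()["features"]

# Count each formation and report the most frequent one in the selection.
counts = Counter(f["properties"]["formation_name"] for f in features)
formation, n = counts.most_common(1)[0]
print(f"Predominant formation: {formation} ({n} of {len(features)} features)")
```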
Key capabilities of Stratigraphy Explorer include multilayer filtering of geological formations based on state boundaries, geologic age, and lithological categories, along with automated visualization of predominant formations within selected regions. The system supports overlay of mineral deposit layers with filtering by mineral type and administrative boundary, enabling users to investigate spatial relationships between rock types and mineralization patterns. For each selected unit, concise descriptive text supports interpretation and conceptual learning, effectively bridging theoretical geology with spatial visualization.
The integration of Google Maps and Satellite base layers is a major functional enhancement that allows users to compare geological datasets with real terrain features and landscape morphology. This provides important educational advantages, as the ability to visually correlate geology with topography cannot be achieved through conventional printed maps or PDF-based GIS materials.
Stratigraphy Explorer demonstrates the potential of open-source geospatial technologies to build accessible, scalable, and cost-effective academic tools without reliance on proprietary platforms. The system promotes open scientific education, encourages inquiry-based and self-paced learning, and supports decision-making in natural resource evaluation, geological mapping, and environmental planning. The initiative highlights how open data and open-source technologies can democratize geoscience education and make high-quality spatial resources accessible to institutions with limited technical infrastructure.
Future enhancements include integrating district-level geological datasets, Survey of India toposheet references, and geotagged field photographs to strengthen field-based learning. Additional features such as virtual field trip modules, geoheritage and geotourism site information, and interactive map-based quizzes are planned to enhance experiential learning and practical skills. These improvements aim to expand the system into a comprehensive digital learning ecosystem capable of supporting classroom instruction, independent study, and field-based geological training.
In conclusion, Stratigraphy Explorer provides an accessible, dynamic, and technically robust approach to visualizing stratigraphic and mineral resource information. The system enhances geoscience education and analytical research by enabling users to interactively explore geological datasets with spatial accuracy. It presents a replicable framework for institutions seeking to integrate open-source Web-GIS technologies into modern geoscience education and collaborative resource management.
“Strengthening Governance in North East India through FOSS”
Nilay Nishant;
General Track
India's North Eastern Region (NER) is experiencing rapid transformation through centrally funded initiatives including the North East Special Infrastructure Development Scheme (NESIDS), the Prime Minister's Development Initiative for North-East Region (PM-DEVINE), and other Government of India schemes. However, effective governance faces significant challenges from the region's rugged mountainous terrain, dense forests, dispersed settlements, and poor connectivity, all compounded by frequent natural disasters such as floods, landslides, and earthquakes. These conditions necessitate safe, adaptive, scalable, and interoperable spatial intelligence systems to enable informed planning, optimized resource allocation, and efficient service delivery. Free and Open-Source Software (FOSS) tools provide a critical foundation for building cost-effective, transparent, and collaborative smart governance ecosystems in NER.
In response to these challenges, a Department of Space initiative sponsored by the Ministry of Development of North Eastern Region (DoNER) developed the North Eastern Spatial Data Repository (NeSDR), a comprehensive space-based governance platform. Built on a FOSS technology stack, NeSDR delivers curated GIS layers spanning administrative boundaries, land resources, water systems, climate data, infrastructure, utilities, terrain, and disaster management datasets. The platform supports 126 region-wide geospatial products and various applications, facilitating over 90,000 individual data downloads. With more than 65 lakh (6.5 million) user visits recorded, these metrics demonstrate both the system's extensive reach and its critical role in democratizing access to authoritative spatial data resources for strengthened decision-making across NER.
The platform is built on a robust three-tier service-oriented architecture leveraging entirely open-source technologies. At the data layer, PostgreSQL with PostGIS extensions provides enterprise-grade spatial database capabilities, enabling complex geospatial queries and efficient data management. The middleware layer utilizes PHP for application logic and API services, while GeoServer serves as the primary map server, implementing OGC-compliant WMS, WFS, and WCS services for seamless interoperability. The presentation layer employs ReactJS for building responsive, dynamic user interfaces that deliver an intuitive user experience across devices.
To address the unique challenges of processing high-resolution satellite imagery and large-scale raster datasets common in the NER's remote sensing applications, a custom raster server has been developed. This specialized component enhances tile processing performance and supports advanced raster algebra operations, enabling on-the-fly processing of satellite imagery without requiring pre-generated tile caches. The architecture also integrates OpenLayers for interactive web mapping, GDAL for raster data processing, and various Python-based geospatial libraries for automated workflows. This comprehensive FOSS stack ensures vendor independence, cost-effectiveness, and the flexibility to customize solutions for NER's specific governance requirements.
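As a rough sketch of the on-the-fly raster algebra idea, not NeSDR's custom raster server, the snippet below computes an NDVI tile for an arbitrary window of a multi-band GeoTIFF without any pre-generated cache; the file name and band order are assumptions.

```python
# Hedged illustration of on-the-fly raster algebra: read only the requested window
# and compute NDVI at request time. Band 3 = red and band 4 = near-infrared are
# assumptions about the scene layout, and "scene.tif" is a placeholder.
import numpy as np
import rasterio
from rasterio.windows import Window

def ndvi_tile(path, col_off, row_off, size=256):
    """Compute NDVI for a single tile window, with no pre-generated tile cache."""
    with rasterio.open(path) as src:
        window = Window(col_off, row_off, size, size)
        red = src.read(3, window=window).astype("float32")
        nir = src.read(4, window=window).astype("float32")
    # Guard against division by zero where both bands are empty.
    denom = np.where((nir + red) == 0, 1, nir + red)
    return (nir - red) / denom

tile = ndvi_tile("scene.tif", col_off=1024, row_off=2048)  # hypothetical scene
print(tile.shape, float(tile.mean()))
```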
Platform data powers several custom governance applications, including the NEC/MDoNER Project Monitoring Portal for tracking critical development infrastructure across 16 projects and 1,664 physical sites, the NavIC mobile navigation tools, the e-ATLAS election management platform (operational since 2018 across multiple states and national elections with over 3,000 officials participating), and the Assam Alerting System for integrating warnings from IMD, CWC, and MOSDAC to support incident-response coordination. The SMART-AXOM multi-hazard early warning system provides intelligent routing to tourist circuits, navigation services, and amenity information, while the Geo-Tuition platform delivers interactive educational resources. The FRAMS system supports machine learning-based deforestation detection and automated alert generation for fire and health hazards, including AI-based malaria risk mapping (2018-2023) and automated mosquito species detection, advancing surveillance for remote and high-risk communities.
NeSDR has enabled substantial capacity-building efforts through 30+ training programs reaching 2,500+ officials, cultivating a skilled ecosystem capable of managing open-source geospatial technologies for sustainable regional development.
“The CoRE stack architecture: Computationalizing landscapes for programmers”
CoRE stack, Kapil Dadheech, Aman Verma, Nirzaree;
General Track
The CoRE Stack (Commoning for Resilience and Equality) is a community-based digital public infrastructure consisting of datasets, algorithm pipelines, and user-facing tools that can be used by rural communities and other stakeholders to improve the sustainability and resilience of their local landscapes. The stack broadly consists of three layers. First, datasets comprising novel geo-spatial layers on changes over the years in cropping intensity, water-table levels, the health of waterbodies, forests and plantations, and welfare fund allocation, among others, sourced from multiple contributors or built using open ML models operating on satellite data. Second, rich analytics on diverse socio-ecological indicators through scientifically validated monitoring and modelling methodologies and algorithms. Third, digital tools and dashboards that enable communities to build a shared understanding of their landscape, align on informed action to improve its resilience and sustainability, monitor progress, report insights, and aid collective decision-making.
With a view to describing any place – village or watershed or an individual pixel – in terms of variables that capture the history of the place (time-series of rainfall, land-use, tree cover, water balance, etc.) as well as the broader context in which it is located (river basin, climatic zone, proximity to large waterbodies, forests, cities, etc.), the CoRE stack exemplifies a new approach to geospatial programming.
Further, the architecture that computes these datapoints takes a systematic approach: a large set of computational pipelines is chained together in a Directed Acyclic Graph (DAG) and executed, in part, either locally or through computation triggered on platforms such as Google Earth Engine or GPU servers. The analytics and geospatial layers are then pushed to a GeoServer instance from where they are served to mobile applications and web dashboards. In the talk, we will outline this architecture, the challenges we faced in creating a robust and well-managed data setup, and open questions and methodologies on which we are seeking collaboration from the wider community.
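The toy sketch below illustrates the DAG idea with Python's standard-library graphlib; it is not the CoRE Stack codebase, and the step names and dependencies are invented for the example.

```python
# Toy illustration of chaining pipeline steps in a DAG and running them in
# dependency order. Real deployments typically delegate this to a workflow engine.
from graphlib import TopologicalSorter

def fetch_rainfall():
    print("fetching rainfall time series")

def fetch_landuse():
    print("fetching land-use layers")

def compute_water_balance():
    print("computing water balance from rainfall and land use")

def publish_to_geoserver():
    print("pushing derived layers to GeoServer")

# Each step lists the steps it depends on (hypothetical names).
steps = {
    "rainfall": (fetch_rainfall, set()),
    "landuse": (fetch_landuse, set()),
    "water_balance": (compute_water_balance, {"rainfall", "landuse"}),
    "publish": (publish_to_geoserver, {"water_balance"}),
}

# Execute each step only after all of its dependencies have completed.
order = TopologicalSorter({name: deps for name, (_, deps) in steps.items()})
for name in order.static_order():
    steps[name][0]()
```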
“Tile serving with MapLibre/Martin/Planetiler - base and overlays Workshop”
Frank Elsinga;
Workshop Proposals
Create a tile server with a base map and some custom data, and build a website that displays both using MapLibre GL+Martin+PG+Planetiler+osm2pgsql+...
In this workshop we will generate base map tiles from OSM data using Planetiler, set up the Martin tile server, and set up nginx to serve a sample website that uses MapLibre GL JS to show the map. Additionally (time permitting), we will add a PostgreSQL server, use osm2pgsql to import extra data from the same OSM dump, and do on-the-fly tile generation from PG.
Topics we plan to cover in this workshop:
* generating base maps
* setting up postgres with data
* generating overlay tiles on the fly
* serving tiles
* visualizing tiles with MapLibre
* adding data layers
By the end of the workshop, participants will have hands-on experience with the complete pipeline—from raw OSM data to a fully interactive web map—including managing custom data, combining multiple tile sources, and optimizing the stack for performance, enabling them to confidently build and extend their own open-source mapping solutions.
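For the optional on-the-fly step, the hedged sketch below shows the core trick that Martin automates: rendering a Mapbox Vector Tile directly from PostGIS with ST_AsMVT. The connection string, table name, and tile coordinates are placeholders, and the geometry column is assumed to be stored in EPSG:3857.

```python
# Minimal sketch of on-the-fly PG tile generation (Martin does this for you).
# "osm_amenities" is a hypothetical osm2pgsql output table.
import psycopg2

SQL = """
WITH mvtgeom AS (
    SELECT ST_AsMVTGeom(geom, ST_TileEnvelope(%(z)s, %(x)s, %(y)s)) AS geom, name
    FROM osm_amenities
    WHERE geom && ST_TileEnvelope(%(z)s, %(x)s, %(y)s)
)
SELECT ST_AsMVT(mvtgeom, 'amenities') FROM mvtgeom;
"""

def tile(z, x, y):
    """Return one vector tile (protobuf bytes) rendered directly from PostGIS."""
    with psycopg2.connect("dbname=osm user=postgres") as conn, conn.cursor() as cur:
        cur.execute(SQL, {"z": z, "x": x, "y": y})
        return bytes(cur.fetchone()[0])

print(len(tile(14, 12109, 6624)), "bytes")  # arbitrary example tile coordinates
```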
See https://github.com/maplibre/workshop?tab=readme-ov-file#pre-reqs for pre-requisites
“Toolkit for Radar Analysis and Classification for Education (TRACE)”
Siddharth Yadav;
Academic Track (Oral)
Toolkit for Radar Analysis and Classification for Education (TRACE)
Siddharth Yadav [1], Siddharth Nair [2], N. A. Anjita [1] and J. Indu [1]
[1] Department of Civil Engineering, Indian Institute of Technology Bombay, Powai, Mumbai, India
[2] Department of Civil Engineering, Indian Institute of Technology Madras, Chennai, India
Doppler Weather Radars (DWRs) serve as one of the most reliable tools for observing atmospheric phenomena, providing large volumetric datasets that capture precipitation intensity, storm structure dynamics, and wind fields with high spatial and temporal resolution. These datasets, generally stored in the NetCDF format, are inherently large and complex, requiring robust analytical and visualization frameworks for effective utilization.
Globally, several open-source initiatives have transformed radar data processing. Notable examples include the Python ARM Radar Toolkit (Py-ART), which supports volumetric visualization, gridding, and storm structure analysis. Similarly, wradlib, an open-source Python library, provides functionalities for radar-based hydrological research, addressing challenges such as data correction and geo-referencing. In India, the Python Indian Weather Radar (pyiwr) toolkit has been tailored to local DWR data formats for reflectivity analysis and quantitative precipitation estimation.
Despite these advancements, significant challenges persist: several existing radar toolkits lack cross-platform interoperability, limiting seamless integration with diverse datasets. Additionally, many toolkits were originally designed for temperate regions, with limited adaptability to the convective and monsoonal weather patterns characteristic of tropical regions like South Asia. These limitations underscore the need for an open-source solution that integrates research-based data processing with interactive educational tools, enabling a broader audience, from students to scientists, to effectively explore and interpret weather radar data.
To address these gaps, the Toolkit for Radar Analysis and Classification for Education (TRACE) was developed using Python frameworks as an open-source, integrated, and interactive platform for learning and research purposes. TRACE has been designed to empower geospatial innovation by bridging theoretical radar meteorology with practical data analysis, simulation, and visualization. The toolkit facilitates the ingestion and analysis of NetCDF-based radar datasets from ISRO’s MOSDAC archive, allowing users to connect theory with real-world meteorological data. TRACE stands out among existing radar toolkits through its dual emphasis on conceptual learning and advanced research application. It includes simulation modules that illustrate fundamental radar concepts, such as antenna beam patterns, 3D radar scan visualization, and reflectivity simulation, for a conceptual understanding of radar scanning. Its interactive interface enables users to explore radar concepts through parameter manipulation and develop a deeper understanding of how atmospheric processes are represented in radar data.
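As a minimal sketch of the kind of workflow TRACE supports, not the toolkit's own code, the snippet below opens a NetCDF radar volume with xarray and plots one reflectivity field; the file name and the variable name DBZ are assumptions that depend on the actual MOSDAC product.

```python
# Hedged sketch: inspect and visualize one reflectivity field from a hypothetical
# DWR NetCDF file. Dimension and variable names vary between radar products.
import xarray as xr
import matplotlib.pyplot as plt

ds = xr.open_dataset("dwr_volume_scan.nc")   # placeholder file name
print(ds)                                    # inspect available dimensions/variables

refl = ds["DBZ"].isel(time=0)                # first time step, assumed variable name
refl.plot(cmap="viridis")
plt.title("Radar reflectivity (dBZ)")
plt.show()
```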
TRACE, developed using Free and Open Source Software (FOSS) tools such as Python and VS Code, embodies the principles of transparency, accessibility, and collaboration, core values that resonate with the spirit of FOSS4G. Built on open-source Python libraries, TRACE can be customized and integrated with other geospatial frameworks, encouraging active collaboration among educators, students, and researchers. Beyond education, TRACE offers practical benefits for meteorological research through visualization and classification of hydrometeor types. The toolkit’s adaptability further makes it relevant for applications in urban hydrology and climate impact assessment, where radar-based precipitation data are essential.
“Web-GIS–Enabled Crime Mapping and Analysis for Indore Police: An Open-Source Initiative”
Shailesh Chaure;
General Track
Geographic Information Systems (GIS) are widely used in developed countries for crime mapping, spatial analysis, and predictive policing. In India, however, GIS adoption in policing remains limited, with only a few law enforcement agencies meaningfully integrating spatial technologies into crime analysis. Bengaluru Police use the GIS-enabled Crime Mapping, Tracking, and Analysis System (CMTAS) for monitoring crime patterns and guiding patrols. Mumbai Police employ GIS within CCTNS to geo-tag FIRs and identify hotspots. Chennai Police, through Smart City initiatives, use GIS dashboards to analyse crime clusters and improve surveillance. Hyderabad and Delhi Police have also experimented with GIS platforms for traffic monitoring, emergency response, and sensitive zone mapping.
In the broader national landscape, two major systems—the Crime and Criminal Tracking Network & Systems (CCTNS) and the Dial-100 Emergency Response System—generate large volumes of operational crime and incident data. CCTNS standardizes FIR records, criminal histories, and investigation workflows across the country, with built-in provisions for geo-tagging. However, its spatial components remain largely underutilized, as the platform focuses primarily on database management rather than spatial analysis. Dial-100, which captures real-time distress calls and incident locations, also holds significant potential for temporal and spatial pattern analysis but is typically limited to basic dashboards. Integrating both datasets into robust GIS platforms—particularly open-source systems—could dramatically strengthen real-time situational awareness, hotspot detection, patrolling optimization, and evidence-based decision-making.
Motivated by the need for such capabilities, and as a strong advocate of open-source GIS technologies, I initiated the development of a comprehensive web-based crime mapping and analysis solution for the Indore Police. Indore—the largest city of Madhya Pradesh and a major educational and commercial hub—comprises 32 police station (Thana) circles. Although FIRs and crime reports routinely record geo-coordinates of incidents, no user-friendly system existed for interactive visualization, spatial querying, or analytical mapping. To address this gap, a detailed proposal outlining the design of a web-based GIS platform—built using PostgreSQL/PostGIS, GeoServer, and OpenLayers—was prepared and presented to the Indore Police.
The proposal was well-received, and development of the system commenced soon after. A fully functional web application was subsequently built by me and my team of postgraduate students. Thana boundaries were digitized with the assistance of police officials, and crime location data from all 32 Thanas were standardized and consolidated into a centralized PostgreSQL/PostGIS database. The development process was highly collaborative: my postgraduate students contributed significantly to database creation, digitization of administrative boundaries, interface design, and module testing, while officials from the Indore Police provided operational insights, validated spatial data, and supported the refinement of analytical workflows. This joint effort ensured that the platform remained technically sound, operationally relevant, and user-friendly for field-level policing needs.
Key Features of the Application:
• Interactive visualization of crime incidents across 17 crime categories, viewable at zone, sub-zone, and Thana levels.
• Advanced filtering tools enabling analysis by date, time, crime type, and administrative boundaries.
• Heat map generation to visualize spatial crime intensity patterns and hotspot clusters.
• Automated report generation with integrated charts, tables, and summaries.
• Identification and display of CCTV and citizen-eye camera locations along selected roads.
• Graduated colour maps highlighting overall spatial crime distribution trends across Indore.
The system is currently deployed on a physical server within the police department. Crime data for the year 2024 has been fully uploaded, and data for the current year is being cleaned, standardized, and integrated into the database.
The most significant constraint encountered was inconsistency in source data, as crime records were compiled using mixed workflows—manual entries in MS Excel and a third-party desktop application. To address this, work is underway to develop a unified, web-based data entry system for all police stations in Indore. Beginning in 2026, this enhanced system will ensure standardized data generation, minimize errors, and enable faster, seamless updates to the web-GIS platform.
With the growing volume of standardized, multi-year crime data being added to the system, a major planned enhancement is the incorporation of crime prediction and forecasting capabilities. Once a substantial historical dataset is available, machine learning models—such as Random Forest, Gradient Boosting, or space-time hotspot prediction algorithms—can be trained to identify emerging crime trends and forecast potential hotspots. Integrating these predictive analytics into the web-GIS platform will shift policing from a reactive to a proactive model, enabling strategic deployment, targeted patrolling, and early-warning alerts for high-risk areas.
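The sketch below is purely illustrative of this planned direction, not the deployed system: a scikit-learn model predicting monthly incident counts per Thana from simple lagged features, where the input CSV layout and feature choices are assumptions.

```python
# Illustrative only: forecast monthly incident counts per Thana from lagged counts.
# The CSV columns (thana, month, count) are a hypothetical export of the database.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("monthly_crime_counts.csv")
df = df.sort_values(["thana", "month"])
df["lag_1"] = df.groupby("thana")["count"].shift(1)     # previous month
df["lag_12"] = df.groupby("thana")["count"].shift(12)   # same month last year
df = df.dropna()

X = df[["lag_1", "lag_12"]]
y = df["count"]
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```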
The Indore Police Department has shown strong interest in the system, which is already being used for crime pattern analysis, patrolling route planning, hotspot identification, and resource prioritization. This initiative marks an important step toward mainstreaming open-source GIS solutions in urban policing and strengthening spatial decision support systems in India.
“Why OpenStreetMap is the Essential Data Layer for AI, Logistics, and Smart Cities”
Brazil Singh;
General Track
This presentation will demonstrate that for Asia's rapidly accelerating digital transformation, OpenStreetMap (OSM) is not merely a free map, but the essential, foundational data layer for future-proofing the continent's most critical growth sectors: Artificial Intelligence, Logistics, and Smart City development. From a strategic viewpoint, the OpenStreetMap Foundation and the global OSM ecosystem represent a critical, open infrastructure that de-risks development, democratizes innovation, and provides the only globally consistent, community-verified geospatial dataset capable of scaling with Asia's ambitions.
We will move beyond the common perception of OSM as a simple cartographic tool to reframe it as a dynamic, living data ecosystem. This ecosystem is the key to unlocking innovation in three core domains:
Fueling Artificial Intelligence: AI models are only as good as the data they are trained on. OSM provides an unparalleled source of real-world, structured geospatial data, spanning millions of points of interest, detailed land-use polygons, and intricate infrastructure networks. We will explore how this data is fundamental for training machine learning models in everything from autonomous vehicle perception and hyperlocal advertising to generative AI platforms that can translate natural language queries (e.g., "Find all hospitals with ambulance access in Dhaka") into complex spatial analysis, making geospatial intelligence accessible to non-experts for the first time.
Optimizing Next-Generation Logistics: In Asia’s dense urban centers and complex archipelagos, the "last-mile" problem is the "every-mile" problem. Proprietary mapping solutions are often costly, inflexible, and slow to update. OSM’s community-driven model provides the granular, up-to-the-minute detail that modern logistics engines crave, from new alleyways and one-way streets to data on cargo bike-specific paths and e-scooter-friendly zones. We will present case studies where OSM is used as the core routing layer for complex Vehicle Routing Problems (VRP), enabling businesses to build more efficient, sustainable, and cost-effective supply chains.
Building Resilient Smart Cities: A smart city is a data-driven city. OSM provides the foundational "digital twin" of the urban environment upon which all other data layers (e.g., IoT sensor data, demographic statistics, climate models) can be integrated. We will demonstrate how city planners and emergency responders are using OSM as the authoritative base map for critical applications, including 15-minute neighborhood analysis, climate adaptation planning, public transport optimization, and disaster response coordination, all without the prohibitive costs and vendor lock-in of commercial datasets.
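As a concrete, hedged illustration of the natural-language-to-spatial-query idea from the AI section above, the sketch below resolves a simplified version of the Dhaka hospitals question against OSM through the Overpass API; the bounding box is only an approximation of central Dhaka, and the tag filter deliberately simplifies "ambulance access".

```python
# Rough sketch: list hospitals in a bounding box around Dhaka from OSM via Overpass.
# Bounding box coordinates are approximate; tags are a simplification of the query.
import requests

OVERPASS = "https://overpass-api.de/api/interpreter"
query = """
[out:json][timeout:60];
(
  node["amenity"="hospital"](23.70,90.33,23.90,90.50);
  way["amenity"="hospital"](23.70,90.33,23.90,90.50);
);
out center;
"""

elements = requests.post(OVERPASS, data={"data": query}, timeout=90).json()["elements"]
for e in elements[:10]:
    tags = e.get("tags", {})
    print(tags.get("name", "unnamed hospital"), "| emergency:", tags.get("emergency", "?"))
```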
“Workflow Automation with QGIS: Tips and Tricks”
Ujaval Gandhi;
General Track
This talk will do a deep-dive into QGIS Processing Toolbox for building fast, automated and reproducible workflows. We will cover real-world use cases and share tips and tricks to help you leverage the no-code framework provided by QGIS.
The talk will feature real-world case studies showcasing use of the QGIS Processing framework. GIS workflows typically involve many steps, with each step generating an intermediate output that is used by the next step. If you change the input data or want to tweak a parameter, you will need to run through the entire process again manually. Fortunately, the Processing framework provides a way to define your workflow and run it with a single invocation. You can also run these workflows as a batch over a large number of inputs.
The QGIS Processing framework offers Batch Processing, the Model Designer, and the command-line utility as no-code solutions for analysts looking to automate their work. These tools allow you to take hundreds of native and third-party processing algorithms and build workflows for spatial analysis and map publishing.
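For readers who do want to peek under the hood, the hedged sketch below shows the same Processing algorithms driven from the QGIS Python console with processing.run; the folder paths and buffer distance are placeholders, and the no-code Batch Processing and Model Designer routes discussed in the talk achieve the same result without any scripting.

```python
# Hedged sketch, intended for the QGIS Python console: batch-buffer every layer
# in a folder with the native buffer algorithm. Paths and parameters are placeholders.
import glob
import os
import processing

for path in glob.glob("/data/wards/*.gpkg"):           # hypothetical input folder
    out = os.path.join("/data/buffers", os.path.basename(path))
    processing.run("native:buffer", {
        "INPUT": path,
        "DISTANCE": 500,        # metres, assuming a projected CRS
        "SEGMENTS": 8,
        "DISSOLVE": False,
        "OUTPUT": out,
    })
    print("wrote", out)
```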
This talk is ideal for participants who already use QGIS and want to take their skills to the next level. The talk will also cover advanced features that will make you more productive. Join this talk to learn new tricks to make the most out of the world's most popular open-source desktop GIS.
“Workshop on Web GIS Application Development Using GeoServer and OpenLayers”
Shailesh Chaure;
Workshop Proposals
With the increasing reliance on spatial information for planning, governance, and research, Web GIS has emerged as a vital platform for real-time visualization, sharing, and analysis of geospatial data. This workshop will introduce participants to the core concepts, tools, and workflows required to develop web-based GIS applications using open-source technologies such as GeoServer, OpenLayers, and PostGIS. Designed as a fully hands-on training program, it will guide participants through the complete process—from preparing and publishing spatial data in GeoServer to visualizing it interactively on a web map using OpenLayers. By the end of the program, attendees will have developed a basic yet functional Web GIS application capable of displaying spatial data layers seamlessly within a web browser, offering a strong foundation for more advanced applications.
The workshop is structured into five modules, beginning with an introduction to Web GIS architecture, the client–server model, and OGC standards that enable interoperability across platforms. This includes an overview of key web services such as Web Map Service (WMS) for rendering map images and Web Feature Service (WFS) for accessing vector feature data, which form the backbone of most modern Web GIS applications. The subsequent modules delve into configuring GeoServer, publishing vector data, styling layers, building dynamic web maps using the OpenLayers JavaScript library, and integrating these components into a simple, cohesive Web GIS interface. The concluding module features demonstrations of real-world applications, interactive discussions, and an open question–answer segment aimed at helping participants apply the acquired skills to their own projects and research needs.
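As a small, hedged sketch of what a published WMS layer looks like from the client side (a Python analogue of the OpenLayers requests made in the browser), the snippet below fetches a rendered map image from a local GeoServer with OWSLib; the endpoint, layer name, and extent are placeholders.

```python
# Hedged sketch: list published layers and fetch one rendered WMS map image.
# "workshop:districts" and the bounding box are placeholders for illustration.
from owslib.wms import WebMapService

wms = WebMapService("http://localhost:8080/geoserver/wms", version="1.1.1")
print(list(wms.contents))                      # layer names published by GeoServer

img = wms.getmap(
    layers=["workshop:districts"],
    srs="EPSG:4326",
    bbox=(68.0, 6.0, 98.0, 38.0),              # rough lon/lat extent of India
    size=(800, 600),
    format="image/png",
)
with open("districts.png", "wb") as f:
    f.write(img.read())
```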
Participants are expected to have a basic understanding of GIS concepts such as layers, projections, coordinate systems, and spatial data formats. While prior exposure to databases, HTML/JavaScript, or web mapping tools is useful, it is not mandatory. The workshop is designed to be accessible to students, researchers, and faculty from geography and allied disciplines, as well as professionals working in GIS, remote sensing, urban planning, environmental studies, or spatial data management. Government officers, planners, and decision-makers interested in adopting open-source Web GIS solutions for efficient data-driven planning will also find the workshop highly beneficial.
Participants are required to bring their personal laptops for the hands-on modules. Software such as QGIS, PostgreSQL/PostGIS, GeoServer (2.28 stable version), and Notepad++ will be required; however, pre-installation is not mandatory, as installation support and troubleshooting guidance will be provided during the workshop. Additional sample datasets and learning materials will also be supplied to ensure a smooth, practice-oriented learning experience.
The workshop will be conducted over a total duration of four hours, with each module allocated approximately 35–50 minutes, ensuring a balanced mix of conceptual understanding, demonstrations, and hands-on practice.
“Yield Estimation of Potato Crop Using UAV Imagery and Machine Learning Algorithms”
Sudipta Poudel, Aman;
Academic Track (Oral)
Agricultural yield estimation plays a pivotal role in enhancing food security, ensuring effective resource allocation, and supporting decision-making in precision agriculture. In countries like Nepal, where smallholder farming is predominant and traditional yield estimation techniques remain manual, labour-intensive, and time-consuming, there is a critical need for modern, accurate, and scalable approaches. This study introduces a data-driven framework for estimating potato yield by integrating Unmanned Aerial Vehicle (UAV) remote sensing with machine learning algorithms.
The research was conducted in a 3.7-hectare agricultural field in Dhulikhel Municipality, Kavrepalanchok District, Nepal. High-resolution imagery was acquired using both RGB and multispectral sensors mounted on UAV platforms across five key phenological stages of the potato growth cycle. To ensure spatial accuracy and reliable data, nine Ground Control Points (GCPs) were established using Differential GPS, and 80 georeferenced sample plots were used for systematic ground truthing. Vegetation indices (VIs) derived from UAV imagery, including NDVI, EVI, CIrededge, VARI, and others, were computed and statistically analysed to assess their correlation with actual yield data.
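A brief, hedged sketch of the plot-level extraction step, not the study's code: zonal statistics of a vegetation-index raster over the georeferenced sample plots using the rasterstats library, with placeholder file names.

```python
# Hedged sketch: mean vegetation-index value per sample plot, linking UAV-derived
# indices to ground-truth yield measurements. File names are placeholders.
from rasterstats import zonal_stats

stats = zonal_stats("sample_plots.gpkg", "ndvi_stage3.tif", stats=["mean", "count"])
for i, s in enumerate(stats[:5]):
    print(f"plot {i}: mean NDVI = {s['mean']:.3f} from {s['count']} pixels")
```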
A total of twenty vegetation indices were extracted, and both simple and multiple linear regression models were applied to determine their predictive capabilities. Seven advanced machine learning algorithms, namely Support Vector Machine (SVM), Random Forest, XGBoost, AdaBoost, Decision Tree, Gradient Boosting, and K-Nearest Neighbors (KNN), were trained and tested for yield prediction performance. The multi-temporal multiple linear regression model, incorporating indices from various stages of crop development, achieved the highest performance with an R² value of 82.61% and a predicted R² of 77.54%. Among the machine learning models, Gradient Boosting outperformed the others with an R² of 76.62% and an RMSE of 0.1526, indicating high prediction accuracy.
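The following is an illustrative sketch only, not the study's implementation: fitting a Gradient Boosting regressor on plot-level vegetation indices to predict yield, with an assumed CSV layout and column names.

```python
# Illustrative only: Gradient Boosting regression of yield on a few vegetation
# indices. The CSV layout and column names are assumptions for the example.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

df = pd.read_csv("plot_indices_and_yield.csv")      # hypothetical table of 80 plots
X = df[["ndvi", "evi", "ci_red_edge", "vari"]]
y = df["yield_kg_per_plot"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)
model = GradientBoostingRegressor(random_state=42).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R²:", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```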
The study also revealed that multispectral-based vegetation indices had greater predictive power than those derived from RGB imagery, underlining the advantage of using multispectral UAV sensors in precision agriculture. For example, the Chlorophyll Index Red Edge (CI1) showed significantly higher correlation with yield than the RGB-based VARI index. These results confirm the relevance of spectral data in capturing the physiological characteristics of crops that are closely related to productivity. Throughout the research, a comprehensive set of data was generated, including high-resolution orthomosaics for each growth stage, digital surface models (DSMs), digital terrain models (DTMs), vegetation index maps, plant height models, and yield sample polygons linked with ground-truth yield measurements. These datasets enabled rigorous statistical analysis, model training, and yield mapping, forming a valuable resource for future research and decision-making in precision agriculture.
The overall findings suggest that combining UAV-acquired spectral data with advanced analytical techniques provides a reliable, non-destructive, and spatially explicit method for crop yield estimation and health monitoring. This integrated approach has the potential to transform traditional agricultural practices in Nepal by enabling timely interventions, optimizing resource use, and supporting smallholder farmers with actionable insights.
“Zarr for ARD and Geodatacubes – A Use case for Analyzing Forest Fires”
LSI_IIIT_Hyderabad;
Workshop Proposals
Advancements in Earth Observation (EO) technology have led to an exponential growth in the amount of data being collected. This has opened new avenues for observing and understanding the earth’s surface features and processes. However, this unparalleled growth in EO data has posed several challenges in terms of efficient data storage and the lack of standard methods for data dissemination. EO data is typically collected from heterogeneous sensors with varied spatial, spectral, and temporal resolutions, which makes systematic integration a difficult task even today. Rather than directly leveraging the data for analysis, users are required to spend a significant amount of time pre-processing these huge datasets to bring them into a homogeneous, analysis-ready form, thereby failing to fully exploit the availability and value of EO data. Owing to these newly posed challenges, there is a growing demand from the EO community for developing Analysis-Ready Data (ARD) for various domain applications. To this end, EO ARD brings together a diverse set of application-specific, heterogeneous datasets in a uniform way through an N-dimensional Geo-Datacube, for instance by resampling all the required datasets to a uniform resolution.
By definition, a Geo-Datacube is a multi-dimensional array of scientific measurements in which each dimension represents a physical quantity such as latitude (space), time, temperature, spectral bands, or pressure levels. The Zarr storage format is found to be suitable and preferable, especially within the geospatial community, for storing N-dimensional geo-datacubes owing to its simplicity, scalability, and compatibility with cloud object storage. It also offers built-in compression and supports fast parallel I/O. This is beneficial for the scientific community, as the heavy lifting of complex pre-processing is already done by the data provider, significantly reducing big data processing efforts on the user’s end. For example, an image processing expert would be able to gain easy access to and use the relevant data without going through complex and time-consuming pre-processing tasks.
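As a minimal sketch of the Zarr workflow covered in the workshop (dimensions, variable names, and chunk sizes are invented for illustration), the snippet below builds a small xarray datacube and writes it to a chunked, compressed Zarr store.

```python
# Hedged sketch: create a tiny, invented fire-radiative-power cube (time x lat x lon)
# and persist it as Zarr. Chunk sizes here are arbitrary choices for the example.
import numpy as np
import pandas as pd
import xarray as xr

cube = xr.Dataset(
    {"fire_radiative_power": (("time", "lat", "lon"),
                              np.random.rand(90, 90, 180).astype("float32"))},
    coords={
        "time": pd.date_range("2023-01-01", periods=90),
        "lat": np.linspace(-89.0, 89.0, 90),
        "lon": np.linspace(-179.0, 179.0, 180),
    },
)

# Each chunk is stored as a separate object and compressed, which suits cloud
# object storage; chunking along time keeps short-period reads cheap.
cube.to_zarr(
    "fire_cube.zarr",
    mode="w",
    encoding={"fire_radiative_power": {"chunks": (30, 90, 180)}},
)
print(xr.open_dataset("fire_cube.zarr", engine="zarr"))
```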
Given this background, this workshop is aimed at introducing participants to the Open Geospatial Consortium (OGC) standards relating to efficient Datacube storage and dissemination. Specifically, we propose a 240-minute workshop comprising four key components:
1. Concepts of ARD and Geo-datacubes
2. Understanding the Zarr storage format for N-dimensional Datacubes through Python
3. Hands-on data dissemination through OGC API implementations such as pygeoapi, performing OLAP operations such as slicing, dicing, and roll-up
4. Using forest fire as a running use case to understand the utility of such ARD datacubes in real-world scenarios.
Forest fires are as old as the earliest human civilizations. They play a significant role in altering the social and biogeographical landscapes across the globe, based on their historical occurrences and physical characteristics. Today, with the help of geospatial data acquired on historical global fires over the past 2-3 decades, comprising details such as the location of active fires, intensity (fire radiative power), burnt area extents, and their environmental drivers, it is possible to model the fire danger associated with any region given the climatic conditions, space, and time. The proposed running use case of ‘forest fires’ as part of the workshop will help the participants understand how data assembled from multiple space-based missions and relevant weather/climate data records can be processed, manipulated, and analyzed to derive useful insights for fire mitigation and management.
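Continuing the hypothetical cube from the earlier sketch, the snippet below shows the OLAP-style slice-and-dice operations the workshop exercises: selecting a spatial window and a time range by label, then thresholding on intensity; all coordinate ranges are arbitrary examples.

```python
# Hedged sketch: OLAP-style slice (time and space window) and dice (value threshold)
# on the hypothetical Zarr cube written in the earlier example.
import xarray as xr

cube = xr.open_dataset("fire_cube.zarr", engine="zarr")
frp = cube["fire_radiative_power"]

window = frp.sel(
    time=slice("2023-03-01", "2023-03-31"),   # one month
    lat=slice(18, 26),                        # arbitrary latitude band
    lon=slice(74, 84),                        # arbitrary longitude band
)
strong = window.where(window > 0.8)           # keep only high-intensity cells

print("mean FRP in window:", float(window.mean()))
print("grid cells above threshold:", int(strong.count()))
```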
Note: As a pre-requisite, a minimal understanding of different geospatial datasets and some familiarity with the Python programming language is required. Instructions about the pre-installed libraries and tools will be provided before the workshop. By the end of the workshop, participants will be able to take any spatiotemporal data from their domain, store it in the Zarr format, and disseminate it through a standard interface such as pygeoapi.