GISPulse vs QGIS Processing: Complementary, Not Competing
QGIS is one of the best open-source GIS software packages. Its Processing framework is powerful, extensible, and free. So why would GISPulse exist? The answer is simple: QGIS Processing was not designed for headless automation, APIs, or CI/CD pipelines. GISPulse was.
What QGIS Processing Does Very Well
QGIS Processing is a geoprocessing framework built on a rich graphical interface. Its strengths:
- Built-in algorithms -- Buffer, clip, union, dissolve, spatial join... hundreds of algorithms available without writing a single line of code.
- Graphical Modeler -- Chain processing steps visually in a drag-and-drop interface.
- Python scripts -- Write custom algorithms in PyQGIS.
- Provider access -- GDAL, GRASS, SAGA, Orfeo Toolbox... all accessible from the same interface.
- Ease of access -- No server configuration. Open QGIS, run a process.
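To make the "Python scripts" point concrete, here is a minimal PyQGIS sketch using the real `processing.run` entry point with the built-in `native:buffer` algorithm. The file paths are invented, and the import guard lets the snippet fail soft when run outside a QGIS environment:

```python
# PyQGIS sketch: the same buffer the GUI dialog offers, driven from code.
# The `processing` module is provided by QGIS, not by pip, so in a plain
# Python interpreter the import fails and nothing is executed.
try:
    import processing  # available inside the QGIS Python console
except Exception:      # ImportError outside a QGIS environment
    processing = None

params = {
    "INPUT": "/data/parcelles.gpkg",        # hypothetical input path
    "DISTANCE": 100,                        # buffer distance in layer units
    "OUTPUT": "/data/parcelles_buffer.gpkg",
}

if processing is not None:
    # Returns a dict of outputs, e.g. {"OUTPUT": "/data/parcelles_buffer.gpkg"}
    result = processing.run("native:buffer", params)
else:
    result = None  # not running inside QGIS
```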
For a GIS analyst working interactively -- data exploration, visual validation, one-shot processing -- QGIS Processing is often sufficient and well-suited.
Where QGIS Processing Reaches Its Limits
The Processing framework was designed for interactivity in a desktop context. This implies several structural constraints:
No Native Headless Mode
Running a QGIS Processing algorithm without a graphical interface is possible via qgis_process (introduced in QGIS 3.16), but:
- Requires a full QGIS installation (X11 or minimal Qt)
- No daemon, no HTTP service
- Environment configuration is fragile in CI/CD
- Startup time is in the 3-10 second range per invocation
```bash
# qgis_process — works, but fragile in automation
qgis_process run qgis:buffer -- \
  INPUT=/data/parcelles.gpkg \
  DISTANCE=100 \
  OUTPUT=/data/parcelles_buffer.gpkg
```
No Native REST API
There is no standard REST API for QGIS Processing. QGIS Server can expose processing algorithms through WPS, but WPS is not a modern REST API (no JSON, no webhooks, no async job management).
No Natively Versionable Rules
Processing models are XML files (.model3) or Python scripts. They are not portable declarative configs:
- A .model3 model contains UI geometries, not just logic
- A PyQGIS script is imperative code
- Sharing a workflow with a colleague = sharing an XML file or code
No Built-in Scheduling
QGIS Processing has no scheduler. To automate a process every hour, you need a cron + a bash script that invokes qgis_process, which runs into the headless issues mentioned above.
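To make the cron side of that workaround concrete, here is a minimal sketch of a five-field cron matcher. It supports numbers, `*`, and comma lists only (no ranges or steps), which is enough to read the expressions used in this article:

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Minimal cron matcher: five fields (minute, hour, day-of-month,
    month, day-of-week), each a number, '*', or a comma list.
    Day-of-week uses 0 = Sunday, as in classic cron."""
    fields = expr.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    for field, value in zip(fields, values):
        if field == "*":
            continue
        if value not in {int(part) for part in field.split(",")}:
            return False
    return True

# "0 6 * * 1": every Monday at 06:00 (2024-01-01 was a Monday)
assert cron_matches("0 6 * * 1", datetime(2024, 1, 1, 6, 0))
assert not cron_matches("0 6 * * 1", datetime(2024, 1, 2, 6, 0))  # Tuesday
```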
What GISPulse Brings as a Complement
GISPulse was designed for the cases where QGIS Processing is not suited.
Headless by Design
GISPulse is a pure Python engine. No graphical dependencies. Install it with pip and run it on any machine, Docker container, or CI/CD runner:
```bash
pip install gispulse

# Immediate processing, no X11, no Qt
gispulse run rules.json --input parcelles.gpkg
```
Startup time: < 200 ms in DuckDB portable mode.
Declarative JSON Rules
A GISPulse pipeline is a JSON file that describes processing steps in order:
```json
[
  {
    "name": "parcelles_buffer",
    "capability": "buffer",
    "params": {
      "input": "parcelles.gpkg",
      "distance": 100,
      "unit": "meters"
    }
  },
  {
    "name": "parcelles_in_risk_zone",
    "capability": "spatial_join",
    "params": {
      "input": "parcelles_buffer",
      "ref_layer": "zones_risque.gpkg",
      "predicate": "intersects",
      "columns": ["niveau", "date_arrete"]
    }
  }
]
```
This file is:
- Versionable under Git (readable diffs)
- Shareable without dependencies
- Executable by a non-developer via CLI
- Consumable via REST API or Python SDK
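Because the format is plain JSON, the pipeline can be sanity-checked without GISPulse at all. The validator below is a stand-alone sketch: the required keys (`name`, `capability`, `params`) are inferred from the example above, not taken from an official schema:

```python
import json

# Required step keys, inferred from the example rules file above
REQUIRED_KEYS = {"name", "capability", "params"}

def validate_rules(text: str) -> list:
    """Return a list of problems; an empty list means the pipeline looks valid."""
    try:
        steps = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if not isinstance(steps, list):
        return ["top level must be a JSON array of steps"]
    errors, seen = [], set()
    for i, step in enumerate(steps):
        missing = REQUIRED_KEYS - step.keys() if isinstance(step, dict) else REQUIRED_KEYS
        if missing:
            errors.append(f"step {i}: missing {sorted(missing)}")
            continue
        if step["name"] in seen:
            errors.append(f"step {i}: duplicate name {step['name']!r}")
        seen.add(step["name"])
    return errors

rules = '[{"name": "buffer", "capability": "buffer", "params": {"distance": 100}}]'
assert validate_rules(rules) == []
```

A check like this makes a good pre-commit hook for a Git-versioned rules directory.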
Built-in REST API
GISPulse exposes a FastAPI REST API:
```bash
# Start the server
gispulse serve --port 8000

# Submit a job
curl -X POST http://localhost:8000/jobs \
  -H "Content-Type: application/json" \
  -d @rules.json

# Check the status
curl http://localhost:8000/jobs/abc123/status
```
Native Scheduling
Schedule recurring processes directly in the configuration:
```json
{
  "schedule": "0 6 * * 1",
  "pipeline": "cadastre_validation.json",
  "notify": "webhook:https://hooks.slack.com/..."
}
```
Comparison Table
| Criterion | QGIS Processing | GISPulse |
|---|---|---|
| Graphical interface | Yes (native) | No (separate Web Portal) |
| Headless mode | Partial (qgis_process) | Yes (native) |
| REST API | No (WPS only) | Yes (FastAPI) |
| Declarative rules | No (XML / Python) | Yes (JSON) |
| Git versionable | Difficult | Native |
| Scheduling | No | Yes (built-in cron) |
| Python SDK | PyQGIS (complex) | Minimal SDK included |
| Docker / CI/CD | Fragile | Native |
| GRASS/SAGA/GDAL access | Yes | Via PostGIS + extensions |
| Interactive analysis | Excellent | No (CLI/API usage) |
| Learning curve | Gentle (GUI) | Gentle (JSON) |
The QGIS Plugin: The Best of Both Worlds
GISPulse has an official QGIS plugin that bridges the two ecosystems.
This plugin allows you to:
- Execute GISPulse rules from QGIS -- without leaving your working environment
- Load results directly into the current project -- layers appear in the layers panel
- Edit JSON rules in an assisted interface -- form with autocomplete for capabilities and parameters
- Send a job to a remote GISPulse server -- targeted REST API from QGIS
```python
# From the QGIS Python console
from gispulse.qgis import GISPulseRunner

runner = GISPulseRunner(server="http://gispulse.your-org.com")
result = runner.run("rules/cadastre_validation.json")
iface.addVectorLayer(result["output_path"], "Validation", "ogr")
```
The typical workflow becomes:
- Explore your data in QGIS (graphical interface, visuals)
- Identify the process to automate
- Convert it to GISPulse JSON rules (the plugin helps)
- Deploy the rules to your GISPulse server
- Schedule execution and receive notifications
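The deploy step can be scripted with the standard library alone. The sketch below builds the same `POST /jobs` request as the curl example in the REST API section; the endpoint and payload shape are taken from that example, not from an official client:

```python
import json
import urllib.request

def build_job_request(base_url: str, rules_path: str) -> urllib.request.Request:
    """Build the request equivalent to:
    curl -X POST $BASE/jobs -H "Content-Type: application/json" -d @rules.json
    """
    with open(rules_path, "rb") as fh:
        body = fh.read()
    return urllib.request.Request(
        f"{base_url}/jobs",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def submit(base_url: str, rules_path: str) -> dict:
    """Send the job and return the server's JSON response (network call)."""
    req = build_job_request(base_url, rules_path)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```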
When to Choose What?
Choose QGIS Processing if:
- You're doing ad hoc exploratory analysis
- You need immediate visual feedback on results
- The process is one-shot and won't be repeated
- Your team is comfortable with the QGIS interface
- You need access to GRASS, SAGA, or Orfeo Toolbox
Choose GISPulse if:
- You need to automate a recurring process (daily, event-driven)
- You want to version your workflows under Git
- You're integrating a process into an ETL pipeline or application
- You need a REST API to trigger jobs
- You're deploying on a server, Docker container, or in CI/CD
- Your team prefers JSON configs over imperative code
Use both if:
- You explore in QGIS, then automate with GISPulse
- Your team has mixed profiles (GIS analysts + data engineers)
- You want a bridge between interactive analysis and production
Migrating a Processing Model to GISPulse
Have an existing QGIS Processing model you want to automate? The migration is generally straightforward.
A typical Processing model (buffer + spatial join):
```xml
<!-- model.model3 (simplified XML) -->
<model>
  <algorithm id="buffer">
    <parameter name="INPUT" value="parcelles"/>
    <parameter name="DISTANCE" value="100"/>
  </algorithm>
  <algorithm id="joinattributesbylocation">
    <parameter name="INPUT" value="buffer_output"/>
    <parameter name="JOIN" value="zones_risque"/>
  </algorithm>
</model>
```
Its GISPulse equivalent:
```json
[
  {
    "name": "buffer_parcelles",
    "capability": "buffer",
    "params": { "input": "parcelles.gpkg", "distance": 100 }
  },
  {
    "name": "join_risk",
    "capability": "spatial_join",
    "params": {
      "input": "buffer_parcelles",
      "ref_layer": "zones_risque.gpkg",
      "predicate": "intersects"
    }
  }
]
```
The logic is identical. The JSON format is more readable, more compact, and versionable.
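For simple models, the conversion can even be mechanized. The sketch below parses the simplified XML above with the standard library and emits GISPulse-style steps; the capability and parameter mappings are illustrative guesses, and real .model3 files carry far more structure (UI geometry, typed values) than this handles:

```python
import json
import xml.etree.ElementTree as ET

# Illustrative mappings only; a real converter would need the full tables
CAPABILITY_MAP = {"buffer": "buffer", "joinattributesbylocation": "spatial_join"}
PARAM_MAP = {"INPUT": "input", "DISTANCE": "distance", "JOIN": "ref_layer"}

MODEL3 = """\
<model>
  <algorithm id="buffer">
    <parameter name="INPUT" value="parcelles"/>
    <parameter name="DISTANCE" value="100"/>
  </algorithm>
  <algorithm id="joinattributesbylocation">
    <parameter name="INPUT" value="buffer_output"/>
    <parameter name="JOIN" value="zones_risque"/>
  </algorithm>
</model>
"""

def model3_to_rules(xml_text: str) -> list:
    """Translate a (simplified) .model3 document into GISPulse-style steps.
    Values stay as strings; a real converter would also coerce types."""
    steps = []
    for i, algo in enumerate(ET.fromstring(xml_text).iter("algorithm")):
        algo_id = algo.get("id")
        params = {
            PARAM_MAP.get(p.get("name"), p.get("name").lower()): p.get("value")
            for p in algo.iter("parameter")
        }
        steps.append({
            "name": f"step_{i}_{algo_id}",
            "capability": CAPABILITY_MAP.get(algo_id, algo_id),
            "params": params,
        })
    return steps

rules = model3_to_rules(MODEL3)
print(json.dumps(rules, indent=2))
```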
Conclusion
QGIS Processing and GISPulse are not competing -- they address different needs in the lifecycle of a spatial workflow.
QGIS is the tool for exploration and interactive analysis. GISPulse is the engine for automation and deployment. The QGIS plugin bridges the gap.
Combining both gives you a modern workflow: explore in QGIS, automate with GISPulse, version under Git.
Get Started with GISPulse
```bash
pip install gispulse
gispulse --help
```