auto_explain SQL tuning
Leverages Postgres auto_explain logs to deliver prioritized SQL optimization heuristics.
SQL Tuning Automation
Turn noisy PostgreSQL logs into SQL optimization insights that prioritize tuning work, surface AI guidance, and help teams document how every Postgres query gets faster.
SQL Tuning Intelligence
Inspects EXPLAIN and auto_explain output to score SQL tuning priorities with AI context.
Optimization Playbooks
Static dashboards plus ready-to-run SQL optimization sequences anyone on the team can follow.
PostgreSQL-Aware AI
Hints respect your Postgres configuration, hardware limits, and reliability guardrails.

Context System
AIQO ships with real-world context layers that capture hardware limits, database traits, workloads, and operational routines.

Context schema diagram
Folder structure
CONTEXT/
DDL.txt
CONFIG.txt
PROJECT.txt
EVENTS.txt

Context example
# CONTEXT/PROJECT.txt
- Storage is high-performance SSD in a storage bay
- Database engine is PostgreSQL 17, single node, no replication, virtualized
- RAM is 64GB
- 16 vCores are available
- pg_repack is scheduled daily to keep tables' fragmentation under 12%
- when you propose an index creation, use the CONCURRENTLY clause and prefix the name with usr_idx_

Context example
# CONTEXT/CONFIG.txt
allow_in_place_tablespaces | off | Allows tablespaces directly inside pg_tblspc, for testing.
allow_system_table_mods | off | Allows modifications of the structure of system tables.
application_name | psql | Sets the application name to be reported in statistics and logs.
archive_cleanup_command | | Sets the shell command that will be executed at every restart point.
archive_command | (disabled) | Sets the shell command that will be called to archive a WAL file.
archive_mode | off | Allows archiving of WAL files using archive_command.
archive_timeout | 0 | Forces a switch to the next WAL file if a new file has not been started within N seconds.
array_nulls | on | Enable input of NULL elements in arrays.
authentication_timeout | 1min | Sets the maximum allowed time to complete client authentication.
...

Report Samples
Query Metrics Sample
Query metrics are shown on top with contextual help.
| Metric | Value |
|---|---|
| Shared Hit | 48 788 114 (381 157,14 MB) |
| Shared Read | 7 099 870 (55 467,73 MB) |
| Shared Dirtied | 4 976 424 (38 878,31 MB) |
| Shared Written | 2 641 785 (20 638,95 MB) |
| Records | 12 972 627 |
| FPI | 4 578 753 |
| Bytes | 34 301 775 104 (32 712,72 MB) |
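The MB figures above follow directly from the block counters: PostgreSQL buffers are 8 kB pages by default, so each counter converts as count × 8192 bytes. A minimal sketch of the conversion (assuming the default `block_size`; the report itself prints locale-formatted numbers):

```python
BLOCK_SIZE = 8192  # bytes per buffer page; verify with SHOW block_size

def pages_to_mb(pages: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert a buffer-page counter into megabytes."""
    return round(pages * block_size / 1024 ** 2, 2)

# Reproduces the "Shared Hit" row above: 48 788 114 pages -> 381 157,14 MB
print(pages_to_mb(48_788_114))  # 381157.14
```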
AI Insights Sample
Real AI insights generated by AIQO for a heavy PostgreSQL query with links to PostgreSQL params documentation.
Description:
This is an UPDATE on data_vault.tbl_alpha using a complex subquery that aggregates and deduplicates data from several large tables (tbl_alpha, tbl_beta, tbl_gamma, tbl_delta), with multiple joins and a heavy use of sorting, aggregation, and memoization. The plan shows very high buffer usage, significant disk I/O, and large temporary files, indicating a resource-intensive operation on a massive dataset.
Initial summary
- HashAggregate node is a major cost and memory driver, processing nearly 20M rows and spilling to disk (632GB temp usage).
- Sort node processes over 41M rows, using external merge sort with 2.2TB of temp files, indicating insufficient memory for in-memory sort.
- Seq Scan on tbl_alpha reads 8.6M pages, showing a full table scan on a very large table.
- Index Only Scan on tbl_beta is efficient but still processes 16M rows, with 372k heap fetches.
- Index Scan on tbl_gamma (via Memoize) is executed 17M times, with 11M buffer dirties and 432k writes, showing heavy random access.

Detailed diagnosis
associe_interlo_coord_red and the need to deduplicate a huge intermediate result set.
work_mem is insufficient for the sort workload, causing heavy disk I/O.
Optimization paths
1. Indices
Composite index on tbl_alpha for join/filter columns:
The plan performs a sequential scan on tbl_alpha followed by a hash join on col_beta_id and a nested loop join on col_delta_id and col_gamma_id. There is no composite index covering these columns.
CREATE INDEX CONCURRENTLY idx_tbl_alpha_beta_gamma_delta ON data_vault.tbl_alpha (col_beta_id, col_gamma_id, col_delta_id);

2. Configuration parameters
Increase work_mem for this session/query:
Sort and HashAggregate nodes spill to disk, with temp files exceeding 2TB. While work_mem is already set to 1GB, this query's sort/aggregate operations are exceptionally large.
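One way to apply this for a single statement, rather than cluster-wide, is a transaction-local override (the 4GB value is illustrative; size it to the observed spill and the available RAM):

```sql
BEGIN;
SET LOCAL work_mem = '4GB';  -- reverts automatically at COMMIT/ROLLBACK
-- run the heavy UPDATE here
COMMIT;
```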
3. Data architecture
Adjust fillfactor for tbl_alpha:
Verify that fillfactor is at 70 or lower to reduce page splits and improve update performance. This is already a best practice for large, update-heavy tables.
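Applied to the table above, the storage-parameter change might look like the following (illustrative; the new fillfactor only affects pages written afterwards, so a rewrite such as the scheduled pg_repack run is needed for existing data):

```sql
ALTER TABLE data_vault.tbl_alpha SET (fillfactor = 70);
```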
Trends Graph Snapshot
Graph view extracted from the generated report to highlight node cost.

Pev2 Viewer Snapshot
The Pev2 UI embedded into the HTML bundle for interactive plan inspection.

Features
Every module is built to accelerate SQL tuning and SQL optimization across Postgres workloads—linking plan data, AI insights, and checklists your team can reuse release after release.
Leverages Postgres auto_explain logs to deliver prioritized SQL optimization heuristics.
Responsive layout deployable anywhere, bundling SQL tuning results in a single HTML + Vanilla JS file.
Context stacks capture workload, drivers, schema, and operational constraints for precise recommendations.
Filter by sequential scans, index usage, AI budget, or query name to focus on the riskiest SQL.
Queries are normalized and assigned a stable code to track optimization work over time.
Reports and prompts ship in EN/IT/ES with automatic fallback for distributed teams.
Deterministic prompts for models that support reproducibility flags.
Ships the open source Pev2 plan viewer for instant EXPLAIN insights.
Prebuilt binaries for macOS, Linux, and Windows with identical CLI flags.
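The stable query codes mentioned in the feature list can be sketched as a normalize-then-hash step; the rules below are illustrative, not AIQO's actual fingerprinting algorithm:

```python
import hashlib
import re

def normalize_sql(sql: str) -> str:
    """Illustrative normalization: lowercase, replace literals with
    placeholders, and collapse whitespace so equivalent queries align."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)            # string literals
    s = re.sub(r"\b\d+(?:\.\d+)?\b", "?", s)  # numeric literals
    s = re.sub(r"\s+", " ", s)                # collapse whitespace
    return s

def query_code(sql: str) -> str:
    """Stable 10-character code for tracking a query across reports."""
    return hashlib.sha1(normalize_sql(sql).encode()).hexdigest()[:10]

print(query_code("SELECT * FROM t WHERE id = 42"))
print(query_code("select *   from t  where id = 7"))  # same code
```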
SQL tuning
Follow a clear path from plan capture to deploy-ready SQL optimization work. AIQO keeps SQL tuning front and center while still respecting the operational details that make every Postgres environment unique.
Highlight plan instability, stale statistics, and parameter-sensitive SQL that drives production latency.
Map AI hints to concrete actions such as index creation, query rewrites, or configuration tuning across Postgres fleets.
Export checklists, compare runs, and align platform, data, and AI teams on a single SQL optimization narrative.
Keep the essentials in one glance whenever you run a SQL optimization session.
How it works
Designed for data, platform, and performance teams.
Enable auto_explain with thresholds, buffers, and verbosity aligned to the workload you need to optimize.
Bundle enriched Postgres logs and EXPLAIN plans to build a reliable SQL optimization dataset.
Parse plans, surface tuning heuristics, and export a repeatable SQL optimization playbook.
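The capture-then-parse steps above can be sketched as follows. Real auto_explain entries vary with log_line_prefix and log_format; this illustrative parser assumes the default text-format "duration: ... ms  plan:" marker:

```python
import re

ENTRY_RE = re.compile(
    r"duration: (?P<ms>\d+\.\d+) ms\s+plan:\s*\n\s*Query Text: (?P<query>.*)"
)

def extract_entries(log_text: str):
    """Return (duration_ms, query_text) tuples, slowest first."""
    entries = [(float(m["ms"]), m["query"].strip())
               for m in ENTRY_RE.finditer(log_text)]
    return sorted(entries, reverse=True)

sample = """\
LOG:  duration: 912345.678 ms  plan:
\tQuery Text: UPDATE data_vault.tbl_alpha SET ...
LOG:  duration: 1044.512 ms  plan:
\tQuery Text: SELECT count(*) FROM tbl_beta
"""
print(extract_entries(sample)[0][0])  # slowest entry first
```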
Installation
poetry install
python src/aiqo_pg_ai_report/pg_autoexplain_analyzer.py pg_ctl.log

# export OPENAI_API_KEY=sk-test-3Jf9LwQpNvA2x7TzB1mC8rKdH4eP0qS9nVtY3uR5yWzX

# postgresql.conf
shared_preload_libraries = 'auto_explain'
auto_explain.log_timing = off   # enable timing only while profiling
auto_explain.log_analyze = on
auto_explain.log_format = text
auto_explain.log_min_duration = '900s'

Heads-up: log_timing impacts latency. Use it in test environments or with a lower sampling rate.
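If timing is needed closer to production, auto_explain can sample a fraction of eligible queries instead of instrumenting all of them (sample_rate is a standard auto_explain setting; 0.1 is an illustrative value):

```
auto_explain.log_timing = on
auto_explain.sample_rate = 0.1   # explain roughly 10% of qualifying queries
```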
CLI usage
aiqo-report ./logs
aiqo-report ./logs/ --limit-ai-calls 10 --filter update

| Argument | Description | Default |
|---|---|---|
| log_file_path (positional argument) | The full path to the PostgreSQL auto_explain log file to be analyzed. | |
| --model | Specify the AI model to use for analysis (e.g., gpt-4o, gemini-1.5-pro, o1-mini). | gemini-2.5-flash |
| --skip-ai-analysis | If set, the AI analysis step will be skipped entirely. A report will still be generated, but without AI-driven insights. | off |
| --limit-ai-calls | Maximum number of AI hint requests issued per run. | off |
| --filter | Filter log entries. Only entries containing the specified string in the query name, job name, SQL text, or query code are processed for AI analysis. | |
FAQ
Learn how AIQO approaches SQL optimization for PostgreSQL, how the AI layer behaves, and how you can plug the workflow into your delivery process.
AIQO correlates Postgres auto_explain logs, workload context, and AI reasoning to produce ranked SQL optimization actions that can be executed directly by engineering teams.
Yes. It focuses on PostgreSQL-compatible engines and Postgres forks, adapting hints based on extensions, version metadata, and configuration limits you provide.
You can cap AI calls, review structured prompts, and combine deterministic settings with your internal approval workflow before applying any optimization.
The CLI ingests log bundles from CI pipelines or observability platforms, producing HTML, JSON, or Markdown artifacts you can trace alongside deployments.
For developers
Fully dockerized Angular + Java solution supporting remote log collection for multiple database vendors.
Contact
Write to ivan.rododendro@gmail.com for partnerships, support, or feedback.
Open a ticket on GitHub Issues for bugs or feature requests.