Your Custom Module Works on Your Machine. Production Tells a Different Story.
We audit Odoo codebases every week. The pattern is always the same: a custom module runs fine during development with 50 demo records, but in production with 200,000 sale orders and 40 concurrent users, it grinds to a halt. The developer's first instinct is to add print() statements, restart the server, and stare at the terminal. That approach worked in 2015. Odoo 19 is a different beast — multi-worker processes, the OWL 3 frontend, batched ORM prefetching, and PostgreSQL query plans that behave differently at scale.
Proper debugging in Odoo 19 requires three layers of tooling: interactive debuggers (pdb/ipdb) for stepping through business logic, structured logging for tracing issues across workers, and profilers (cProfile, py-spy, EXPLAIN ANALYZE) for finding the actual bottleneck when "it's slow" is the only symptom.
This guide covers each layer with production-tested techniques — from setting a breakpoint in an ORM compute method to profiling a 30-second report generation down to the exact SQL query responsible. Every code snippet comes from real debugging sessions on client deployments.
Interactive Debugging with PDB, IPDB, and Pudb in Odoo 19
Python's built-in pdb debugger works with Odoo, but there are critical constraints. Odoo 19 runs in multi-worker mode by default in production (`workers > 0`), which means each request may be handled by a different forked process. An interactive debugger needs the process's stdin/stdout, so it only works when Odoo runs as a single foreground process. For debugging, you must run Odoo in single-worker mode — this is dev-only, never production.
```python
# Method 1: built-in breakpoint() — Python 3.7+ (preferred).
# Works with any debugger set via the PYTHONBREAKPOINT env var.
from odoo import models


class SaleOrder(models.Model):
    _inherit = 'sale.order'

    def action_confirm(self):
        breakpoint()  # Drops into pdb by default
        return super().action_confirm()
```

```python
# Method 2: ipdb — a better REPL with tab completion and colors.
# pip install ipdb
import ipdb

from odoo import api, models


class SaleOrder(models.Model):
    _inherit = 'sale.order'

    @api.depends('order_line.price_subtotal')
    def _compute_amount_total(self):
        for order in self:
            ipdb.set_trace()  # Interactive shell opens here
            order.amount_total = sum(
                line.price_subtotal for line in order.order_line
            )
```

```python
# Method 3: pudb — full-screen TUI debugger.
# pip install pudb
import pudb

from odoo import models


class StockPicking(models.Model):
    _inherit = 'stock.picking'

    def button_validate(self):
        pudb.set_trace()  # Full curses-based UI
        return super().button_validate()
```

To use any of these, run Odoo in single-worker mode from your terminal:
```bash
# Single-worker mode (required for pdb/ipdb/pudb)
./odoo-bin -c odoo.conf --workers=0 --dev=reload,qweb,xml

# With ipdb as the default breakpoint() handler
PYTHONBREAKPOINT=ipdb.set_trace \
    ./odoo-bin -c odoo.conf --workers=0

# With pudb as the default breakpoint() handler
PYTHONBREAKPOINT=pudb.set_trace \
    ./odoo-bin -c odoo.conf --workers=0

# Debugging a specific test directly
./odoo-bin -c odoo.conf --workers=0 \
    -d test_db --test-enable \
    -u my_module --test-tags /my_module \
    --stop-after-init
```

Once inside the debugger: `n` (next line), `s` (step into), `c` (continue), `p variable` (print), `pp self.env['sale.order'].search([])` (pretty-print a recordset), `self.env.cr.execute("SELECT ...")` (run raw SQL). The ORM is fully available inside the debugger — you can call `self.env.ref('base.main_company').name` to inspect any record interactively.
Configuring Structured Logging in Odoo 19: Log Levels, _logger Patterns, and SQL Tracing
Interactive debugging stops being useful when the bug only reproduces under load, happens intermittently, or occurs in a cron job at 3 AM. For these cases, you need structured logging. Odoo 19 uses Python's standard logging module, but configures it through odoo.conf with its own hierarchy.
```ini
[options]
; ── Global log level ──────────────────────────
; Options: debug, debug_sql, debug_rpc, debug_rpc_answer,
;          info, warn, error, critical
log_level = warn

; ── Per-module log levels (comma-separated) ───
; Format: module:LEVEL
log_handler = :WARNING,odoo.addons.my_module:DEBUG,odoo.sql_db:WARNING,werkzeug:WARNING

; ── SQL query logging ─────────────────────────
; Set log_level = debug_sql to log ALL queries (massive output)
; Better: use log_handler to target specific modules
; log_handler = odoo.sql_db:DEBUG

; ── Log output ────────────────────────────────
logfile = /var/log/odoo/odoo-server.log
log_db = False
log_db_level = warning
syslog = False

; ── Log rotation ──────────────────────────────
logrotate = True
```

In your custom module code, always use the `_logger` pattern — never `print()`. The `_logger` instance respects the log level configuration, includes the module name in its output, and works correctly across multi-worker processes:
```python
import logging
import time

from odoo import api, models

_logger = logging.getLogger(__name__)


class SaleOrder(models.Model):
    _inherit = 'sale.order'

    def action_confirm(self):
        _logger.info(
            "Confirming order %s (partner=%s, lines=%d, total=%s)",
            self.name,
            self.partner_id.display_name,
            len(self.order_line),
            self.amount_total,
        )
        start = time.perf_counter()
        result = super().action_confirm()
        elapsed = time.perf_counter() - start
        if elapsed > 2.0:
            _logger.warning(
                "Slow confirmation: order %s took %.2fs "
                "(lines=%d, total=%s)",
                self.name, elapsed,
                len(self.order_line), self.amount_total,
            )
        return result

    @api.model
    def _cron_process_pending_orders(self):
        orders = self.search([('state', '=', 'draft')])
        _logger.info("Cron: processing %d pending orders", len(orders))
        for order in orders:
            try:
                order.action_confirm()
            except Exception:
                # _logger.exception() automatically includes the full
                # traceback — never use traceback.print_exc()
                _logger.exception(
                    "Failed to confirm order %s", order.name
                )
```

Write `_logger.info("Order %s total: %s", order.name, order.amount_total)`, not `_logger.info(f"Order {order.name} total: {order.amount_total}")`. The `%s` form is lazily evaluated — if the log level is above INFO, the string formatting never executes. With f-strings, the string is always constructed, even when the log message is discarded. On a high-traffic instance processing 1,000 orders/minute, this saves measurable CPU.
SQL Profiling and Slow Query Analysis with EXPLAIN ANALYZE in Odoo 19
The majority of Odoo performance problems are database problems. A computed field that runs a search_count() inside a loop, a missing index on a custom field used in a domain filter, or an ORM read() that triggers N+1 queries — these all manifest as "Odoo is slow" but the root cause is always the SQL layer.
Step 1: Enable PostgreSQL slow query logging to find which queries are actually slow:
```sql
-- In postgresql.conf: log queries slower than 200 ms
--   shared_preload_libraries = 'pg_stat_statements'
--   log_min_duration_statement = 200

-- Find the top 10 slowest queries hitting the Odoo database
-- (pg_stat_statements stores a dbid, so join pg_database for the name)
SELECT
    round(s.total_exec_time::numeric, 2) AS total_ms,
    s.calls,
    round(s.mean_exec_time::numeric, 2) AS avg_ms,
    round(s.max_exec_time::numeric, 2) AS max_ms,
    left(s.query, 120) AS query_preview
FROM pg_stat_statements s
JOIN pg_database d ON d.oid = s.dbid
WHERE d.datname = 'production'
ORDER BY s.mean_exec_time DESC
LIMIT 10;

-- EXPLAIN ANALYZE a suspicious query. Include BUFFERS for I/O insight,
-- and remember that ANALYZE actually executes the query — be careful
-- with writes on production.
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT so.id, so.name, so.amount_total,
       rp.name AS partner_name
FROM sale_order so
JOIN res_partner rp ON rp.id = so.partner_id
WHERE so.state = 'sale'
  AND so.date_order >= '2025-01-01'
ORDER BY so.date_order DESC
LIMIT 100;

-- Check for tables that get scanned sequentially despite their size.
-- Frequent Seq Scans on a large table usually mean a missing index.
SELECT
    schemaname, relname AS table_name,
    seq_scan, seq_tup_read,
    idx_scan, idx_tup_fetch,
    n_live_tup AS row_count
FROM pg_stat_user_tables
WHERE seq_scan > 100 AND n_live_tup > 10000
ORDER BY seq_tup_read DESC
LIMIT 20;
```

Step 2: Use Odoo's built-in ORM profiling to trace which Python code generates the slow queries. Odoo 19 ships with a profiler accessible from the developer (debug) menu, but for custom modules, manual instrumentation gives better control. Whichever tool surfaced the query, the EXPLAIN output tells you what to fix:
| EXPLAIN Output | What It Means | Fix |
|---|---|---|
| `Seq Scan on sale_order` | Full table scan — no index matches the WHERE clause | Add an index on the filtered column(s) |
| `Nested Loop (actual rows=500000)` | Cartesian explosion from a bad JOIN or missing condition | Review JOIN conditions, add the missing WHERE clause |
| `Sort Method: external merge` | Sort spilled to disk — work_mem too low or result set too large | Increase work_mem or add a covering index with the sort column |
| `Buffers: shared read=45000` | 45,000 pages read from disk (not cache) — cold cache or table too large for memory | Increase shared_buffers, or partition the table |
Odoo 19's ORM batches field reads into prefetch groups of up to 1,000 records. But if you break the prefetch batch inside a for loop — for example by re-browsing individual IDs or building new recordsets mid-loop — each relational field access triggers a separate SQL query. Inspect `record._prefetch_ids` to see the current prefetch set, and call `mapped('field_name')` (or `read()` with an explicit field list) before the loop to force a single batched read.
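The shape of the problem is easy to reproduce outside Odoo. This self-contained sketch uses sqlite3 with hypothetical `partner`/`sale_order` tables (not Odoo's real schema) to count the queries issued by a per-record loop versus a single batched `IN` query, which is roughly what `mapped()` gives you:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE partner (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE sale_order (id INTEGER PRIMARY KEY, partner_id INTEGER);
    INSERT INTO partner VALUES (1, 'A'), (2, 'B'), (3, 'C');
    INSERT INTO sale_order VALUES (10, 1), (11, 2), (12, 3), (13, 1);
""")

# Count every statement the connection executes from here on.
queries = []
conn.set_trace_callback(queries.append)

orders = conn.execute("SELECT id, partner_id FROM sale_order").fetchall()

# N+1 pattern: one extra SELECT per order, like reading a relational
# field per-record after the prefetch batch was broken.
for _oid, pid in orders:
    conn.execute("SELECT name FROM partner WHERE id = ?", (pid,)).fetchone()
n_plus_1 = len(queries)  # 1 initial SELECT + 4 per-row SELECTs

# Batched pattern: one IN query up front, like mapped('partner_id.name').
queries.clear()
ids = sorted({pid for _oid, pid in orders})
placeholders = ",".join("?" * len(ids))
conn.execute(
    f"SELECT id, name FROM partner WHERE id IN ({placeholders})", ids
).fetchall()
batched = len(queries)

print(n_plus_1, batched)  # → 5 1
```

The counts scale linearly with the recordset in the first pattern and stay constant in the second — exactly the difference a `odoo.sql_db:DEBUG` log reveals on a real request.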
CPU and Memory Profiling: cProfile, py-spy, and tracemalloc for Odoo 19
When the problem isn't SQL but Python execution time — a complex _compute method, a heavy onchange, or a report that processes 10,000 lines — you need a CPU profiler. And when your Odoo workers slowly consume more memory until the OOM killer terminates them, you need a memory profiler.
CPU Profiling with cProfile and py-spy
cProfile is built into Python and gives function-level timing. It adds noticeable overhead (roughly 10%, more on ORM-heavy code), so it's safe for staging but not production. py-spy is a sampling profiler that attaches to a running process with negligible overhead — safe for production.
```bash
# ── cProfile: profile an entire Odoo module update ──
python -m cProfile -o /tmp/odoo_profile.prof \
    ./odoo-bin -c odoo.conf -d production \
    -u my_module --stop-after-init

# Analyze the profile output
python -c "
import pstats
p = pstats.Stats('/tmp/odoo_profile.prof')
p.sort_stats('cumulative')
p.print_stats(30)  # Top 30 functions by cumulative time
"

# ── py-spy: attach to a running Odoo worker (production-safe) ──
# Find the Odoo worker PID
ps aux | grep odoo-bin | grep -v grep

# Record a flame graph (30 seconds of sampling)
sudo py-spy record -o /tmp/odoo_flame.svg \
    --pid 12345 --duration 30

# Live top-like view of function calls
sudo py-spy top --pid 12345
```

For memory profiling with tracemalloc, add this to your module temporarily (dev/staging only):

```python
import logging
import tracemalloc

_logger = logging.getLogger(__name__)

# In your module's __init__.py, for temporary debugging only:
tracemalloc.start(25)  # keep 25 frames of traceback per allocation

# Then, inside the method you suspect:
snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics('lineno')
for stat in top_stats[:20]:
    _logger.warning("Memory: %s", stat)
```

Interpreting Flame Graphs
The flame graph from py-spy shows you where Odoo spends its time. The x-axis is the percentage of CPU time, not chronological time. Wide bars are expensive functions. Look for:
- Wide bars in `odoo/models.py` — ORM overhead. Usually means too many `search()` or `read()` calls. Batch your operations.
- Wide bars in `psycopg2` — time waiting for PostgreSQL. Profile the SQL layer (see the SQL profiling section above) to find the slow queries.
- Wide bars in your custom module — your code is the bottleneck. Look for loops that call ORM methods per-record instead of per-recordset.
- Wide bars in `werkzeug` or `json` — serialization overhead. Usually means the response payload is too large (e.g., returning 10,000 records to the web client).
Odoo workers are recycled when they exceed `limit_memory_soft` — the worker finishes its current request and restarts; crossing `limit_memory_hard` aborts the request immediately. If a worker leaks 50MB per request, it gets recycled constantly — and each restart costs several seconds of lost capacity for that worker. Take tracemalloc snapshots before and after a suspicious operation, then compare with `snapshot2.compare_to(snapshot1, 'lineno')` to find the exact line allocating unreleased memory. Common culprit: caching large recordsets in class-level attributes instead of per-request context.
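Here is a minimal, Odoo-free sketch of that snapshot-diff workflow. The leaky module-level list stands in for a class-level recordset cache:

```python
import tracemalloc

tracemalloc.start(25)  # keep 25 frames of traceback per allocation

snapshot1 = tracemalloc.take_snapshot()

# Simulate a leak: a module-level cache that only ever grows, like
# caching recordsets in a class attribute across requests.
_leaky_cache = []
for i in range(1000):
    _leaky_cache.append(str(i) * 1000)  # distinct ~1-3 KB strings

snapshot2 = tracemalloc.take_snapshot()

# compare_to pinpoints the exact lines whose allocations grew
# between the two snapshots.
top_stats = snapshot2.compare_to(snapshot1, "lineno")
for stat in top_stats[:5]:
    print(stat)

grown = sum(stat.size_diff for stat in top_stats)
assert grown > 500_000  # the cache retains well over 0.5 MB
tracemalloc.stop()
```

In a real session you would take `snapshot1` before the suspicious operation (a report run, a cron batch), `snapshot2` after, and read the top `size_diff` entries to find the retaining line.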
Debugging the OWL 3 Frontend: Chrome DevTools, Component Inspector, and RPC Tracing
Odoo 19's web client is built entirely on OWL 3 (Odoo Web Library), a reactive component framework similar to Vue. When a form view is slow, a list view shows stale data, or a custom OWL component doesn't re-render, the backend profiling tools won't help — you need browser-side debugging.
Enable Odoo debug mode by appending ?debug=assets to the URL. This loads unminified JavaScript and CSS, making DevTools stack traces readable. Then use these techniques:
- Chrome DevTools Performance tab — record a session while performing the slow action. Look for long tasks (>50ms) in the flame chart. OWL re-renders appear as `Component.render()` calls — if a single render takes >16ms, it causes visible jank.
- Network tab > XHR filter — every Odoo action triggers JSON-RPC calls to `/web/dataset/call_kw`. Sort by duration to find slow backend calls. Click a request to see the exact model, method, and arguments — this tells you which Python method to profile on the backend.
- OWL DevTools extension — Odoo's OWL framework has a browser extension (available in debug mode) that shows the component tree, props, state, and re-render triggers. Use it to find components that re-render unnecessarily — a common cause of UI sluggishness on large form views.
- Console debugging — in the browser console, `odoo.__DEBUG__.services["web.env"]` gives you access to the service registry. Use `odoo.__DEBUG__.services["action"].doAction({...})` to trigger actions programmatically for testing.
| Symptom | Tool | What to Look For |
|---|---|---|
| Form view takes 5s to load | Network tab | A single call_kw with high response time — profile that method server-side |
| List view shows stale data | OWL DevTools | Component state not updating — check if onWillUpdateProps is implemented |
| Typing in a field is laggy | Performance tab | Excessive re-renders triggered by onchange — debounce or optimize the onchange method |
| Custom OWL component not rendering | Console + OWL DevTools | Check for JS errors in console, verify the component is registered in the correct registry |
Tracing JSON-RPC Calls Between Frontend and Backend
Every button click, field change, and view switch in Odoo's web client generates one or more JSON-RPC calls. When a user reports "the Confirm button is slow," the first step is identifying which RPC call is the bottleneck. Open Chrome DevTools Network tab, filter by call_kw, reproduce the action, and look at the timing waterfall.
Each RPC request payload contains: the model (e.g., sale.order), the method (e.g., action_confirm), the args (record IDs), and the kwargs (context). This tells you exactly which Python method to profile with py-spy or cProfile. Copy the model and method name, grep your codebase for overrides, and you've narrowed a "slow button" report to a specific function in under 60 seconds.
For intermittent frontend issues, use the Console tab's Preserve log checkbox to retain logs across page navigations. OWL components log lifecycle warnings when a setup() hook throws, when a reactive property is mutated outside the component scope, or when a template references an undefined variable. These warnings are silent in production but visible in debug mode — they often point to the exact line causing stale renders or broken state.
Odoo 19 batches multiple RPC calls into a single HTTP request when they occur within the same microtask. If you see a single /web/dataset/call_kw request with a response time of 3 seconds, it may contain multiple method calls. Inspect the request payload — if it's an array of calls, each one needs to be profiled independently. A common pattern: a form view's onchange triggers three separate ORM calls that are batched into one RPC, and only one of them is slow.
The 5 Most Common Odoo 19 Performance Bottlenecks (and How to Find Them)
After profiling dozens of Odoo 19 deployments, these are the patterns we see repeatedly. Each one has a specific debugging approach:
| Bottleneck | Symptoms | Detection Tool | Typical Fix |
|---|---|---|---|
| N+1 ORM queries | List views slow with many records, linear time growth | log_handler = odoo.sql_db:DEBUG — count queries per request | Use mapped() or read() with field list before the loop |
| Unindexed domain filters | Slow search() on tables with >100k rows | EXPLAIN ANALYZE shows Seq Scan | Add index=True to the field definition or a manual B-tree index |
| Compute fields triggering in batch | Saving a single record takes 5+ seconds | py-spy flame graph shows wide bar in _compute_* | Add proper @api.depends to limit recomputation scope |
| Large recordset serialization | High memory usage, slow API responses > 5MB | Network tab shows large response payload | Paginate results, use read() with specific fields instead of full records |
| Lock contention on stock moves | Warehouse operations timeout, deadlock detected in logs | SELECT * FROM pg_stat_activity WHERE wait_event_type = 'Lock' | Reduce transaction scope, process moves in smaller batches |
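For the N+1 row, counting queries can be automated. The sketch below attaches a counting `logging.Handler` to a stand-in logger; in a real deployment you would attach it to `logging.getLogger('odoo.sql_db')` with that logger at DEBUG, then compare counts before and after a fix (the logger name `demo.sql_db` and the loop are illustrative):

```python
import logging


class QueryCounter(logging.Handler):
    """Counts DEBUG records on a logger. Attached to 'odoo.sql_db'
    in a real deployment, each logged SQL query bumps the count."""

    def __init__(self):
        super().__init__(level=logging.DEBUG)
        self.count = 0

    def emit(self, record):
        self.count += 1


# Demo on a stand-in logger; in Odoo use logging.getLogger('odoo.sql_db').
sql_log = logging.getLogger("demo.sql_db")
sql_log.setLevel(logging.DEBUG)
sql_log.propagate = False
counter = QueryCounter()
sql_log.addHandler(counter)

# Each relational field access in a broken loop logs one query line.
for i in range(7):
    sql_log.debug("query: SELECT ... WHERE id = %s", i)

print(counter.count)  # → 7
```

If the count grows linearly with the number of records in the view, you have an N+1 pattern; after switching to `mapped()` the count should stay flat.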
The debugging workflow for any performance issue follows the same pattern: (1) reproduce the issue while recording (py-spy for the backend, Chrome Performance for the frontend), (2) identify the slow function from the flame graph, (3) check whether it's Python-bound (optimize the code) or SQL-bound (run EXPLAIN ANALYZE), (4) fix and verify with before/after timing. Document the baseline and the improvement — this gives you a performance reference for future CI regression checks.
For production monitoring, set up a simple threshold alert in your logging configuration. If any request takes longer than 5 seconds, log it as a warning with the full request context. This creates a continuous stream of performance data that helps you catch regressions before users report them:
```python
import logging
import time

from odoo import models
from odoo.http import request

_logger = logging.getLogger(__name__)


class IrHttp(models.AbstractModel):
    """Override ir.http to log slow requests.

    Add this to your custom module's models/ir_http.py.
    """
    _inherit = 'ir.http'

    @classmethod
    def _dispatch(cls, endpoint):
        start = time.perf_counter()
        result = super()._dispatch(endpoint)
        elapsed = time.perf_counter() - start
        if elapsed > 5.0:
            _logger.warning(
                "SLOW REQUEST: %.2fs | %s %s | uid=%s",
                elapsed,
                request.httprequest.method,
                request.httprequest.path,
                request.uid,
            )
        return result
```

4 Debugging Mistakes That Waste Hours and Mask the Real Problem
Using print() Instead of _logger in Multi-Worker Mode
In multi-worker mode (workers > 0), each Odoo worker is a separate forked process with its own stdout. A print() statement in your model code writes to the stdout of whichever worker happens to handle that request — and if you're watching the main process log, you'll never see it. Worse, print() output disappears entirely when Odoo runs under systemd with StandardOutput=journal unless you explicitly configure journal forwarding. We've seen developers spend hours "debugging" code that was working correctly — they just couldn't see the output.
Always use `_logger`. It writes to the configured log file (visible across all workers) and includes timestamps, log levels, and the module name. For quick debugging, `_logger.warning()` is the drop-in replacement for `print()` — WARNING sits above the default log level, so it shows without touching the configuration.
Profiling with debug_sql on Production and Crashing the Server
Setting log_level = debug_sql logs every single SQL query Odoo executes. On a production instance handling 100 requests/second, that's 5,000-10,000 log lines per second. The log file grows by gigabytes per hour, fills the disk, and the server stops responding. We've been called in to recover servers where someone set debug_sql to "quickly check something" and forgot to revert it.
Never use log_level = debug_sql globally. Instead, use log_handler = odoo.addons.my_module:DEBUG to target specific modules. For SQL analysis, use pg_stat_statements on the PostgreSQL side — it aggregates query statistics without logging every execution.
Debugging Performance with Demo Data Instead of Production Volume
PostgreSQL's query planner makes different decisions based on table size. A query that uses an index scan on a table with 100 rows may switch to a sequential scan on a table with 1,000,000 rows — because at that volume, a full scan is actually faster than 1,000,000 random index lookups. Developers profile on their dev machine with 50 sale orders, find no issues, and declare the module "optimized." In production with 500,000 sale orders, the same query takes 30 seconds because the query plan is completely different.
Always profile against a staging database with production-volume data. At minimum, generate realistic test data: self.env['sale.order'].create([vals] * 100000) in a test method. Compare EXPLAIN ANALYZE output between dev and staging to catch query plan divergences.
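When generating that volume, create records in chunks rather than one giant list, so each INSERT batch stays a manageable size. A minimal chunking helper (the field values and the batch size of 5,000 are illustrative assumptions, not Odoo requirements):

```python
def chunked(values, size):
    """Yield successive slices of `values` with at most `size` items."""
    for i in range(0, len(values), size):
        yield values[i:i + size]


# Illustrative vals dict — substitute the required fields of your model.
vals = {"partner_id": 1, "state": "draft"}
records = [dict(vals) for _ in range(100_000)]

batches = list(chunked(records, 5000))
print(len(batches))  # → 20

# Sketch of Odoo usage (inside a test or odoo-bin shell session):
#     for batch in chunked(records, 5000):
#         env['sale.order'].create(batch)
```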
Leaving breakpoint() Calls in Committed Code
A breakpoint() or import ipdb; ipdb.set_trace() left in production code will freeze the Odoo worker that hits it. The worker stops processing requests and waits for debugger input that never comes. With 4 workers, one stray breakpoint means 25% of your capacity is gone. If the breakpoint is in a cron method, the cron worker freezes permanently and no scheduled actions run.
Add a pre-commit hook that rejects files containing breakpoint(), pdb.set_trace(), ipdb.set_trace(), or pudb.set_trace(). In CI, run grep -rn "set_trace\|breakpoint()" --include="*.py" as a lint step. This is a one-line addition to your CI pipeline that prevents a category of production outages.
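If you prefer a Python check over raw grep (for example, to skip commented-out lines), a small scanner along these lines can run as a CI step. The regex mirrors the grep above; skipping `#`-prefixed lines is a design choice of this sketch:

```python
import re
import sys
from pathlib import Path

# Matches leftover debugger calls: breakpoint(), pdb/ipdb/pudb set_trace.
PATTERN = re.compile(r"\bbreakpoint\(\)|\b(?:pdb|ipdb|pudb)\.set_trace\(")


def find_leftover_breakpoints(root):
    """Return (path, line_no, line) for every offending line under root."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for no, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if line.lstrip().startswith("#"):
                continue  # ignore commented-out debugger calls
            if PATTERN.search(line):
                hits.append((str(path), no, line.strip()))
    return hits


if __name__ == "__main__":
    offenders = find_leftover_breakpoints(sys.argv[1] if len(sys.argv) > 1 else ".")
    for path, no, line in offenders:
        print(f"{path}:{no}: {line}")
    sys.exit(1 if offenders else 0)
```

Exit code 1 on any hit makes it a one-line addition to a CI pipeline or a pre-commit hook.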
What Proper Debugging Practices Save Your Business
Debugging tools are free. The ROI comes from the time they save and the outages they prevent:
Structured logging with per-module levels and SQL tracing pinpoints the root cause in minutes instead of hours of guesswork with print() statements.
py-spy flame graphs and EXPLAIN ANALYZE show the exact bottleneck. No more "let's try adding an index everywhere and see what sticks."
pdb, cProfile, pg_stat_statements, Chrome DevTools, py-spy — all free and open source. The investment is learning, not licensing.
The hidden ROI is developer retention. Developers who have proper debugging tools and know how to use them don't burn out from two-day debugging marathons. They diagnose issues systematically, fix them with confidence, and move on. That's the difference between a team that ships weekly and one that's permanently stuck fighting fires.