If your queries hang or return a timeout error, the first thing to check is whether you're hitting memory limits. ClickHouse aggressively kills queries that exceed max_memory_usage. Run SELECT query_id, memory_usage, query FROM system.processes to see what's eating resources. Often the fix is adding a LIMIT, rewriting a JOIN so the smaller table is on the right-hand side (ClickHouse builds the in-memory hash table from the right table), or bumping max_memory_usage in your session settings; do that consciously, not as a default.
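The inspection query and a session-level override can be sketched like this (the 10 GB limit is only an illustrative value, not a recommendation):

```sql
-- Currently running queries, heaviest memory consumers first
SELECT
    query_id,
    formatReadableSize(memory_usage) AS mem,
    query
FROM system.processes
ORDER BY memory_usage DESC;

-- Raise the per-query limit for this session only
-- (10000000000 bytes, roughly 10 GB, is just an example value)
SET max_memory_usage = 10000000000;
```

A SET issued in a session applies only to that session, so this is a safe way to unblock one heavy query without changing server defaults.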
The server listens on port 8123 for HTTP and 9000 for the native protocol. If you get 'Connection refused', check that the service is actually running: systemctl status clickhouse-server. Also verify that listen_host in config.xml isn't bound to 127.0.0.1 only; if it is, the server accepts local connections fine while refusing every remote one, which is easy to misread as a network problem.
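The relevant config fragment looks roughly like this; 0.0.0.0 binds all IPv4 interfaces, and :: covers both IPv4 and IPv6 (putting overrides in a file under config.d/ is the usual convention, though editing config.xml directly also works):

```xml
<!-- e.g. /etc/clickhouse-server/config.d/listen.xml -->
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
</clickhouse>
```

Note that older ClickHouse versions use a yandex root element instead of clickhouse. Restart the server after changing this, and make sure your firewall allows 8123/9000 as well.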
If a dictionary source is unavailable at startup, the whole dictionary silently stays empty: no crash, just stale or missing data. Enable dictionaries_lazy_load in config.xml so loading is deferred until first use rather than attempted at startup, and monitor source availability separately. You can reload manually with SYSTEM RELOAD DICTIONARY your_dict.
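To see whether a dictionary actually loaded and why it failed if not, the system.dictionaries table records the status and last error (your_dict below is a placeholder name):

```sql
-- Inspect dictionary health: load status and last error, if any
SELECT name, status, last_exception, loading_duration
FROM system.dictionaries;

-- Force a reload once the source is reachable again
SYSTEM RELOAD DICTIONARY your_dict;
```

A status of FAILED together with a populated last_exception is the typical signature of the silent-empty-dictionary problem described above.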
Streaming millions of rows into a browser-based UI like Tabix or the built-in Play UI will freeze the tab. Always use LIMIT during exploration. For exports, use clickhouse-client with --query and pipe the output to a file instead of loading it in the UI.
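A minimal sketch of that export path, keeping the data out of the browser entirely (the table and file names are placeholders):

```shell
# Export straight to a file; FORMAT CSVWithNames keeps the header row
clickhouse-client --query "SELECT * FROM your_table FORMAT CSVWithNames" > export.csv

# For very large results, compress on the fly
clickhouse-client --query "SELECT * FROM your_table FORMAT TSV" | gzip > export.tsv.gz
```

Because clickhouse-client streams rows as they arrive, this works for result sets far larger than anything a browser tab could hold.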
Unexpected disk usage often comes down to unmerged parts and detached partitions. Check SELECT name, rows, bytes_on_disk FROM system.parts WHERE active = 0; inactive parts are left over after merges and linger until background cleanup removes them, so they can pile up fast. Run OPTIMIZE TABLE ... FINAL with caution on large tables; on production it can spike I/O.
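A per-table aggregation is usually more useful than the raw part listing when hunting for where the space went:

```sql
-- Inactive parts per table: count and total size on disk
SELECT
    database,
    table,
    count() AS inactive_parts,
    formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE active = 0
GROUP BY database, table
ORDER BY inactive_parts DESC;
```

If one table dominates this list, inspect its merge activity in system.merges before reaching for OPTIMIZE ... FINAL.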