Cluster Settings

Cluster settings apply to all nodes of a CockroachDB cluster and control, for example, whether to share diagnostic details with Cockroach Labs, as well as advanced options for debugging and cluster tuning.

They can be updated anytime after a cluster has been started, but only by a member of the admin role, to which the root user belongs by default.

Note:

In contrast to cluster-wide settings, node-level settings apply to a single node. They are defined by flags passed to the cockroach start command when starting a node and cannot be changed without stopping and restarting the node. For more details, see Start a Node.

Settings

Warning:

These cluster settings have a broad impact on CockroachDB internals and affect all applications, workloads, and users running on a CockroachDB cluster. For some settings, a session setting could be a more appropriate scope.
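
For example, here is a minimal sketch of the difference in scope, using the sql.defaults.distsql cluster setting and its distsql session variable counterpart from the table below:

    -- Cluster-wide: changes the default for every node, application, and user.
    SET CLUSTER SETTING sql.defaults.distsql = 'on';

    -- Session-scoped: affects only the current SQL session, which is often the
    -- more appropriate scope for a single workload.
    SET distsql = 'on';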

Note:

New in v22.2: Use ALTER ROLE ALL SET {sessionvar} = {val} instead of the sql.defaults.* cluster settings. This allows you to set a default value for all users for any session variable that applies during login, making the sql.defaults.* cluster settings redundant.
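
For example, a minimal sketch of the recommended pattern, using the statement_timeout session variable (which has a sql.defaults.statement_timeout counterpart in the table below) as a stand-in for any session variable:

    -- Set a login-time default for all users instead of using sql.defaults.statement_timeout:
    ALTER ROLE ALL SET statement_timeout = '60s';

    -- Remove the default again:
    ALTER ROLE ALL RESET statement_timeout;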

Setting | Type | Default | Description
admission.disk_bandwidth_tokens.elastic.enabled
booleantruewhen true, and provisioned bandwidth for the disk corresponding to a store is configured, tokens for elastic work will be limited if disk bandwidth becomes a bottleneck
admission.epoch_lifo.enabled
booleanfalsewhen true, epoch-LIFO behavior is enabled when there is significant delay in admission
admission.epoch_lifo.epoch_closing_delta_duration
duration5msthe delta duration before closing an epoch, for epoch-LIFO admission control ordering
admission.epoch_lifo.epoch_duration
duration100msthe duration of an epoch, for epoch-LIFO admission control ordering
admission.epoch_lifo.queue_delay_threshold_to_switch_to_lifo
duration105msthe queue delay encountered by a (tenant,priority) for switching to epoch-LIFO ordering
admission.kv.enabled
booleantruewhen true, work performed by the KV layer is subject to admission control
admission.kv.stores.tenant_weights.enabled
booleanfalsewhen true, tenant weights are enabled for KV-stores admission control
admission.kv.tenant_weights.enabled
booleanfalsewhen true, tenant weights are enabled for KV admission control
admission.sql_kv_response.enabled
booleantruewhen true, work performed by the SQL layer when receiving a KV response is subject to admission control
admission.sql_sql_response.enabled
booleantruewhen true, work performed by the SQL layer when receiving a DistSQL response is subject to admission control
bulkio.backup.deprecated_full_backup_with_subdir.enabled
booleanfalsewhen true, a backup command with a user specified subdirectory will create a full backup at the subdirectory if no backup already exists at that subdirectory.
bulkio.backup.file_size
byte size128 MiBtarget size for individual data files produced during BACKUP
bulkio.backup.read_timeout
duration5m0samount of time after which a read attempt is considered timed out, which causes the backup to fail
bulkio.backup.read_with_priority_after
duration1m0samount of time since the read-as-of time above which a BACKUP should use priority when retrying reads
bulkio.stream_ingestion.minimum_flush_interval
duration5sthe minimum timestamp between flushes; flushes may still occur if internal buffers fill up
changefeed.backfill.scan_request_size
integer524288the maximum number of bytes returned by each scan request
changefeed.balance_range_distribution.enable
booleanfalseif enabled, the ranges are balanced equally among all nodes
changefeed.batch_reduction_retry_enabled
booleanfalseif true, kafka changefeeds upon erroring on an oversized batch will attempt to resend the messages with progressively lower batch sizes
changefeed.event_consumer_worker_queue_size
integer16if changefeed.event_consumer_workers is enabled, this setting sets the maximum number of events which a worker can buffer
changefeed.event_consumer_workers
integer0the number of workers to use when processing events: <0 disables, 0 assigns a reasonable default, >0 assigns the setting value. for experimental/core changefeeds and changefeeds using parquet format, this is disabled
changefeed.fast_gzip.enabled
booleantrueuse fast gzip implementation
changefeed.lagging_ranges_polling_interval
duration1m0sthe polling rate at which lagging ranges are checked and corresponding metrics are updated. will be removed in v23.2 onwards
changefeed.lagging_ranges_threshold
duration3m0sspecifies the duration by which a range must be lagging behind the present to be considered as 'lagging' behind in metrics. will be removed in v23.2 in favor of a changefeed option
changefeed.node_throttle_config
stringspecifies node-level throttling configuration for all changefeeds
changefeed.schema_feed.read_with_priority_after
duration1m0sretry with high priority if we were not able to read descriptors for too long; 0 disables
cloudstorage.azure.concurrent_upload_buffers
integer1controls the number of concurrent buffers that will be used by the Azure client when uploading chunks. Each buffer can buffer up to cloudstorage.write_chunk.size of memory during an upload
cloudstorage.http.custom_ca
stringcustom root CA (appended to system's default CAs) for verifying certificates when interacting with HTTPS storage
cloudstorage.timeout
duration10m0sthe timeout for import/export storage operations
cluster.organization
stringorganization name
cluster.preserve_downgrade_option
stringdisable (automatic or manual) cluster version upgrade from the specified version until reset
diagnostics.active_query_dumps.enabled
booleantrueexperimental: enable dumping of anonymized active queries to disk when node is under memory pressure
diagnostics.forced_sql_stat_reset.interval
duration2h0m0sinterval after which the reported SQL Stats are reset even if not collected by telemetry reporter. It has a max value of 24H.
diagnostics.reporting.enabled
booleantrueenable reporting diagnostic metrics to cockroach labs
diagnostics.reporting.interval
duration1h0m0sinterval at which diagnostics data should be reported
enterprise.license
stringthe encoded cluster license
external.graphite.endpoint
stringif nonempty, push server metrics to the Graphite or Carbon server at the specified host:port
external.graphite.interval
duration10sthe interval at which metrics are pushed to Graphite (if enabled)
feature.backup.enabled
booleantrueset to true to enable backups, false to disable; default is true
feature.changefeed.enabled
booleantrueset to true to enable changefeeds, false to disable; default is true
feature.export.enabled
booleantrueset to true to enable exports, false to disable; default is true
feature.import.enabled
booleantrueset to true to enable imports, false to disable; default is true
feature.restore.enabled
booleantrueset to true to enable restore, false to disable; default is true
feature.schema_change.enabled
booleantrueset to true to enable schema changes, false to disable; default is true
feature.stats.enabled
booleantrueset to true to enable CREATE STATISTICS/ANALYZE, false to disable; default is true
jobs.retention_time
duration336h0m0sthe amount of time to retain records for completed jobs before they are deleted
kv.allocator.lease_rebalance_threshold
float0.05minimum fraction away from the mean a store's lease count can be before it is considered for lease-transfers
kv.allocator.load_based_lease_rebalancing.enabled
booleantrueset to enable rebalancing of range leases based on load and latency
kv.allocator.load_based_rebalancing
enumerationleases and replicaswhether to rebalance based on the distribution of QPS across stores [off = 0, leases = 1, leases and replicas = 2]
kv.allocator.load_based_rebalancing_interval
duration1m0sthe rough interval at which each store will check for load-based lease / replica rebalancing opportunities
kv.allocator.qps_rebalance_threshold
float0.1minimum fraction away from the mean a store's QPS (such as queries per second) can be before it is considered overfull or underfull
kv.allocator.range_rebalance_threshold
float0.05minimum fraction away from the mean a store's range count can be before it is considered overfull or underfull
kv.bulk_io_write.max_rate
byte size1.0 TiBthe rate limit (bytes/sec) to use for writes to disk on behalf of bulk io ops
kv.bulk_sst.max_allowed_overage
byte size64 MiBif positive, allowed size in excess of target size for SSTs from export requests; export requests (i.e. BACKUP) may buffer up to the sum of kv.bulk_sst.target_size and kv.bulk_sst.max_allowed_overage in memory
kv.bulk_sst.target_size
byte size16 MiBtarget size for SSTs emitted from export requests; export requests (i.e. BACKUP) may buffer up to the sum of kv.bulk_sst.target_size and kv.bulk_sst.max_allowed_overage in memory
kv.closed_timestamp.follower_reads_enabled
booleantrueallow (all) replicas to serve consistent historical reads based on closed timestamp information
kv.log_range_and_node_events.enabled
booleantrueset to true to transactionally log range events (e.g., split, merge, add/remove voter/non-voter) into system.rangelog, and node join and restart events into system.eventlog
kv.protectedts.reconciliation.interval
duration5m0sthe frequency for reconciling jobs with protected timestamp records
kv.range_split.by_load_enabled
booleantrueallow automatic splits of ranges based on where load is concentrated
kv.range_split.load_qps_threshold
integer2500the QPS over which the range becomes a candidate for load-based splitting
kv.rangefeed.enabled
booleanfalseif set, rangefeed registration is enabled
kv.rangefeed.range_stuck_threshold
duration1m0srestart rangefeeds if they don't emit anything for the specified threshold; 0 disables (kv.closed_timestamp.side_transport_interval takes precedence)
kv.replica_circuit_breaker.slow_replication_threshold
duration1m0sduration after which slow proposals trip the per-Replica circuit breaker (zero duration disables breakers)
kv.replica_stats.addsst_request_size_factor
integer50000the divisor that is applied to addsstable request sizes, then recorded in a leaseholder's QPS; 0 means all requests are treated as cost 1
kv.replication_reports.interval
duration1m0sthe frequency for generating the replication_constraint_stats, replication_stats_report and replication_critical_localities reports (set to 0 to disable)
kv.snapshot_rebalance.max_rate
byte size32 MiBthe rate limit (bytes/sec) to use for rebalance and upreplication snapshots
kv.snapshot_recovery.max_rate
byte size32 MiBthe rate limit (bytes/sec) to use for recovery snapshots
kv.transaction.max_intents_bytes
integer4194304maximum number of bytes used to track locks in transactions
kv.transaction.max_refresh_spans_bytes
integer4194304maximum number of bytes used to track refresh spans in serializable transactions
kv.transaction.reject_over_max_intents_budget.enabled
booleanfalseif set, transactions that exceed their lock tracking budget (kv.transaction.max_intents_bytes) are rejected instead of having their lock spans imprecisely compressed
kvadmission.store.provisioned_bandwidth
byte size0 Bif set to a non-zero value, this is used as the provisioned bandwidth (in bytes/s) for each store. It can be overridden on a per-store basis using the --store flag
schedules.backup.gc_protection.enabled
booleantrueenable chaining of GC protection across backups run as part of a schedule
security.ocsp.mode
enumerationoffuse OCSP to check whether TLS certificates are revoked. If the OCSP server is unreachable, in strict mode all certificates will be rejected and in lax mode all certificates will be accepted. [off = 0, lax = 1, strict = 2]
security.ocsp.timeout
duration3stimeout before considering the OCSP server unreachable
server.auth_log.sql_connections.enabled
booleanfalseif set, log SQL client connect and disconnect events (note: may hinder performance on loaded nodes)
server.auth_log.sql_sessions.enabled
booleanfalseif set, log SQL session login/disconnection events (note: may hinder performance on loaded nodes)
server.authentication_cache.enabled
booleantrueenables a cache used during authentication to avoid lookups to system tables when retrieving per-user authentication-related information
server.child_metrics.enabled
booleanfalseenables the exporting of child metrics, additional prometheus time series with extra labels
server.client_cert_expiration_cache.capacity
integer1000the maximum number of client cert expirations stored
server.clock.forward_jump_check_enabled
booleanfalseif enabled, forward clock jumps > max_offset/2 will cause a panic
server.clock.persist_upper_bound_interval
duration0sthe interval between persisting the wall time upper bound of the clock. The clock does not generate a wall time greater than the persisted timestamp and will panic if it sees a wall time greater than this value. When cockroach starts, it waits for the wall time to catch up to this persisted timestamp. This guarantees monotonic wall time across server restarts. Not setting this or setting a value of 0 disables this feature.
server.consistency_check.max_rate
byte size8.0 MiBthe rate limit (bytes/sec) to use for consistency checks; used in conjunction with server.consistency_check.interval to control the frequency of consistency checks. Note that setting this too high can negatively impact performance.
server.eventlog.enabled
booleantrueif set, logged notable events are also stored in the table system.eventlog
server.eventlog.ttl
duration2160h0m0sif nonzero, entries in system.eventlog older than this duration are deleted every 10m0s. Should not be lowered below 24 hours.
server.host_based_authentication.configuration
stringhost-based authentication configuration to use during connection authentication
server.hsts.enabled
booleanfalseif true, HSTS headers will be sent along with all HTTP requests. The headers will contain a max-age setting of one year. Browsers honoring the header will always use HTTPS to access the DB Console. Ensure that TLS is correctly configured prior to enabling.
server.http.base_path
string/path to redirect the user to upon successful login
server.identity_map.configuration
stringsystem-identity to database-username mappings
server.max_connections_per_gateway
integer-1the maximum number of non-superuser SQL connections per gateway allowed at a given time (note: this will only limit future connection attempts and will not affect already established connections). Negative values result in unlimited number of connections. Superusers are not affected by this limit.
server.oidc_authentication.autologin
booleanfalseif true, logged-out visitors to the DB Console will be automatically redirected to the OIDC login endpoint
server.oidc_authentication.button_text
stringLogin with your OIDC providertext to show on button on DB Console login page to login with your OIDC provider (only shown if OIDC is enabled)
server.oidc_authentication.claim_json_key
stringsets JSON key of principal to extract from payload after OIDC authentication completes (usually email or sid)
server.oidc_authentication.client_id
stringsets OIDC client id
server.oidc_authentication.client_secret
stringsets OIDC client secret
server.oidc_authentication.enabled
booleanfalseenables or disables OIDC login for the DB Console
server.oidc_authentication.principal_regex
string(.+)regular expression to apply to extracted principal (see claim_json_key setting) to translate to SQL user (golang regex format, must include 1 grouping to extract)
server.oidc_authentication.provider_url
stringsets OIDC provider URL ({provider_url}/.well-known/openid-configuration must resolve)
server.oidc_authentication.redirect_url
stringhttps://localhost:8080/oidc/v1/callbacksets OIDC redirect URL via a URL string or a JSON string containing a required `redirect_urls` key with an object that maps from region keys to URL strings (URLs should point to your load balancer and must route to the path /oidc/v1/callback)
server.oidc_authentication.scopes
stringopenidsets OIDC scopes to include with authentication request (space delimited list of strings, required to start with `openid`)
server.rangelog.ttl
duration720h0m0sif nonzero, range log entries older than this duration are deleted every 10m0s. Should not be lowered below 24 hours.
server.secondary_tenants.redact_trace.enabled
booleantruecontrols if server side traces are redacted for tenant operations
server.shutdown.connection_wait
duration0sthe maximum amount of time a server waits for all SQL connections to be closed before proceeding with a drain. (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting)
server.shutdown.drain_wait
duration0sthe amount of time a server waits in an unready state before proceeding with a drain (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting. --drain-wait is to specify the duration of the whole draining process, while server.shutdown.drain_wait is to set the wait time for health probes to notice that the node is not ready.)
server.shutdown.lease_transfer_wait
duration5sthe timeout for a single iteration of the range lease transfer phase of draining (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting)
server.shutdown.query_wait
duration10sthe timeout for waiting for active queries to finish during a drain (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting)
server.time_until_store_dead
duration5m0sthe time after which if there is no new gossiped information about a store, it is considered dead
server.user_login.cert_password_method.auto_scram_promotion.enabled
booleantruewhether to automatically promote cert-password authentication to use SCRAM
server.user_login.downgrade_scram_stored_passwords_to_bcrypt.enabled
booleantrueif server.user_login.password_encryption=crdb-bcrypt, this controls whether to automatically re-encode stored scram-sha-256 passwords using crdb-bcrypt
server.user_login.min_password_length
integer1the minimum length accepted for passwords set in cleartext via SQL. Note that a value lower than 1 is ignored: passwords cannot be empty in any case.
server.user_login.password_encryption
enumerationscram-sha-256which hash method to use to encode cleartext passwords passed via ALTER/CREATE USER/ROLE WITH PASSWORD [crdb-bcrypt = 2, scram-sha-256 = 3]
server.user_login.password_hashes.default_cost.crdb_bcrypt
integer10the hashing cost to use when storing passwords supplied as cleartext by SQL clients with the hashing method crdb-bcrypt (allowed range: 4-31)
server.user_login.password_hashes.default_cost.scram_sha_256
integer10610the hashing cost to use when storing passwords supplied as cleartext by SQL clients with the hashing method scram-sha-256 (allowed range: 4096-240000000000)
server.user_login.rehash_scram_stored_passwords_on_cost_change.enabled
booleantrueif server.user_login.password_hashes.default_cost.scram_sha_256 differs from the cost in a stored hash, this controls whether to automatically re-encode stored passwords using scram-sha-256 with the new default cost
server.user_login.timeout
duration10stimeout after which client authentication times out if some system range is unavailable (0 = no timeout)
server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled
booleantrueif server.user_login.password_encryption=scram-sha-256, this controls whether to automatically re-encode stored crdb-bcrypt passwords using scram-sha-256
server.web_session.auto_logout.timeout
duration168h0m0sthe duration that web sessions will survive since they were last used before being periodically purged
server.web_session.purge.max_deletions_per_cycle
integer10the maximum number of old sessions to delete for each purge
server.web_session.purge.period
duration1h0m0sthe time until old sessions are deleted
server.web_session.purge.ttl
duration1h0m0sif nonzero, entries in system.web_sessions older than this duration are periodically purged
server.web_session_timeout
duration168h0m0sthe duration that a newly created web session will be valid
sql.auth.change_own_password.enabled
booleanfalsecontrols whether a user is allowed to change their own password, even if they have no other privileges
sql.auth.modify_cluster_setting_applies_to_all.enabled
booleantruea bool which indicates whether MODIFYCLUSTERSETTING is able to set all cluster settings or only settings with the sql.defaults prefix
sql.auth.resolve_membership_single_scan.enabled
booleantruedetermines whether to populate the role membership cache with a single scan
sql.closed_session_cache.capacity
integer1000the maximum number of sessions in the cache
sql.closed_session_cache.time_to_live
integer3600the maximum time to live, in seconds
sql.contention.event_store.capacity
byte size64 MiBthe in-memory storage capacity per-node of contention event store
sql.contention.event_store.duration_threshold
duration0sminimum contention duration to cause the contention events to be collected into crdb_internal.transaction_contention_events
sql.contention.txn_id_cache.max_size
byte size64 MiBthe maximum byte size TxnID cache will use (set to 0 to disable)
sql.cross_db_fks.enabled
booleanfalseif true, creating foreign key references across databases is allowed
sql.cross_db_sequence_owners.enabled
booleanfalseif true, creating sequences owned by tables from other databases is allowed
sql.cross_db_sequence_references.enabled
booleanfalseif true, sequences referenced by tables from other databases are allowed
sql.cross_db_views.enabled
booleanfalseif true, creating views that refer to other databases is allowed
sql.defaults.cost_scans_with_default_col_size.enabled
booleanfalsesetting to true uses the same size for all columns to compute scan cost
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.datestyle
enumerationiso, mdydefault value for DateStyle session setting [iso, mdy = 0, iso, dmy = 1, iso, ymd = 2]
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.default_hash_sharded_index_bucket_count
integer16used as bucket count if bucket count is not specified in hash sharded index definition
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.default_int_size
integer8the size, in bytes, of an INT type
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.disallow_full_table_scans.enabled
booleanfalsesetting to true rejects queries that have planned a full table scan
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.distsql
enumerationautodefault distributed SQL execution mode [off = 0, auto = 1, on = 2, always = 3]
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.experimental_alter_column_type.enabled
booleanfalsedefault value for experimental_alter_column_type session setting; enables the use of ALTER COLUMN TYPE for general conversions
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.experimental_distsql_planning
enumerationoffdefault experimental_distsql_planning mode; enables experimental opt-driven DistSQL planning [off = 0, on = 1]
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.experimental_enable_unique_without_index_constraints.enabled
booleanfalsedefault value for experimental_enable_unique_without_index_constraints session setting; disables unique without index constraints by default
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.experimental_implicit_column_partitioning.enabled
booleanfalsedefault value for experimental_enable_implicit_column_partitioning; allows for the use of implicit column partitioning
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.experimental_stream_replication.enabled
booleanfalsedefault value for experimental_stream_replication session setting; enables the ability to set up a replication stream
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.experimental_temporary_tables.enabled
booleanfalsedefault value for experimental_enable_temp_tables; allows for use of temporary tables by default
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.foreign_key_cascades_limit
integer10000default value for foreign_key_cascades_limit session setting; limits the number of cascading operations that run as part of a single query
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.idle_in_session_timeout
duration0sdefault value for the idle_in_session_timeout session setting; controls the duration a session is permitted to idle before the session is terminated; if set to 0, there is no timeout
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.idle_in_transaction_session_timeout
duration0sdefault value for the idle_in_transaction_session_timeout; controls the duration a session is permitted to idle in a transaction before the session is terminated; if set to 0, there is no timeout
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.implicit_select_for_update.enabled
booleantruedefault value for enable_implicit_select_for_update session setting; enables FOR UPDATE locking during the row-fetch phase of mutation statements
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.insert_fast_path.enabled
booleantruedefault value for enable_insert_fast_path session setting; enables a specialized insert path
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.intervalstyle
enumerationpostgresdefault value for IntervalStyle session setting [postgres = 0, iso_8601 = 1, sql_standard = 2]
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.large_full_scan_rows
float1000default value for large_full_scan_rows session setting which determines the maximum table size allowed for a full scan when disallow_full_table_scans is set to true
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.locality_optimized_partitioned_index_scan.enabled
booleantruedefault value for locality_optimized_partitioned_index_scan session setting; enables searching for rows in the current region before searching remote regions
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.lock_timeout
duration0sdefault value for the lock_timeout session setting; controls the duration a query is permitted to wait while attempting to acquire a lock on a key or while blocking on an existing lock in order to perform a non-locking read on a key; if set to 0, there is no timeout
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.on_update_rehome_row.enabled
booleantruedefault value for on_update_rehome_row; enables ON UPDATE rehome_row() expressions to trigger on updates
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.optimizer_use_histograms.enabled
booleantruedefault value for optimizer_use_histograms session setting; enables usage of histograms in the optimizer by default
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.optimizer_use_multicol_stats.enabled
booleantruedefault value for optimizer_use_multicol_stats session setting; enables usage of multi-column stats in the optimizer by default
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.override_alter_primary_region_in_super_region.enabled
booleanfalsedefault value for override_alter_primary_region_in_super_region; allows for altering the primary region even if the primary region is a member of a super region
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.override_multi_region_zone_config.enabled
booleanfalsedefault value for override_multi_region_zone_config; allows for overriding the zone configs of a multi-region table or database
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.prefer_lookup_joins_for_fks.enabled
booleanfalsedefault value for prefer_lookup_joins_for_fks session setting; causes foreign key operations to use lookup joins when possible
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.primary_region
stringif not empty, all databases created without a PRIMARY REGION will implicitly have the given PRIMARY REGION
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.reorder_joins_limit
integer8default number of joins to reorder
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.require_explicit_primary_keys.enabled
booleanfalsedefault value for requiring explicit primary keys in CREATE TABLE statements
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.results_buffer.size
byte size16 KiBdefault size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client. This can be overridden on an individual connection with the 'results_buffer_size' parameter. Note that auto-retries generally only happen while no results have been delivered to the client, so reducing this size can increase the number of retriable errors a client receives. On the other hand, increasing the buffer size can increase the delay until the client receives the first result row. Updating the setting only affects new connections. Setting to 0 disables any buffering.
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.serial_normalization
enumerationrowiddefault handling of SERIAL in table definitions [rowid = 0, virtual_sequence = 1, sql_sequence = 2, sql_sequence_cached = 3, unordered_rowid = 4]
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.statement_timeout
duration0sdefault value for the statement_timeout session setting; controls the duration a query is permitted to run before it is canceled; if set to 0, there is no timeout
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.stub_catalog_tables.enabled
booleantruedefault value for stub_catalog_tables session setting
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.super_regions.enabled
booleanfalsedefault value for enable_super_regions; allows for the usage of super regions
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.transaction_rows_read_err
integer0the limit for the number of rows read by a SQL transaction which - once exceeded - will fail the transaction (or will trigger a logging event to SQL_INTERNAL_PERF for internal transactions); use 0 to disable
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.transaction_rows_read_log
integer0the threshold for the number of rows read by a SQL transaction which - once exceeded - will trigger a logging event to SQL_PERF (or SQL_INTERNAL_PERF for internal transactions); use 0 to disable
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.transaction_rows_written_err
integer0the limit for the number of rows written by a SQL transaction which - once exceeded - will fail the transaction (or will trigger a logging event to SQL_INTERNAL_PERF for internal transactions); use 0 to disable
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.transaction_rows_written_log
integer0the threshold for the number of rows written by a SQL transaction which - once exceeded - will trigger a logging event to SQL_PERF (or SQL_INTERNAL_PERF for internal transactions); use 0 to disable
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.use_declarative_schema_changer
enumerationondefault value for use_declarative_schema_changer session setting; disables new schema changer by default [off = 0, on = 1, unsafe = 2, unsafe_always = 3]
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.vectorize
enumerationondefault vectorize mode [on = 0, on = 2, experimental_always = 3, off = 4]
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.defaults.zigzag_join.enabled
booleantruedefault value for enable_zigzag_join session setting; allows use of zig-zag join by default
This cluster setting is being kept to preserve backwards-compatibility.
This session variable default should now be configured using ALTER ROLE... SET
sql.distsql.max_running_flows
integer-128the value - when positive - used as is, or the value - when negative - multiplied by the number of CPUs on a node, to determine the maximum number of concurrent remote flows that can be run on the node
sql.distsql.temp_storage.workmem
byte size64 MiBmaximum amount of memory in bytes a processor can use before falling back to temp storage
sql.guardrails.max_row_size_err
byte size512 MiBmaximum size of row (or column family if multiple column families are in use) that SQL can write to the database, above which an error is returned; use 0 to disable
sql.guardrails.max_row_size_log
byte size64 MiBmaximum size of row (or column family if multiple column families are in use) that SQL can write to the database, above which an event is logged to SQL_PERF (or SQL_INTERNAL_PERF if the mutating statement was internal); use 0 to disable
sql.hash_sharded_range_pre_split.max
integer16max pre-split ranges to have when adding hash sharded index to an existing table
sql.insights.anomaly_detection.enabled
booleantrueenable per-fingerprint latency recording and anomaly detection
sql.insights.anomaly_detection.latency_threshold
duration50msstatements must surpass this threshold to trigger anomaly detection and identification
sql.insights.anomaly_detection.memory_limit
byte size1.0 MiBthe maximum amount of memory allowed for tracking statement latencies
sql.insights.execution_insights_capacity
integer1000the size of the per-node store of execution insights
sql.insights.high_retry_count.threshold
integer10the number of retries a slow statement must have undergone for its high retry count to be highlighted as a potential problem
sql.insights.latency_threshold
duration100msamount of time after which an executing statement is considered slow. Use 0 to disable.
sql.log.slow_query.experimental_full_table_scans.enabled
booleanfalsewhen set to true, statements that perform a full table/index scan will be logged to the slow query log even if they do not meet the latency threshold. Must have the slow query log enabled for this setting to have any effect.
sql.log.slow_query.internal_queries.enabled
booleanfalsewhen set to true, internal queries which exceed the slow query log threshold are logged to a separate log. Must have the slow query log enabled for this setting to have any effect.
sql.log.slow_query.latency_threshold
duration0swhen set to non-zero, log statements whose service latency exceeds the threshold to a secondary logger on each node
sql.metrics.index_usage_stats.enabled
booleantruecollect per index usage statistics
sql.metrics.max_mem_reported_stmt_fingerprints
integer100000the maximum number of reported statement fingerprints stored in memory
sql.metrics.max_mem_reported_txn_fingerprints
integer100000the maximum number of reported transaction fingerprints stored in memory
sql.metrics.max_mem_stmt_fingerprints
integer100000the maximum number of statement fingerprints stored in memory
sql.metrics.max_mem_txn_fingerprints
integer100000the maximum number of transaction fingerprints stored in memory
sql.metrics.statement_details.dump_to_logs
booleanfalsedump collected statement statistics to node logs when periodically cleared
sql.metrics.statement_details.enabled
booleantruecollect per-statement query statistics
sql.metrics.statement_details.gateway_node.enabled
booleantruesave the gateway node for each statement fingerprint. If false, the value will be stored as 0.
sql.metrics.statement_details.index_recommendation_collection.enabled
booleantruegenerate an index recommendation for each fingerprint ID
sql.metrics.statement_details.max_mem_reported_idx_recommendations
integer5000the maximum number of reported index recommendation info stored in memory
sql.metrics.statement_details.plan_collection.enabled
booleanfalseperiodically save a logical plan for each fingerprint
sql.metrics.statement_details.plan_collection.period
duration5m0sthe time until a new logical plan is collected
sql.metrics.statement_details.threshold
duration0sminimum execution time to cause statement statistics to be collected. If configured, no transaction stats are collected.
sql.metrics.transaction_details.enabled
booleantruecollect per-application transaction statistics
sql.multiple_modifications_of_table.enabled
booleanfalseif true, allow statements containing multiple INSERT ON CONFLICT, UPSERT, UPDATE, or DELETE subqueries modifying the same table, at the risk of data corruption if the same row is modified multiple times by a single statement (multiple INSERT subqueries without ON CONFLICT cannot cause corruption and are always allowed)
sql.multiregion.drop_primary_region.enabled
booleantrueallows dropping the PRIMARY REGION of a database if it is the last region
sql.notices.enabled
booleantrueenable notices in the server/client protocol being sent
sql.optimizer.uniqueness_checks_for_gen_random_uuid.enabled
booleanfalseif enabled, uniqueness checks may be planned for mutations of UUID columns updated with gen_random_uuid(); otherwise, uniqueness is assumed due to near-zero collision probability
sql.schema.telemetry.recurrence
string@weeklycron-tab recurrence for SQL schema telemetry job
sql.spatial.experimental_box2d_comparison_operators.enabled
booleanfalseenables the use of certain experimental box2d comparison operators
sql.stats.automatic_collection.enabled
booleantrueautomatic statistics collection mode
sql.stats.automatic_collection.fraction_stale_rows
float0.2target fraction of stale rows per table that will trigger a statistics refresh
sql.stats.automatic_collection.min_stale_rows
integer500target minimum number of stale rows per table that will trigger a statistics refresh
sql.stats.cleanup.recurrence
string@hourlycron-tab recurrence for SQL Stats cleanup job
sql.stats.flush.enabled
booleantrueif set, SQL execution statistics are periodically flushed to disk
sql.stats.flush.interval
duration10m0sthe interval at which SQL execution statistics are flushed to disk; this value must be less than or equal to 1 hour
sql.stats.forecasts.enabled
booleantruewhen true, enables generation of statistics forecasts by default for all tables
sql.stats.histogram_buckets.count
integer200maximum number of histogram buckets to build during table statistics collection
sql.stats.histogram_collection.enabled
booleantruehistogram collection mode
sql.stats.histogram_samples.count
integer10000number of rows sampled for histogram construction during table statistics collection
sql.stats.multi_column_collection.enabled
booleantruemulti-column statistics collection mode
sql.stats.non_default_columns.min_retention_period
duration24h0m0sminimum retention period for table statistics collected on non-default columns
sql.stats.persisted_rows.max
integer1000000maximum number of rows of statement and transaction statistics that will be persisted in the system tables
sql.stats.post_events.enabled
booleanfalseif set, an event is logged for every CREATE STATISTICS job
sql.stats.response.max
integer20000the maximum number of statements and transaction stats returned in a CombinedStatements request
sql.stats.response.show_internal.enabled
booleanfalsecontrols if statistics for internal executions should be returned by the CombinedStatements and if internal sessions should be returned by the ListSessions endpoints. These endpoints are used to display statistics on the SQL Activity pages
sql.stats.system_tables.enabled
booleantruewhen true, enables use of statistics on system tables by the query optimizer
sql.stats.system_tables_autostats.enabled
booleantruewhen true, enables automatic collection of statistics on system tables
sql.telemetry.query_sampling.enabled
booleanfalsewhen set to true, executed queries will emit an event on the telemetry logging channel
sql.temp_object_cleaner.cleanup_interval
duration30m0show often to clean up orphaned temporary objects
sql.temp_object_cleaner.wait_interval
duration30m0show long after creation a temporary object will be cleaned up
sql.trace.log_statement_execute
booleanfalseset to true to enable logging of executed statements
sql.trace.session_eventlog.enabled
booleanfalseset to true to enable session tracing; note that enabling this may have a negative performance impact
sql.trace.stmt.enable_threshold
duration0senables tracing on all statements; statements executing for longer than this duration will have their trace logged (set to 0 to disable); note that enabling this may have a negative performance impact; this setting applies to individual statements within a transaction and is therefore finer-grained than sql.trace.txn.enable_threshold
sql.trace.txn.enable_threshold
duration0senables tracing on all transactions; transactions open for longer than this duration will have their trace logged (set to 0 to disable); note that enabling this may have a negative performance impact; this setting is coarser-grained than sql.trace.stmt.enable_threshold because it applies to all statements within a transaction as well as client communication (e.g. retries)
sql.ttl.default_delete_batch_size
integer100default number of rows to delete in a single query during a TTL job
sql.ttl.default_delete_rate_limit
integer0default delete rate limit for all TTL jobs. Use 0 to signify no rate limit.
sql.ttl.default_select_batch_size
integer500default number of rows to select in a single query during a TTL job
sql.ttl.job.enabled
booleantruewhether the TTL job is enabled
sql.txn_fingerprint_id_cache.capacity
integer100the maximum number of txn fingerprint IDs stored
storage.max_sync_duration
duration20smaximum duration for disk operations; any operations that take longer than this setting trigger a warning log entry or process crash
storage.max_sync_duration.fatal.enabled
booleantrueif true, fatal the process when a disk operation exceeds storage.max_sync_duration
timeseries.storage.enabled
booleantrueif set, periodic timeseries data is stored within the cluster; disabling is not recommended unless you are storing the data elsewhere
timeseries.storage.resolution_10s.ttl
duration240h0m0sthe maximum age of time series data stored at the 10 second resolution. Data older than this is subject to rollup and deletion.
timeseries.storage.resolution_30m.ttl
duration2160h0m0sthe maximum age of time series data stored at the 30 minute resolution. Data older than this is subject to deletion.
trace.debug.enable
booleanfalseif set, traces for recent requests can be seen at https://<ui>/debug/requests
trace.jaeger.agent
stringthe address of a Jaeger agent to receive traces using the Jaeger UDP Thrift protocol, as <host>:<port>. If no port is specified, 6381 will be used.
trace.opentelemetry.collector
stringaddress of an OpenTelemetry trace collector to receive traces using the otel gRPC protocol, as <host>:<port>. If no port is specified, 4317 will be used.
trace.snapshot.rate
duration0sif non-zero, interval at which background trace snapshots are captured
trace.span_registry.enabled
booleantrueif set, ongoing traces can be seen at https://<ui>/#/debug/tracez
trace.zipkin.collector
stringthe address of a Zipkin instance to receive traces, as <host>:<port>. If no port is specified, 9411 will be used.
version
version22.2set the active cluster version in the format '<major>.<minor>'

View current cluster settings

Use the SHOW CLUSTER SETTING statement.
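
For example, to inspect a single setting from the table above, or all settings at once:

    SHOW CLUSTER SETTING diagnostics.reporting.enabled;

    SHOW ALL CLUSTER SETTINGS;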

Change a cluster setting

Use the SET CLUSTER SETTING statement.
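
For example, as a member of the admin role, a setting from the table above can be changed and later restored to its default:

    SET CLUSTER SETTING diagnostics.reporting.enabled = false;

    -- Restore the default value:
    RESET CLUSTER SETTING diagnostics.reporting.enabled;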

Before changing a cluster setting, note the following:

  • Changing a cluster setting is not instantaneous, as the change must be propagated to other nodes in the cluster.

  • Do not change cluster settings while upgrading to a new version of CockroachDB. Wait until all nodes have been upgraded before you make the change; a short example of checking upgrade-related settings follows this list.
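
As an illustrative sketch, the version and cluster.preserve_downgrade_option settings listed above can be inspected before and during an upgrade to confirm the cluster's state without modifying anything:

    -- Confirm the active cluster version:
    SHOW CLUSTER SETTING version;

    -- Confirm whether automatic upgrade finalization is being held back:
    SHOW CLUSTER SETTING cluster.preserve_downgrade_option;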

See also

