Elasticsearch: FORBIDDEN/12/index read-only / allow delete (api) error

Elasticsearch takes available disk space into account when deciding whether to allocate shards to a node. If too little disk space is left, Elasticsearch puts the affected indices into read-only mode. The settings below control this behavior and are enabled by default.
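
Before changing anything, it helps to check how full each node's disk actually is. A quick way to do that, assuming Elasticsearch is reachable on localhost:9200, is the _cat allocation API:

curl -s "localhost:9200/_cat/allocation?v"

The disk.percent column shows how close each node is to the watermarks listed below.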

cluster.routing.allocation.disk.threshold_enabled: Defaults to true and enables the following settings.

cluster.routing.allocation.disk.watermark.low: Defaults to 85%, which means Elasticsearch will not allocate new shards to a node with more than 85% of its disk space used.

cluster.routing.allocation.disk.watermark.high: Defaults to 90%, which means Elasticsearch will try to relocate shards away from a node with 90% or more of its disk space used.

cluster.routing.allocation.disk.watermark.flood_stage: Defaults to 95%, which means Elasticsearch enforces a read-only block (index.blocks.read_only_allow_delete) on every index that has one or more shards on a node with 95% or more of its disk space used. This block is what produces the FORBIDDEN/12 error.
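
To see which of these values are currently in effect on a cluster (localhost:9200 assumed again), the cluster settings API can return the defaults along with any overrides:

curl -s "localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk*"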

Solutions:

Free up some disk space: If possible, free up disk space so that more than 5% of the disk is free, i.e. back under the flood-stage watermark. Note that freeing up disk space does not remove the block by itself; the read-only setting has to be cleared explicitly:

PUT /twitter/_settings
{
  "index.blocks.read_only_allow_delete": null
}
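
The block is usually applied to many indices at once, so it is often easier to clear it on all of them in one call. A curl sketch, assuming the cluster is reachable on localhost:9200 (the _all pattern targets every index; narrow it if needed):

curl -s -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'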

Disable or change the settings: Alternatively, we can change the watermark settings. When given absolute values instead of percentages, the watermarks specify the minimum free space required, as in the example below.

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
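
Note that transient settings are lost on a full cluster restart; put the same keys under "persistent" instead if the change should survive restarts. The applied values can be verified with (localhost:9200 assumed):

curl -s "localhost:9200/_cluster/settings?filter_path=transient"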

More information on this issue can be found here: https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html

Elasticsearch: Cluster Setup

Elasticsearch requires very little configuration. It can run standalone right after installation, with no config changes at all, but a cluster setup needs a few minor changes.

Elastic.co provides comprehensive setup instructions in its official documentation.

Before we start:

Master node: Responsible for cluster management, such as discovering healthy nodes and adding them to or removing them from the cluster.

Data node: Stores data and runs searches and aggregations.

Node name / cluster name: Elasticsearch assigns node and cluster names automatically, but it is better to set them explicitly for visibility and easier troubleshooting.
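
Once a cluster is running, node and cluster names along with each node's role can be checked with the _cat nodes API (localhost:9200 assumed; any node in the cluster works):

curl -s "localhost:9200/_cat/nodes?v&h=name,ip,node.role,master"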

Configuring cluster:

Let's set up a cluster with three nodes: one dedicated master and two data nodes. Assume the master node's IP is 192.168.100.10 and the two data nodes' IPs are 192.168.100.20 and 192.168.100.30 respectively.

On Master node:

sudo vi /etc/elasticsearch/elasticsearch.yml

 cluster.name: "sit_env"
 #node name, change as required
 node.name: "sit_node_1"
 #path where data will be saved
 path.data: /var/lib/elasticsearch
 #path for log files, can be changed as required
 path.logs: /var/log/elasticsearch
 #dedicated master: master-eligible, holds no data
 node.master: true
 node.data: false
 #note: index defaults such as index.number_of_shards and
 #index.number_of_replicas cannot be set in elasticsearch.yml since 5.x;
 #set them per index or via an index template instead
 #(e.g. 6 shards and 1 replica, i.e. total data nodes - 1)
 #IP address of this host
 network.host: 192.168.100.10
 #port
 http.port: 9200
 #path where backup snapshots will be saved
 path.repo: /mnt/elasticsearch/elasticsearch_backup
 #IPs of all nodes
 discovery.zen.ping.unicast.hosts: ["192.168.100.10", "192.168.100.20", "192.168.100.30"]

Save and exit, then restart Elasticsearch:

sudo service elasticsearch restart
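
If the service came up cleanly, the node should answer on the host and port configured above:

curl -s "192.168.100.10:9200"

This returns a small JSON banner containing the node name, cluster name, and version.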

Data node 1:

 cluster.name: "sit_env"
 #node name, change as required
 node.name: "sit_node_2"
 #path where data will be saved
 path.data: /var/lib/elasticsearch
 #path for log files, can be changed as required
 path.logs: /var/log/elasticsearch
 #data node: stores data, not master-eligible
 node.master: false
 node.data: true
 #index defaults removed; see the note in the master config above
 #IP address of this host
 network.host: 192.168.100.20
 #port
 http.port: 9200
 #path where backup snapshots will be saved
 path.repo: /mnt/elasticsearch/elasticsearch_backup
 #IPs of all nodes
 discovery.zen.ping.unicast.hosts: ["192.168.100.10", "192.168.100.20", "192.168.100.30"]

Save and exit, then restart Elasticsearch:

sudo service elasticsearch restart

Data node 2:

 cluster.name: "sit_env"
 #node name, change as required
 node.name: "sit_node_3"
 #path where data will be saved
 path.data: /var/lib/elasticsearch
 #path for log files, can be changed as required
 path.logs: /var/log/elasticsearch
 #data node: stores data, not master-eligible
 node.master: false
 node.data: true
 #index defaults removed; see the note in the master config above
 #IP address of this host
 network.host: 192.168.100.30
 #port
 http.port: 9200
 #path where backup snapshots will be saved
 path.repo: /mnt/elasticsearch/elasticsearch_backup
 #IPs of all nodes
 discovery.zen.ping.unicast.hosts: ["192.168.100.10", "192.168.100.20", "192.168.100.30"]

Save and exit, then restart Elasticsearch:

sudo service elasticsearch restart
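
Once all three nodes have been restarted, cluster formation can be verified from any node (the master IP is used here); number_of_nodes should report 3:

curl -s "192.168.100.10:9200/_cluster/health?pretty"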