Install Kibana
> yum remove kibana (removes any existing Kibana install and its configuration)
Download the Kibana RPM from the link below:
https://www.elastic.co/guide/en/kibana/5.4/rpm.html
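A minimal install sketch, assuming the 5.4.0 RPM (adjust the version to match the one you download):
> wget https://artifacts.elastic.co/downloads/kibana/kibana-5.4.0-x86_64.rpm
> rpm -ivh kibana-5.4.0-x86_64.rpm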
> touch /var/log/kibana.log
> chown kibana:kibana /var/log/kibana.log
> vi /etc/kibana/kibana.yml
server.port: 5601
server.host: "HOST_NAME"
server.name: "SERVER_NAME"
elasticsearch.url: "http://ES_URL:9200"
logging.dest: /var/log/kibana.log
Start Kibana and check its status on CentOS 7
> systemctl start kibana
> systemctl status kibana
Check the log
> journalctl -u kibana
Check that the Kibana port is open
> nc -z localhost 5601
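Optionally, confirm Kibana is actually serving requests via its status API (available in 5.x):
> curl -s http://localhost:5601/api/status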
Install Cerebro
> wget https://github.com/lmenezes/cerebro/releases/download/v0.8.3/cerebro-0.8.3.tgz
> tar -xvf cerebro-0.8.3.tgz
Edit the Cerebro service file on CentOS 7 and update it with the location where Cerebro was extracted
> vim /etc/systemd/system/cerebro.service
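A minimal unit file sketch, assuming Cerebro was extracted to /opt/cerebro-0.8.3 (adjust the path and user to your layout):
[Unit]
Description=Cerebro Elasticsearch admin UI
After=network.target

[Service]
WorkingDirectory=/opt/cerebro-0.8.3
ExecStart=/opt/cerebro-0.8.3/bin/cerebro
Restart=on-failure

[Install]
WantedBy=multi-user.target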
Edit the Cerebro configuration and point it at the Elasticsearch cluster it should connect to
> vim cerebro-0.8.3/conf/application.conf
Uncomment and update the ES host, port, and cluster name in the hosts block, then start the service
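The relevant hosts block in application.conf looks roughly like this (placeholder values from this guide):
hosts = [
  {
    host = "http://ES_URL:9200"
    name = "ent-stage1-es"
  }
]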
> systemctl start cerebro
> systemctl status cerebro
Or start it manually:
> ./bin/cerebro -Dhttp.port=1234 -Dhttp.address=127.0.0.1
Check the log
> journalctl -u cerebro
Check that the Cerebro port is open (9000 by default; use the value passed to -Dhttp.port if you started it manually)
> nc -z localhost 9000
Install Elasticsearch
Download from the link below:
https://www.elastic.co/guide/en/elasticsearch/reference/5.4/gs-installation.html
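A minimal install sketch, assuming the 5.4.0 RPM (adjust the version to match the one you download):
> wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.0.rpm
> rpm -ivh elasticsearch-5.4.0.rpm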
Increase the maximum number of open file handles on Linux
> sysctl -w fs.file-max=500000
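sysctl -w only lasts until the next reboot; to persist the setting, append it to /etc/sysctl.conf and reload:
> echo 'fs.file-max=500000' >> /etc/sysctl.conf
> sysctl -p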
Cron job to purge the ES log files matching the cluster name (runs at the top of every hour)
> 0 * * * * /bin/bash -c "/bin/find /var/log/elasticsearch -type f | /bin/grep -Pi '(ent\-stage1\-es)(\d{1,4}\-?){1,3}' | /bin/xargs rm -f"
Increase the JVM heap for ES (set min and max to the same value; keep it at or below 50% of RAM)
> vim /etc/elasticsearch/jvm.options
-Xms8g
-Xmx8g
Modify the ES Configuration
> vim /etc/elasticsearch/elasticsearch.yml
cluster.name: ent-stage1-es
node.master: true
node.data: false (set to true on data nodes)
node.ingest: false (set to true on ingest nodes)
path.data: /mnt/elasticsearch-data
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
List only the master-eligible nodes here; data nodes join automatically based on the shared cluster name
discovery.zen.ping.unicast.hosts: ["ES_MASTER_NODE_IP1", "ES_MASTER_NODE_IP2", "ES_MASTER_NODE_IP3"]
Run an odd number of master-eligible nodes (1, 3, or 5) to avoid split brain, and set minimum_master_nodes to a quorum, (master-eligible nodes / 2) + 1; for the three masters listed above that is 2
discovery.zen.minimum_master_nodes: 2
Note: The cluster name must be identical on master nodes, data nodes, and all other nodes. Set node.master: true only on master nodes and false everywhere else. Use an odd number of master-eligible nodes (1, 3, or 5) to avoid split brain; refer to the ES documentation for details.
Master node: set only node.master: true
Data node: set only node.data: true (see the example below)
Ingest node: set only node.ingest: true
Coordinating node: set node.master: false, node.data: false, and node.ingest: false; the node then serves as a dedicated coordinating node.
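For example, a dedicated data node would carry these role flags in its elasticsearch.yml (all other settings above stay the same):
node.master: false
node.data: true
node.ingest: false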
> systemctl start elasticsearch
> systemctl status elasticsearch
Check the log
> journalctl -u elasticsearch
Check that the Elasticsearch port is open
> nc -z localhost 9200
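You can also check cluster health over HTTP:
> curl -s http://localhost:9200/_cluster/health?pretty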
Add or remove X-Pack in ES
> cd /usr/share/elasticsearch/bin/
> ./elasticsearch-plugin <install|remove> x-pack
Exclude / decommission a node
First run the query below in Kibana. Once the shards have moved to other data nodes, shut down Elasticsearch on that node. You can use the Cerebro overview to click on the node and check whether any documents remain on it.
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "<IP_ADDRESS>"
  }
}
Note: Even if the node is both a master and a data node, you can still apply the setting above to move its shards to the other data nodes.
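To watch the shards drain, and to clear the exclusion once the node has been removed (setting the value to null resets it), run in Kibana:
GET _cat/shards?v

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": null
  }
}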
Adding a node (Data):
Install the same ES version as on the other nodes in the cluster, set cluster.name in elasticsearch.yml to match the other nodes, set node.data: true, and bring ES up on the new node. Based on the cluster name, the new node attaches to the existing cluster.
Adding a node (Master):
Install the same ES version as on the other nodes in the cluster, set cluster.name in elasticsearch.yml to match the other nodes, set node.master: true, and bring ES up on the new node. Based on the cluster name, the new node attaches to the existing cluster.
You may also need to edit elasticsearch.yml on the other nodes to add the new master's IP to discovery.zen.ping.unicast.hosts and raise discovery.zen.minimum_master_nodes to the new quorum.
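Since discovery.zen.minimum_master_nodes is a dynamic setting in 5.x, it can also be raised on the running cluster without a restart; for example, after growing from one to three master-eligible nodes (quorum = 2):
PUT _cluster/settings
{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}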
System Configuration Suggestions
1) Prefer r5 instance types for master nodes and i3 for data nodes: master nodes need more memory, while i3 instances provide higher IOPS for data.