Create Rancher Environment
Go to [Manage Environments] => [Add Environment], set the environment name, select [Cattle] template, then hit [Create].
Prepare the New Environment
After the environment is created, add hosts to it. Rancher will then create several infrastructure stacks, including 'healthcheck', 'ipsec', 'network-service', and 'scheduler', which provide the cluster networking infrastructure.
Then create a new infrastructure stack named nfs via [Add from Catalog]. All the Elastic Stack cluster configs will be stored on rancher-nfs.
Before adding the nfs stack in Rancher, an NFS service must first be set up on a VM.
```shell
# Install & start the NFS service on ELK01
yum install nfs-utils
systemctl enable rpcbind
systemctl enable nfs
systemctl start rpcbind
systemctl start nfs

# Add the export to /etc/exports
vi /etc/exports
  /opt/nfs x.x.x.0/24(rw,sync,no_root_squash,no_subtree_check)
systemctl restart nfs

# Check the NFS service status
showmount -e localhost
```
On the stack creation screen, set NFS Server to the NFS server's IP address and Export Base Directory to the NFS directory (/opt/nfs); leave the other options at their defaults.
Create Elasticsearch Stack
First create the volumes needed by the Elasticsearch services: elasticsearch_client, elasticsearch_master, and es_backup.
Then set vm.max_map_count=262144 in /etc/sysctl.conf on every host.
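The step above can be scripted idempotently; a sketch, assuming root on each host (the SYSCTL_CONF override exists only to make the sketch easy to dry-run):

```shell
# Persist vm.max_map_count=262144 (run as root on every Elasticsearch host).
SYSCTL_CONF=${SYSCTL_CONF:-/etc/sysctl.conf}
REQUIRED=262144
if grep -q '^vm\.max_map_count' "$SYSCTL_CONF" 2>/dev/null; then
  # update an existing entry in place
  sed -i "s/^vm\.max_map_count.*/vm.max_map_count=$REQUIRED/" "$SYSCTL_CONF"
else
  # append a new entry
  echo "vm.max_map_count=$REQUIRED" >> "$SYSCTL_CONF"
fi
grep '^vm\.max_map_count' "$SYSCTL_CONF"
```

Apply it immediately with `sysctl -p`; Elasticsearch refuses to bootstrap on hosts where this limit is too low.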
docker-compose.yml:
```yaml
version: '2'
volumes:
  elasticsearch_master:
    external: true
    driver: rancher-nfs
  elasticsearch_client:
    external: true
    driver: rancher-nfs
  es_backup:
    external: true
    driver: rancher-nfs
services:
  elasticsearch-master:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    stdin_open: true
    volumes:
    - elasticsearch_master:/usr/share/elasticsearch
    - es_backup:/usr/share/elasticsearch/es_backup
    - /etc/localtime:/etc/localtime:ro
    tty: true
    links:
    - elasticsearch-client:elasticsearch-client
    - elasticsearch-data:elasticsearch-data
    labels:
      io.rancher.container.pull_image: always
      io.rancher.scheduler.global: 'true'
  elasticsearch-data:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    stdin_open: true
    volumes:
    - /opt/elasticsearch_data/elasticsearch:/usr/share/elasticsearch
    - es_backup:/usr/share/elasticsearch/es_backup
    - /etc/localtime:/etc/localtime:ro
    tty: true
    links:
    - elasticsearch-client:elasticsearch-client
    - elasticsearch-master:elasticsearch-master
    labels:
      io.rancher.container.pull_image: always
      io.rancher.scheduler.global: 'true'
  elasticsearch-client:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    environment:
      NETWORK_ADDR: 0.0.0.0
    stdin_open: true
    volumes:
    - elasticsearch_client:/usr/share/elasticsearch
    - es_backup:/usr/share/elasticsearch/es_backup
    - /etc/localtime:/etc/localtime:ro
    tty: true
    links:
    - elasticsearch-master:elasticsearch-master
    - elasticsearch-data:elasticsearch-data
    labels:
      io.rancher.container.pull_image: always
      io.rancher.scheduler.global: 'true'
```
rancher-compose.yml:
```yaml
version: '2'
services:
  elasticsearch-master:
    start_on_create: true
  elasticsearch-data:
    start_on_create: true
  elasticsearch-client:
    start_on_create: true
```
During container creation you may see errors like `can't find bin/elasticsearch`. To fix this, manually download the Linux archive from elastic.co, put it under /opt/elasticsearch, then adjust some directory permissions: chmod 775 config logs data.
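The permission fix can be sketched as a short script; ES_HOME is our placeholder for wherever the archive was extracted, not a name Rancher defines:

```shell
# Fix directory permissions after manually extracting the Elasticsearch archive.
# ES_HOME is an assumption for illustration; point it at the real volume path.
ES_HOME=${ES_HOME:-/tmp/es-demo}
mkdir -p "$ES_HOME/config" "$ES_HOME/logs" "$ES_HOME/data"
cd "$ES_HOME"
chmod 775 config logs data
stat -c '%a %n' config logs data
```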
Change ES Configs
elasticsearch_master/config/elasticsearch.yml:
```yaml
cluster.name: "rancher-es-pp"
network.host: 0.0.0.0
node.name: node_master
node.max_local_storage_nodes: 3
node.master: true
node.data: false
path.repo: ["/usr/share/elasticsearch/es_backup"]
transport.tcp.port: 9300
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-credentials: true
bootstrap.system_call_filter: false
#bootstrap.mlockall: true
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["elasticsearch-data","elasticsearch-client"]
xpack.security.http.ssl.enabled: false
```
elasticsearch_client/config/elasticsearch.yml:
```yaml
cluster.name: "rancher-es-pp"
network.host: 0.0.0.0
node.name: node_client
node.max_local_storage_nodes: 3
node.master: false
node.data: false
node.ingest: true
path.repo: ["/usr/share/elasticsearch/es_backup"]
transport.tcp.port: 9300
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-credentials: true
bootstrap.system_call_filter: false
#bootstrap.mlockall: true
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["elasticsearch-data","elasticsearch-master"]
xpack.security.http.ssl.enabled: false
```
/opt/elasticsearch_data/elasticsearch/config/elasticsearch.yml:
```yaml
cluster.name: "rancher-es-pp"
network.host: 0.0.0.0
bootstrap.system_call_filter: false
transport.tcp.port: 9300
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-credentials: true
# remember to change node.name on different nodes.
node.name: node_data_01
node.master: false
node.data: true
path.repo: ["/usr/share/elasticsearch/es_backup"]
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["elasticsearch-master","elasticsearch-client"]
xpack.security.http.ssl.enabled: false
thread_pool.bulk.queue_size: 1000
```
After all the configuration is in place, start the Elasticsearch service. This part is finished once the logs show no errors.
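A quick way to confirm the cluster came up is the _cluster/health API on the client node; a sketch, where the URL is an assumption:

```shell
# Extract the "status" field from a cluster-health JSON response.
es_status() {
  sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}

# Against the live cluster (URL is an assumption):
#   curl -s http://elasticsearch.example.com:9200/_cluster/health | es_status
# A canned response shows the expected shape:
echo '{"cluster_name":"rancher-es-pp","status":"green","number_of_nodes":5}' | es_status
# prints: green
```

A status of green (or yellow while replicas allocate) means the master, client, and data nodes found each other.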
Create Logstash Stack
First create the volumes needed by the Logstash service: logstash-config, logstash-patterns, and logstash-pipeline.
docker-compose.yml:
```yaml
version: '2'
volumes:
  logstash-config:
    external: true
    driver: rancher-nfs
  logstash-patterns:
    external: true
    driver: rancher-nfs
  logstash-pipeline:
    external: true
    driver: rancher-nfs
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.0
    stdin_open: true
    external_links:
    - elasticsearch/elasticsearch-client:elasticsearch-client
    volumes:
    - logstash-config:/usr/share/logstash/config
    - logstash-pipeline:/usr/share/logstash/pipeline
    - /etc/localtime:/etc/localtime:ro
    - logstash-patterns:/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns
    tty: true
    labels:
      io.rancher.container.pull_image: always
      io.rancher.scheduler.global: 'true'
```
rancher-compose.yml:
```yaml
version: '2'
services:
  logstash:
    start_on_create: true
```
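The logstash-pipeline volume starts out empty, and Logstash needs at least one pipeline config in it. A minimal sketch (the index naming is an assumption) that listens on the Beats port 5044, which the load balancer forwards later:

```
# /opt/nfs/logstash-pipeline/logstash.conf (sketch)
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch-client:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```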
Create Kibana Stack
First create the volumes needed by the Kibana service: kibana-config and kibana-plugins.
docker-compose.yml:
```yaml
version: '2'
volumes:
  kibana-config:
    external: true
    driver: rancher-nfs
  kibana-plugins:
    external: true
    driver: rancher-nfs
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.0
    stdin_open: true
    external_links:
    - elasticsearch/elasticsearch-client:elasticsearch-client
    volumes:
    - kibana-config:/usr/share/kibana/config
    - kibana-plugins:/usr/share/kibana/plugins
    - /etc/localtime:/etc/localtime:ro
    tty: true
    labels:
      io.rancher.container.pull_image: always
      io.rancher.scheduler.global: 'true'
```
rancher-compose.yml:
```yaml
version: '2'
services:
  kibana:
    start_on_create: true
```
Remember to update Kibana's config: change the Elasticsearch host from elasticsearch to elasticsearch-client.
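Concretely, the relevant lines in /opt/nfs/kibana-config/kibana.yml end up looking roughly like this (a sketch; only the host value is prescribed by the setup above):

```yaml
server.name: kibana
server.host: "0.0.0.0"
# point Kibana at the client node service instead of the default "elasticsearch" host
elasticsearch.url: "http://elasticsearch-client:9200"
```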
Create Load Balancer Stack
docker-compose.yml:
```yaml
version: '2'
services:
  loadbalance-elk:
    image: rancher/lb-service-haproxy:v0.7.6
    ports:
    - 9200:9200/tcp
    - 80:80/tcp
    - 5044:5044/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'
      io.rancher.scheduler.global: 'true'
```
rancher-compose.yml:
```yaml
version: '2'
services:
  loadbalance-elk:
    start_on_create: true
    lb_config:
      certs: []
      port_rules:
      - hostname: elasticsearch.example.com
        priority: 1
        protocol: http
        service: elasticsearch/elasticsearch-client
        source_port: 9200
        target_port: 9200
      - hostname: kibana.example.com
        priority: 2
        protocol: http
        service: kibana/kibana
        source_port: 80
        target_port: 5601
      - hostname: logstash.example.com
        priority: 3
        protocol: tcp
        service: logstash/logstash
        source_port: 5044
        target_port: 5044
    health_check:
      healthy_threshold: 2
      response_timeout: 2000
      port: 42
      unhealthy_threshold: 3
      initializing_timeout: 60000
      interval: 2000
      reinitializing_timeout: 60000
```
After the load balancer has started, add an entry mapping the load balancer's IP to kibana.example.com in your hosts file so Kibana can be reached in a browser.
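For example, assuming the load balancer answers on 10.0.0.50 (a placeholder address):

```shell
# Map the published hostnames to the load balancer IP (10.0.0.50 is a placeholder).
# Run as root, or prefix the append with sudo.
LB_IP=10.0.0.50
echo "$LB_IP kibana.example.com elasticsearch.example.com logstash.example.com" >> /etc/hosts
grep kibana.example.com /etc/hosts
```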
Add Searchguard to Elastic Stack
Install SG Plugin for Elasticsearch
Download the Search Guard version matching your Elasticsearch version (6.6.0-24.3 for the 6.6.0 images used here).
```shell
wget https://maven.search-guard.com/search-guard-release/com/floragunn/search-guard-6/6.6.0-24.3/search-guard-6-6.6.0-24.3.zip
```
Get into the Docker container and install the Search Guard plugin.
```shell
docker exec -it 831ab0eec415 /bin/sh
cd /usr/share/elasticsearch/bin
./elasticsearch-plugin install -b file:///usr/share/elasticsearch/search-guard-6-6.6.0-24.3.zip
ls ../plugins/search-guard-6/
```
Repeat the step above on every Elasticsearch data node. The master and client nodes only need it done once each, because their files live on NFS and are shared.
Generate Certificates via TLS Tool
Download search-guard-tlstool-1.6.zip
.
Create config file for certs.
```yaml
ca:
  root:
    dn: CN=root.ca.example.com,OU=CA,O=example com\, Inc.,DC=example,DC=com
    keysize: 2048
    validityDays: 3650
    pkPassword: xxxxx
    file: root-ca.pem
  intermediate:
    dn: CN=signing.ca.example.com,OU=CA,O=example com\, Inc.,DC=example,DC=com
    keysize: 2048
    validityDays: 3650
    pkPassword: xxxxx
defaults:
  validityDays: 3650
  pkPassword: auto
  generatedPasswordLength: 12
  httpsEnabled: true
nodes:
  - name: node_client
    dn: CN=node_client.example.com,OU=Ops,O=example com\, Inc.,DC=example,DC=com
  - name: node_master
    dn: CN=node_master.example.com,OU=Ops,O=example com\, Inc.,DC=example,DC=com
  - name: node_data_01
    dn: CN=node_data_01.example.com,OU=Ops,O=example com\, Inc.,DC=example,DC=com
  - name: node_data_02
    dn: CN=node_data_02.example.com,OU=Ops,O=example com\, Inc.,DC=example,DC=com
  - name: node_data_03
    dn: CN=node_data_03.example.com,OU=Ops,O=example com\, Inc.,DC=example,DC=com
clients:
  - name: spock
    dn: CN=spock.example.com,OU=Ops,O=example com\, Inc.,DC=example,DC=com
  - name: kirk
    dn: CN=kirk.example.com,OU=Ops,O=example com\, Inc.,DC=example,DC=cloud
    admin: true
```
Generate the certificates.
```shell
# yum install java
./sgtlstool.sh -c ../config/sg_az-test_config.yml -ca -crt
```
The generated certificate files are in the out directory.
Configure Elasticsearch to Enable Searchguard
Copy the keys generated in the step above to each node's config/keys directory.
```shell
cp /opt/nfs/search-guard-tlstool/tools/out/* /opt/nfs/elasticsearch_master/config/keys/
```
The Search Guard config snippets generated for Elasticsearch are in /opt/nfs/search-guard-tlstool/tools/out/node_*_elasticsearch_config_snippet.yml; paste each into the matching node's elasticsearch.yml. Remember to change the key paths to their actual locations. Also, comment out the searchguard.ssl.http.* settings and add searchguard.ssl.http.enabled: false and xpack.security.enabled: false to elasticsearch.yml, since these settings conflict with the bundled X-Pack plugin.
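Put together, the Search Guard section of a node's elasticsearch.yml ends up looking roughly like this (a sketch for node_master; the file names follow the TLS tool output and the admin DN matches the kirk client certificate, the password is a placeholder):

```yaml
searchguard.ssl.transport.pemcert_filepath: keys/node_master.pem
searchguard.ssl.transport.pemkey_filepath: keys/node_master.key
searchguard.ssl.transport.pemkey_password: xxxxx
searchguard.ssl.transport.pemtrustedcas_filepath: keys/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
# HTTP-layer TLS and X-Pack security disabled, as described above:
searchguard.ssl.http.enabled: false
xpack.security.enabled: false
searchguard.nodes_dn:
- CN=node_*.example.com,OU=Ops,O=example com\, Inc.,DC=example,DC=com
searchguard.authcz.admin_dn:
- CN=kirk.example.com,OU=Ops,O=example com\, Inc.,DC=example,DC=cloud
```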
Restart all elasticsearch nodes.
Initialize the Search Guard Plugin
```shell
# get into data-node-1 (any node works)
docker exec -it 7521ef6ca05a /bin/sh
cd /usr/share/elasticsearch/plugins/search-guard-6/sgconfig
# the key password can be found in client-certificates.readme
sh ../tools/sgadmin.sh -cacert ../../../config/keys/root-ca.pem -cert ../../../config/keys/kirk.pem -key ../../../config/keys/kirk.key -keypass xxxxx -nhnv -icl
```
Add Auth Info to the Kibana & Logstash Config Files
Edit /opt/nfs/kibana-config/kibana.yml and add elasticsearch.username & elasticsearch.password.
Edit /opt/nfs/logstash-config/logstash.yml and add xpack.monitoring.elasticsearch.username & xpack.monitoring.elasticsearch.password.
Install SG Plugin for Kibana
Download the Search Guard Kibana plugin version matching your Kibana version (6.6.0-18.4 for the 6.6.0 image used here).
```shell
wget https://maven.search-guard.com/search-guard-kibana-plugin-release/com/floragunn/search-guard-kibana-plugin/6.6.0-18.4/search-guard-kibana-plugin-6.6.0-18.4.zip
```
Get into the Docker container and install the plugin.
```shell
docker exec -it 5de0feff646b /bin/sh
bin/kibana-plugin install file:///usr/share/kibana/config/search-guard-kibana-plugin-6.6.0-18.4.zip
```
After the installation finishes, restart the Kibana stack. On restart, Kibana runs an optimization pass that takes some time; wait for it to complete. During this process Kibana may use noticeably more memory than usual.