I was asked to build a box with all the monitoring tools required. Chef, Logstash, Ganglia and Monit were selected. Here’s a core dump.
In this post I’m going to describe how to set up a box containing the following elements:

- Opsworks Chef Server
- Logstash indexer, with Redis as an incoming queue
- Kibana
- Ganglia collector
- Monit, with the log shipped to logstash
- M/Monit
I have this nasty habit - this whole thing is running as root. Well, Chef Server isn’t, it creates its own user and group. But everything else is…
All of those components will sit behind nginx SSL. Components that don’t provide authentication will be secured with HTTP basic auth.
We are going to start with Chef Server. The whole process is nicely explained in the Chef documentation, but it is scattered across a number of pages, and some condensed knowledge is always nice to have.
§Chef Server
Start by creating an A record for chef.[your domain name] pointing to your box. Then SSH to the box.
Set some initial values; the password in the first line doesn’t really matter, as it has to be changed upon first login:
CHEF_WEBUI_ADMIN_PASSWORD=somerandompassword
CHEF_RABBITMQ_CONSUMER_PASSWORD=[use-some-strong-password-here]
CHEF_WEBUI_HOST=http://chef.[your domain name]:4000
set -e -x
export HOME="/root"
export DEBIAN_FRONTEND=noninteractive
We have to install debconf-utils to be able to provide the settings during installation. And git, we will need it later…
apt-get install -q -y debconf-utils git-core
Follow the steps from Chef docs:
echo "deb http://apt.opscode.com/ `lsb_release -cs`-0.10 main" | sudo tee /etc/apt/sources.list.d/opscode.list
mkdir -p /etc/apt/trusted.gpg.d
gpg --keyserver keys.gnupg.net --recv-keys 83EF826A
gpg --export packages@opscode.com | sudo tee /etc/apt/trusted.gpg.d/opscode-keyring.gpg > /dev/null
apt-get update
apt-get install -q -y opscode-keyring
apt-get -y upgrade
This is where we add the additional steps. Set these selections so the installation can run noninteractively:
echo chef-server-webui chef-server-webui/admin_password password $CHEF_WEBUI_ADMIN_PASSWORD | debconf-set-selections
echo chef-solr chef-solr/amqp_password password $CHEF_RABBITMQ_CONSUMER_PASSWORD | debconf-set-selections
echo chef chef/chef_server_url string $CHEF_WEBUI_HOST | debconf-set-selections
apt-get -q -y install chef chef-server
When this is done, and it can take a bit, you can go to http://[your server]:4040. Log in as admin using the $CHEF_WEBUI_ADMIN_PASSWORD password. You will have to change it upon first login.
§nginx
I am sure you would like to see Chef running, so we may as well set up nginx just now. Why not. If you have ports 4000 and 4040 opened for public access, close them. You won’t need them.
apt-get install -q -y nginx
Now we need a self-signed certificate so we can serve everything via SSL:
mkdir -p /tmp/certs
cd /tmp/certs
You will have to type the password a number of times in this step. Just make sure it is always the same.
openssl genrsa -des3 -out myssl.key 1024
openssl req -new -key myssl.key -out myssl.csr
cp myssl.key myssl.key.org
openssl rsa -in myssl.key.org -out myssl.key
openssl x509 -req -days 365 -in myssl.csr -signkey myssl.key -out myssl.crt
cp myssl.crt /etc/ssl/certs/
cp myssl.key /etc/ssl/private/
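If you’d rather skip the passphrase back-and-forth, the whole sequence can be collapsed into a single non-interactive command. A sketch (the -subj value is a placeholder; substitute your own domain):

```shell
# One-shot equivalent: -nodes skips the passphrase, -subj answers the
# questionnaire, and both output files land in /tmp/certs as before.
mkdir -p /tmp/certs
cd /tmp/certs
openssl req -x509 -nodes -days 365 -newkey rsa:1024 \
  -subj '/CN=chef.example.com' \
  -keyout myssl.key -out myssl.crt
# Eyeball the result before copying it under /etc/ssl:
openssl x509 -in myssl.crt -noout -subject -dates
```

Either way the certificate ends up self-signed, so browsers will complain; that’s expected here.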
Time to configure nginx:
touch /etc/nginx/sites-available/devops
ln -s /etc/nginx/sites-available/devops /etc/nginx/sites-enabled/devops
[vi|vim|nano|joe] /etc/nginx/sites-enabled/devops
Paste the following content:
upstream chef_api_local { server localhost:4000; }
upstream chef_webui_local { server localhost:4040; }

server {
    server_name chef.[your domain name];

    ssl on;
    ssl_certificate /etc/ssl/certs/myssl.crt;
    ssl_certificate_key /etc/ssl/private/myssl.key;
    listen 443;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    root /var/www;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;

    location / {
        # API request incoming
        if ( $http_x_ops_timestamp != "" ){
            proxy_pass http://chef_api_local;
            break;
        }
        # WebUI request incoming
        proxy_pass http://chef_webui_local;
    }
}
Save the file and execute:
/etc/init.d/nginx stop
/etc/init.d/nginx start
The reload command gives somewhat inconsistent results. It doesn’t like to work every single time.
Now you can simply go to https://chef.[your domain name].
§Kibana and logstash
Logstash will be used for log aggregation. It can be set up in standalone or distributed mode, but standalone isn’t really “aggregation”, is it?
In the distributed mode we are looking at two components: an indexer and a shipper. The indexer is the end point, somewhere where we see everything, or from where we ship to another indexer. It can be tricky, depending on the setup.
For now let’s just assume that we have “some shippers” and “an indexer”. The shipper is really easy, so we will focus on the indexer for now. Bear with me.
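The shipper side will get a proper treatment later, but just to make “really easy” concrete, here is a sketch of what a shipper config on a remote box could look like. The key, port and data_type values mirror the indexer config later in this post; the file path and the indexer address are placeholders:

```
input {
  file {
    # example path - point this at whatever log you want shipped
    path => "/var/log/monit.log"
    type => "monit-input"
  }
}
output {
  redis {
    # the indexer box - use an internal address if you have one
    host => "[your indexer address]"
    port => 10000
    data_type => "list"
    key => "monit_logs"
  }
}
```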
Create an A record for kibana.[your domain name].
Next, back on the server:
mkdir /opt/kibana
cd /opt/kibana
git clone --branch=kibana-ruby https://github.com/rashidkpc/Kibana.git .
gem install bundler
bundle install
Please note - Ruby with all additional dependencies was already installed for us by Chef. Now the manual stuff:
[vi|vim|nano|joe] /opt/kibana/KibanaConfig.rb
and change KibanaHost to 0.0.0.0. Save the file.
Now the logstash dependencies - ElasticSearch comes first:
mkdir -p /opt/elasticsearch
cd /tmp
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.20.5.tar.gz
tar xvf elasticsearch-0.20.5.tar.gz -C /opt/elasticsearch --strip 1
rm elasticsearch-0.20.5.tar.gz
and Redis:
cd /tmp
wget http://redis.googlecode.com/files/redis-2.6.10.tar.gz
tar xvf redis-2.6.10.tar.gz
cd redis-2.6.10/
mkdir -p /opt/redis
make PREFIX=/opt/redis install
cp redis.conf /opt/redis/redis.conf
Edit the /opt/redis/redis.conf file and change port to 10000, or to whatever isn’t already in use… (hint: it is at the top of the file). For whatever reason the default port 6379 is already used by something.
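The same edit, scripted. A sketch shown against a scratch copy (it assumes the stock redis.conf, where the directive reads port 6379; check yours first, then point the sed at /opt/redis/redis.conf):

```shell
# Demonstrate the port swap on a scratch copy of the relevant lines.
printf 'port 6379\ndaemonize no\n' > /tmp/redis.conf.sample
sed -i 's/^port 6379$/port 10000/' /tmp/redis.conf.sample
grep '^port' /tmp/redis.conf.sample
```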
In a moment, when we start Redis, it will become available to everyone. If this is a what the fuck moment for you, read the red header section right below.
§Redis password: depending on where you run this setup
If you run on EC2 and your security rules are based on security groups, I assume you know how to enable the port and you understand the restrictions around that solution. In this case you don’t have to set any password for Redis.
If you have nothing like this handy, you can simply enable Redis authentication, or use a firewall. If you decide to go for the password: edit redis.conf again, find the line starting with # requirepass, uncomment it and set a strong password. Be careful when choosing the password with Redis <= 2.6.10.
Make sure you also read this - Redis AUTH command.
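The reason for the caution: AUTH is checked very cheaply, so a short password can be brute-forced at enormous speed. Go long and random rather than clever. A sketch (openssl is already on the box from the certificate step):

```shell
# Generate a 64-hex-character password suitable for requirepass.
PASS=$(openssl rand -hex 32)
echo "requirepass $PASS"
# Paste the printed line into /opt/redis/redis.conf by hand,
# replacing the commented-out "# requirepass" example.
```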
§Upstart
We need 3 upstart
services for:
- Kibana
- ElasticSearch
- Redis
Ubuntu comes with upstart already installed; we simply need the files in /etc/init.
§elasticsearch.conf
description "ElasticSearch"
start on filesystem and net-device-up IFACE=eth0
stop on shutdown
respawn
script
sudo -u root /opt/elasticsearch/bin/elasticsearch
end script
§kibana.conf
description "kibana"
start on filesystem and net-device-up IFACE=eth0
stop on shutdown
respawn
chdir /opt/kibana
script
exec ruby kibana.rb
end script
§redis.conf
description "redis"
start on filesystem and net-device-up IFACE=eth0
stop on shutdown
respawn
script
/opt/redis/bin/redis-server /opt/redis/redis.conf
end script
You must chmod +x those files and start the services:
chmod +x /etc/init/elasticsearch.conf
chmod +x /etc/init/kibana.conf
chmod +x /etc/init/redis.conf
service elasticsearch start
service kibana start
service redis start
I don’t know why, but ElasticSearch must be started manually the first time. Just run:
/opt/elasticsearch/bin/elasticsearch
which will start it in the background.
By default Kibana runs on port 5601, but we want it to run over SSL. Edit /etc/nginx/sites-enabled/devops and add the following upstream at the top of the file:
upstream kibana_local { server localhost:5601; }
And this at the bottom:
server {
    server_name kibana.[your domain name];

    ssl on;
    ssl_certificate /etc/ssl/certs/myssl.crt;
    ssl_certificate_key /etc/ssl/private/myssl.key;
    listen 443;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    root /var/www;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;

    location / {
        auth_basic "Restricted - Kibana";
        auth_basic_user_file /etc/nginx/htpasswd;
        proxy_pass http://kibana_local;
    }
}
One thing to note over here is the use of basic authentication. We must create a user:
htpasswd -c -d /etc/nginx/htpasswd your-user-name
You will have to provide the password, 8 characters max (the -d flag uses old crypt hashes, which truncate anything longer). We will reuse the same htpasswd for ganglia a bit later.
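If the 8-character limit bothers you, nginx also accepts Apache MD5 (apr1) hashes, which openssl can produce without any extra packages. A sketch; your-user-name and the password are placeholders:

```shell
# Build an htpasswd line using the apr1 scheme, which does not truncate
# at 8 characters; nginx's auth_basic understands it.
ENTRY="your-user-name:$(openssl passwd -apr1 'a-much-longer-password')"
echo "$ENTRY"
# Append the printed line to /etc/nginx/htpasswd.
```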
Time to restart nginx again.
/etc/init.d/nginx reload
You should now be able to go to https://kibana.[your domain name].
§Logstash indexer
This is really straightforward. Our logstash will read from the local Redis. We will store all input/output configuration in a designated folder. By default our indexer will process incoming monit logs.
mkdir -p /opt/logstash/conf.d
cd /opt/logstash
wget https://logstash.objects.dreamhost.com/release/logstash-1.1.9-monolithic.jar
Save this file in /opt/logstash/conf.d/monit.conf
:
input {
    redis {
        # host defaults to 127.0.0.1 - we read from the local Redis
        port => 10000
        type => "monit-input"
        key => "monit_logs"
        data_type => "list"
        format => "json_event"
        # if you enabled requirepass earlier, also add:
        # password => "[your redis password]"
    }
}

output {
    elasticsearch { host => "127.0.0.1" }
}
And create an upstart service for it; save the file as /etc/init/logstash-indexer.conf:
description "logstash indexer"
start on filesystem and net-device-up IFACE=eth0
stop on shutdown
respawn
chdir /opt/logstash
script
java -jar /opt/logstash/logstash-1.1.9-monolithic.jar agent -v -f /opt/logstash/conf.d/
end script
Start the service:
chmod +x /etc/init/logstash-indexer.conf
service logstash-indexer start
§Ganglia collector (master)
Create an A record for ganglia.[your domain name]. Point it to the server.
We need just a few more packages on the box:
apt-get install -q -y ganglia-monitor gmetad ganglia-webfrontend php5-cgi
Create a www-data user and group:
mkdir -p /home/www-data
groupadd -g 3320 www-data
useradd -m -d /home/www-data -s /bin/bash -u 3320 -g 3320 www-data
chown www-data:www-data /home/www-data
chown -R www-data:www-data /usr/share/ganglia-webfrontend
Ganglia is written in PHP, and nginx needs FastCGI to process PHP, so we must create a FastCGI service. Save this file as /etc/init.d/nginx-fastcgi:
#!/bin/bash
BIND=127.0.0.1:9000
USER=www-data
PHP_FCGI_CHILDREN=15
PHP_FCGI_MAX_REQUESTS=1000

PHP_CGI=/usr/bin/php-cgi
PHP_CGI_NAME=`basename $PHP_CGI`
PHP_CGI_ARGS="- USER=$USER PATH=/usr/bin PHP_FCGI_CHILDREN=$PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS=$PHP_FCGI_MAX_REQUESTS $PHP_CGI -b $BIND"
RETVAL=0

start() {
    echo -n "Starting PHP FastCGI: "
    start-stop-daemon --quiet --start --background --chuid "$USER" --exec /usr/bin/env -- $PHP_CGI_ARGS
    RETVAL=$?
    echo "$PHP_CGI_NAME."
}

stop() {
    echo -n "Stopping PHP FastCGI: "
    killall -q -w -u $USER $PHP_CGI
    RETVAL=$?
    echo "$PHP_CGI_NAME."
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: nginx-fastcgi {start|stop|restart}"
        exit 1
        ;;
esac
exit $RETVAL
Execute:
chmod +x /etc/init.d/nginx-fastcgi
update-rc.d nginx-fastcgi defaults
/etc/init.d/nginx-fastcgi start
The final change in the /etc/nginx/sites-enabled/devops file. Add this at the bottom:
server {
    server_name ganglia.[your domain name];

    ssl on;
    ssl_certificate /etc/ssl/certs/myssl.crt;
    ssl_certificate_key /etc/ssl/private/myssl.key;
    listen 443;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    # serve the ganglia frontend itself
    root /usr/share/ganglia-webfrontend;
    index index.php;

    location / {
        auth_basic "Restricted - Ganglia";
        auth_basic_user_file /etc/nginx/htpasswd;
    }

    location ~ \.php$ {
        # repeat the auth here, or hitting /index.php directly bypasses it
        auth_basic "Restricted - Ganglia";
        auth_basic_user_file /etc/nginx/htpasswd;
        include /etc/nginx/fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /usr/share/ganglia-webfrontend$fastcgi_script_name;
    }
}
And restart nginx for the final time.
/etc/init.d/nginx reload
§M/Monit
Installing M/Monit is really easy.
mkdir -p /opt
cd /opt
wget http://mmonit.com/dist/mmonit-2.4-linux-x64.tar.gz
tar xvf mmonit-2.4-linux-x64.tar.gz
ln -s /opt/mmonit-2.4 /opt/mmonit
rm mmonit-2.4-linux-x64.tar.gz
Upstart job - save it as /etc/init/mmonit.conf:
description "mmonit"
start on filesystem and net-device-up IFACE=eth0
stop on shutdown
respawn
script
/opt/mmonit/bin/mmonit start
end script
pre-stop script
/opt/mmonit/bin/mmonit stop
end script
Make it executable and run:
chmod +x /etc/init/mmonit.conf
service mmonit start
Create an A record for mmonit.[your domain name] and update the nginx config. At the top of the file add:
upstream mmonit_local { server localhost:8080; }
And then, at the bottom:
server {
    server_name mmonit.[your domain name];

    ssl on;
    ssl_certificate /etc/ssl/certs/myssl.crt;
    ssl_certificate_key /etc/ssl/private/myssl.key;
    listen 443;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    root /var/www;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;

    location / {
        proxy_pass http://mmonit_local;
    }
}
Reload the nginx config:
/etc/init.d/nginx reload
§Wrap up
In this post we have covered setting up a collector box. We now have:

- https://chef.[your domain name] - runs the API and the WebUI; supports its own authentication and OpenID
- https://kibana.[your domain name] - web frontend for logstash, protected with basic authentication
- https://ganglia.[your domain name] - web frontend for ganglia, protected with basic authentication, using the same credentials as kibana
- https://mmonit.[your domain name] - web frontend for mmonit, protected with its own authentication

We have also limited the number of ports opened on the firewall / security group. Only these have to be opened:

- 22 (SSH)
- 443 (HTTPS)
- 8080 (HTTP) - M/Monit still redirects to 8080
- 10000 (Redis)
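If you manage the firewall by hand rather than through EC2 security groups, that list translates into a small rule set. A sketch as an iptables-restore file; the path is just an example, and the default-deny policy is my assumption, so adapt it to your distro’s preferred persistence mechanism:

```
# /etc/iptables.rules (example path)
# Load with: iptables-restore < /etc/iptables.rules
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p tcp --dport 8080 -j ACCEPT
-A INPUT -p tcp --dport 10000 -j ACCEPT
COMMIT
```

Ideally lock the Redis port down to your shippers’ addresses instead of the whole world.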
In the next post I’m going to cover setting up knife so we can start deploying the infrastructure with it. Our infrastructure is going to contain:

- monit recipe
- ganglia client recipe
- logstash shipper recipe

We will use those to bootstrap a core setup box.
§Useful links
Chef:
- Installing Chef Server on Ubuntu using Packages
- Chef Resources
- A Brief Chef Tutorial (From Concentrate)