Wednesday, 17 June 2015

Elasticsearch and mappings


I was testing out Kibana for a specific use case and wanted to geocode data. This was not "as easy" as 1..2..3, but once you know the little tricks / prerequisites a working demo can be configured within minutes.

The flow of events was as follows: device syslog -> Logstash (grok + geoip filters) -> Elasticsearch -> Kibana.

Some observations, and perhaps topics for future posts:
  • Know your source data and what you want to achieve
  • Learn grok, and use a grok debugger to test your patterns
  • Standardise your message formats
  • An elasticsearch cluster really needs 3 nodes (to avoid the split-brain issue)
  • Schema-less is not really schema-less, and "mapping" fields takes some practice to get used to
  • Geo data for visualization in Kibana needs to be mapped (the type set) to "geo_point", as dynamic mapping classifies it as a string

The focus of this post is on creating the correct mapping for map visualizations.

Logstash:

Install the GeoIP database (the free MaxMind GeoLite City database is fine for a proof of concept):

cd /etc/logstash 
sudo curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz" 
sudo gunzip GeoLiteCity.dat.gz

This file will be used as the database value of the geoip filter in the logstash configuration file (/etc/logstash/conf.d/*).

Filter the input stream with a grok filter and patterns, and make sure to assign the IP address value you want to map to a field.

Specify that field as the source in the geoip filter.

Sample filter configuration:

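As a minimal sketch of what such a filter can look like (the grok pattern below is only a placeholder that grabs the first IP address in the message into a field called src_ip, an assumed name; build and verify a proper pattern for your own message format with a grok debugger):

filter {
  grok {
    # Placeholder pattern: capture the first IP address in the message into "src_ip"
    match => { "message" => "%{IP:src_ip}" }
  }
  geoip {
    # The field that holds the IP address to look up
    source => "src_ip"
    # The MaxMind database downloaded above
    database => "/etc/logstash/GeoLiteCity.dat"
  }
}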

Then send the data to the elasticsearch host or cluster, and the correct index, in the output section.
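As an illustration (the host and index name here are assumptions, adjust to your own setup), the output section for Logstash 1.5 could look something like this:

output {
  elasticsearch {
    # Assumed host and index name; if index is omitted it defaults to logstash-%{+YYYY.MM.dd}
    protocol => "http"
    host     => "localhost"
    index    => "tester"
  }
}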

Test your logstash configuration with:

$ sudo /opt/logstash/bin/logstash --configtest --config /etc/logstash/conf.d/01-src_ip-mikrotik.conf

If it reports "Configuration OK", all is well and you are good to go.

Elasticsearch:


Download and install elasticsearch:
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.6.0.deb 
sudo dpkg -i elasticsearch-1.6.0.deb

Configure the settings in /etc/elasticsearch/elasticsearch.yml; for a proof of concept the minimum settings needed (assuming broadcast traffic is allowed between the elasticsearch nodes) are:
cluster.name: Africa                       # must be identical on all nodes in the cluster
node.name: "Elephant"                      # must be unique per node
path.data: /media/raid/elasticsearch
discovery.zen.minimum_master_nodes: 2      # (number of nodes / 2) + 1, guards against split-brain

and start up elasticsearch:
sudo service elasticsearch start
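To quickly confirm the node is up (assuming the default HTTP port 9200), query the REST API; it should return a small JSON document with the node name, cluster name and version:

curl http://localhost:9200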

Mapping: 

At last we get to mapping. Under normal circumstances you would:
  1. Configure and make sure your device is pushing the data (syslog in this instance) to logstash
  2. Start logstash
  3. Logstash will
    • ingest the data as configured in the "input"
    • apply the grok filters to assign values to fields
    • use the IP address field (src_ip in this example) and feed it to the geoip filter
    • the geoip filter will retrieve data from the GeoIP database and add the values to the geoip fields
    • send the results on as configured in the "output"
  4. Elasticsearch will receive the data and automagically create types for each field (this is called dynamic mapping) as soon as it receives the first message.
  5. Search and do whatever you want to do at this stage (install and open Kibana, add the index pattern and analyse ...)
However the following holds true:
You can ONLY specify the mapping for a type when you first create an index.
New fields can be added after the index was created, but existing ones cannot be changed.

A good source to read here

Now, to create a field called location (containing the LON & LAT coordinates) with a type of geo_point to use in Kibana for map visualization, we need to do the following, in this order:

  1. STOP all sources that push data to elasticsearch for the specific index. (in our example: stop logstash)
  2. DELETE the index (Remember we can't update existing field types)
  3. MAP the new index (PUT)
  4. Confirm the mapping (GET)
  5. Start logstash and feed elasticsearch data
  6. Enjoy ....
You should now have a running elasticsearch index with the correct field mapped as geo_point type that you can use in Kibana to create a map.

I use a Firefox add-on called RESTClient to send the rest command to elasticsearch.

Sample commands to perform the above steps (using tester as the index name):

To delete the index:

DELETE /tester

To create the new mapping:

PUT /tester

with the source of the mapping (Body):


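As a minimal sketch (assuming the geoip filter writes the combined coordinates into a geoip.location field, which is its default behaviour), the _default_ mapping below applies to every document type in the index and only pins geoip.location to geo_point; all other fields are still mapped dynamically:

{
  "mappings": {
    "_default_": {
      "properties": {
        "geoip": {
          "type": "object",
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  }
}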


To view / confirm the new mapping:
 
GET /tester/_mapping
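If you prefer the command line over RESTClient, the same calls can be made with curl (assuming elasticsearch listens on localhost:9200 and the mapping body above is saved to a file called mapping.json):

curl -XDELETE 'http://localhost:9200/tester'
curl -XPUT 'http://localhost:9200/tester' -d @mapping.json
curl -XGET 'http://localhost:9200/tester/_mapping?pretty'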

Till a next time ....


Saturday, 13 June 2015

Kubuntu DNS settings

It seems DNS settings are no longer stored in resolv.conf, but they can still be extracted via the NetworkManager CLI.


nmcli dev show [device] e.g. nmcli dev show wlan0


This will return information including:

IP4.DNS[1]:
IP4.DOMAIN[1]:

Therefore a quick awk can return the value needed:


nmcli dev show wlan0 | grep IP4.DNS | awk -F: '{ print $2 }' | awk '{ print $1 }'
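The same value can also be extracted with a single awk (a small simplification of the pipeline above, still assuming wlan0 as the device):

nmcli dev show wlan0 | awk '/IP4.DNS/ { print $2 }'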



Monday, 8 June 2015

Graylog2 not starting

I had an issue recently where I had to hard reset a server; afterwards Graylog2 would not start up.
Upon investigation I found that MongoDB was not starting, due to a .lock file still being present (it was not cleanly removed on reset).

To remove the lock:
sudo rm /var/lib/mongodb/mongod.lock

and then start mongodb with


sudo service mongodb start

If it is allocated a process id (pid) you know it is up and running; this can be further confirmed with:
sudo service mongodb status