Sample Kibana visualizations showing the results of the DNS logs pipeline described in this blog post tutorial.
Introduction
Recently I’ve been playing around with Pi-hole, an increasingly popular network adblocker that is designed to run on a Raspberry Pi. Pi-hole functions as your network’s DNS server, allowing it to block ad domains, malicious domains, and any other domains (or TLD wildcards) that you add to its blocklists -- effectively turning it into an open source, lightweight DNS sinkhole. This blocking occurs at the network level, meaning blocked resources never even reach your endpoint’s browser. Along with caching, this can improve page load performance and block ads that are difficult to block client-side, such as in-app ads on Android or iOS devices.
Pi-hole also logs each DNS event, including domain resolutions and blocks. DNS logs are a gold mine for network defenders, yet they are sadly often overlooked. Examples of malicious network traffic that can be identified in DNS logs include command and control (C2) traffic from a variety of malware, including ransomware; malicious ads and redirects; exploit kits; phishing; typosquatting attacks; DNS hijacking; denial of service (DoS) attacks; and DNS tunneling.
While BIND and Windows DNS servers are perhaps more popular DNS resolver implementations, Pi-hole uses the very capable and lightweight dnsmasq as its DNS server. And while Pi-hole includes a nice web-based admin interface, I started to experiment with shipping its dnsmasq logs to the Elastic (AKA ELK) stack for security monitoring and threat hunting purposes. In the end, I quickly prototyped a Pi-hole based DNS sinkhole deployment, DNS log pipeline, and accompanying DNS log monitoring system thanks to Pi-hole’s dnsmasq implementation, the ELK (Elasticsearch, Logstash, and Kibana) stack, and Beats. This project is still a work in progress in my lab, but I thought I would share what I’ve learned so far. The steps are not difficult, but this guide assumes you have at least a basic familiarity with Linux commands, DNS logs, and the ELK stack.
Pi-hole
Pi-hole is a DNS server / network adblocker / DNS sinkhole designed to run on minimal hardware, including the Raspberry Pi. I installed Pi-hole in an Ubuntu 16.04.3 Server VM. Typically Pi-hole runs fine with only 1 CPU core and 512 MB of RAM, though I allocated more to account for log shipping overhead. Pi-hole is suitable for SOHO and SMB networks, with reports of success in networks containing hundreds of endpoints.
Pi-hole installation and configuration are well documented elsewhere, so I won’t dwell on the details here. You can actually install Pi-hole with a one-line command (curl -sSL https://install.pi-hole.net | bash), though of course it is always good security practice to review the script before executing it. Running the install script walks you through the initial setup, where you can assign a static IP address to the Pi-hole server, choose your upstream DNS resolution service (I recommend a security- and privacy-oriented option such as OpenDNS or Quad9), and enable the web admin interface.
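If you would rather not pipe a remote script straight into bash, a common alternative is to download the installer, review it, and then run it. A minimal sketch (the local filename is just an example):

curl -sSL https://install.pi-hole.net -o basic-install.sh
less basic-install.sh     # review the script before running it
sudo bash basic-install.sh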
Credit: https://pi-hole.net/
The admin password is displayed at the end of the install script, though you can always change it later. Once installed, you can review the excellent Pi-hole dashboard and take care of most administrative tasks by logging into its web interface at:
http://pi.hole/admin/.
The Pi-hole web interface and dashboard. You can also SSH into its server to manually edit config files and manage other changes.
Once Pi-hole is up and running, you need to point your endpoints to your Pi-hole server’s IP address (which should be static) so that they use the Pi-hole for DNS resolution going forward. You can set this manually per device, or configure most routers to hand out the Pi-hole as the DNS server. Beyond functioning as your network’s DNS server, Pi-hole (again thanks to dnsmasq) can also act as a DHCP server. There are various pros and cons to these deployment options (such as how much per-endpoint IP visibility you retain), so read Pi-hole’s relevant documentation for more details. By default, Pi-hole leverages several ad blocklists, though you are free to add your own lists, domains, or wildcards via the web interface or command line.
Example of manually adding the zip gTLD as a wildcard to Pi-hole’s blocklists via the web interface.
DNS Logs Pipeline
By default, Pi-hole stores its dnsmasq logs at /var/log/pihole.log. Beyond glancing at the Dashboard metrics and top lists, there are several ways to manually review these logs, including the “Query Log” area of the web interface, “Tail pihole.log” under Tools in the web interface, and SSH access to the underlying server running Pi-hole (e.g., tail -f /var/log/pihole.log). Based on the Dashboard metrics, I know that on most days my Pi-hole lab deployment blocks an average of about 10% of total domain resolutions, and Windows 10 telemetry subdomains are often the most blocked DNS requests (which is great, because disabling that telemetry on the endpoints themselves ranges from extremely difficult to impossible). While such information is useful, we can ship these valuable DNS logs to a centralized location for log enrichment and monitoring purposes, including security analytics and threat hunting.
The raw dnsmasq logs are illustrated in the following snippet from pihole.log:
Feb 2 11:09:26 dnsmasq[19413]: query[A] ocos-office365-s2s.msedge.net from 192.168.2.48
Feb 2 11:09:26 dnsmasq[19413]: forwarded ocos-office365-s2s.msedge.net to 9.9.9.9
Feb 2 11:09:26 dnsmasq[19413]: query[A] config.edge.skype.com from 192.168.2.148
Feb 2 11:09:26 dnsmasq[19413]: forwarded config.edge.skype.com to 9.9.9.9
Feb 2 11:09:26 dnsmasq[19413]: query[A] client-office365-tas.msedge.net from 192.168.2.48
Feb 2 11:09:26 dnsmasq[19413]: forwarded client-office365-tas.msedge.net to 9.9.9.9
Feb 2 11:09:26 dnsmasq[19413]: reply ocos-office365-s2s.msedge.net is <CNAME>
Feb 2 11:09:26 dnsmasq[19413]: reply ocos-office365-s2s-msedge-net.e-0009.e-msedge.net is <CNAME>
Feb 2 11:09:26 dnsmasq[19413]: reply e-0009.e-msedge.net is 13.107.5.88
Feb 2 11:09:26 dnsmasq[19413]: reply config.edge.skype.com is <CNAME>
Feb 2 11:09:26 dnsmasq[19413]: reply s-0001.s-msedge.net is 13.107.3.128
Feb 2 11:09:26 dnsmasq[19413]: reply client-office365-tas.msedge.net is <CNAME>
Feb 2 11:09:26 dnsmasq[19413]: reply afdo-tas-offload.trafficmanager.net is <CNAME>
Feb 2 11:09:26 dnsmasq[19413]: reply vip5.afdorigin-prod-ln02.afdogw.com is 51.140.98.69
Feb 2 11:09:28 dnsmasq[19413]: query[PTR] 69.98.140.51.in-addr.arpa from 192.168.2.48
Feb 2 11:09:28 dnsmasq[19413]: forwarded 69.98.140.51.in-addr.arpa to 9.9.9.9
Feb 2 11:09:28 dnsmasq[19413]: reply 51.140.98.69 is NXDOMAIN
You can see from the sample logs that one of my lab machines is using IP 192.168.2.48, that I am using the relatively new Quad9 (9.9.9.9, get it?) as my upstream DNS provider in Pi-hole, and that there are multiple DNS requests that are probably related to Microsoft software.
These aren’t the prettiest logs I’ve ever seen -- essentially there are multiple lines for each DNS event -- but they get the job done and have a standardized syslog-style timestamp on each line, which we’ll need for our log shipping pipeline to the ELK stack.
At a high level, this represents the log shipment pipeline I set out to prototype:
Endpoints (client DNS requests) > Pi-hole (DNS server/sinkhole) > Filebeat (log shipper) > Logstash (log shaper) > Elasticsearch (log storage and indexing backend) and Kibana (log analysis frontend)
Essentially, the endpoints use Pi-hole as their DNS server. Pi-hole logs dnsmasq events, including domain resolutions and blocklist matches, to a local log file. I opted to run Filebeat, one of Elastic’s lightweight log shippers, directly on the Pi-hole server to ship those dnsmasq logs in real time to a Logstash server. I created some custom configs for Logstash to implement basic field mappings, set an accurate timestamp, and enrich the logs with GeoIP location lookups for the external IP addresses of resolved domains. Logstash then ships those processed logs to a separate Elasticsearch server for storage and indexing, with Kibana serving as the frontend on the same server for manual searches, visualizations, and dashboards.
As an aside, one reason I found this project interesting is that there seems to be plentiful Internet chatter on working with BIND and Microsoft DNS logs, but not nearly as much about dnsmasq logs. That said, although the DNS log pipeline described here is designed for Pi-hole’s dnsmasq logs, it can easily be adapted for other types of DNS logs such as BIND and Microsoft.
Back to business. Let’s walk through each of the major parts of the DNS logs pipeline in more detail. This guide will not cover the installation and basic configuration of the ELK stack itself, as that is well documented elsewhere. For my testing, I installed Logstash on an Ubuntu 16.04.3 Server VM, and Elasticsearch and Kibana on a separate Ubuntu Server VM. My main advice for deploying ELK is to ensure you allocate plenty of RAM. Ensure that your Logstash, Elasticsearch, and Kibana servers are all operational and that you know their static IPs before proceeding. For this project, I am using the 6.1.x versions of the ELK stack components.
Filebeat
First, we need to install Filebeat on the Pi-hole server. Note that while Pi-hole itself has minimal system requirements (it typically runs fine with 1 core and 512 MB of RAM), running Filebeat on the same server will add some performance overhead. In my case, I erred on the side of caution and allocated 2 cores and 2 GB of RAM to the Pi-hole server to account for the Filebeat addition, but even that is likely overkill for a small deployment. CPU usage is minuscule and total RAM utilization is typically under 10% on my Pi-hole server.
Typical load on my Pi-hole server; it’s not breaking a sweat.
Since I am using Ubuntu Server, I can manually wget and install the 64-bit DEB package, or follow Elastic’s instructions for installing from the official repo. The process is the same for other Debian-based distros.
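For example, a manual DEB install might look like the following (the version number is just an example; grab the current 6.1.x release from Elastic’s downloads page):

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.1.1-amd64.deb
sudo dpkg -i filebeat-6.1.1-amd64.deb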
Once Filebeat is installed, I need to customize its filebeat.yml config file to ship Pi-hole’s logs to my Logstash server. You can either keep the default Filebeat prospector, which watches /var/log/*.log (all log files in that path), or specify /var/log/pihole.log to ship only Pi-hole’s dnsmasq logs.
Filebeat prospectors snippet from filebeat.yml
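For reference, a minimal prospector section along those lines might look like this (a sketch for Filebeat 6.1.x, which defines prospectors under filebeat.prospectors):

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/pihole.log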
We also need to point Filebeat to the Logstash server’s IP. I’m sticking with Logstash’s default port 5044.
Replace hosts value with your Logstash server’s static IP.
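In filebeat.yml, that means enabling the Logstash output and commenting out the Elasticsearch output that is enabled by default. Something like the following sketch, where the IP address is only a placeholder for your own Logstash server:

#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["192.168.2.51:5044"]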
Since I’m using Ubuntu 16.04.3 Server as the underlying OS for everything, the proper command to start Filebeat manually is sudo systemctl start filebeat. Filebeat will immediately start shipping the specified logs to Logstash. You can also configure Filebeat (as well as the ELK stack components) to start automatically on boot.
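On a systemd-based distro like Ubuntu 16.04, enabling the service at boot and confirming that it is running looks like this:

sudo systemctl enable filebeat
sudo systemctl status filebeat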
Logstash
While Filebeat requires minimal configuration to get started, Logstash configuration is much more involved. For my DNS logs pipeline, I installed Logstash on a dedicated Ubuntu Server VM. I named my custom config file dnsmasq.conf and ended up writing my own grok pattern filters to match on interesting dnsmasq logs in order to properly process and enrich them.
First, we specify the Logstash input in our custom config file, which is simply listening on its default port 5044 for logs shipped from Filebeat:
input {
  beats {
    port => 5044
    type => "logs"
  }
}
Then we need to create a custom grok filter to match on the specific dnsmasq logs we are interested in. This has been the most time-consuming part of the project, as dnsmasq logs take multiple formats and a single DNS event is essentially broken across multiple lines. This is where I first learned about https://grokconstructor.appspot.com/, an extremely useful web-based tool for building and testing grok regular expression (regex) patterns. Through trial and error, I got a few basic matches working for DNS query and reply logs. There is clearly still work to be done; for example, a blacklisted domain and the originating client IP are logged on separate lines by dnsmasq (they are effectively separate logs), so addressing that remains on my to-do list.
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => [
      "^%{logdate:LOGDATE} dnsmasq\[[\d]+\]\: query\[[\w]+\] %{domain:DOMAIN} from %{clientip:CLIENTIP}",
      "^%{logdate:LOGDATE} dnsmasq\[[\d]+\]\: reply %{domain:DOMAIN} is %{ip:IP}",
      "^%{logdate:LOGDATE} dnsmasq\[[\d]+\]\: %{blocklist:BLOCKLIST} %{domain:DOMAIN} is %{ip:IP}"
    ] }
  }
}
The above example grok patterns match on three distinct types of dnsmasq logs: initial DNS queries, replies, and blacklisted requests.
You can see in my filter that I also specify a patterns_dir. In order to use custom patterns in a grok match, you must list them in a patterns file located in the specified directory. I named each custom pattern after its corresponding field, with the pattern names in lowercase and the captured field names in ALL CAPS.
The contents of my custom patterns file, which I simply saved to ./patterns/dnsmasq:
logdate [\w]{3}\s[\s\d]{2}\s\d\d\:\d\d\:\d\d
blocklist [\/\w\.]+
domain [\w\.\-]+
clientip \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}
ip \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}
Note that I have not finished writing and perfecting grok patterns for all possible dnsmasq log types and fields. There are a few types of dnsmasq logs that I still need to address, and I’m sure refinements are needed for the somewhat crude-but-effective patterns I did write, to account for things like odd characters in domain names. That said, the three patterns I did write and test are sufficient to get started with a DNS log pipeline aimed at interesting log analysis in ELK.
One issue I quickly ran into during testing was that the @timestamp field did not match my LOGDATE field once the logs arrived in Elasticsearch for indexing. LOGDATE represents the original timestamp of the dnsmasq event, while the @timestamp added in Elasticsearch represents the time the log was successfully shipped into Elasticsearch, which typically lags slightly behind the LOGDATE. Fortunately, Logstash’s date filter plugin makes it easy to fix this as follows:
date {
  match => [ "LOGDATE", "MMM dd HH:mm:ss", "MMM d HH:mm:ss" ]
}
Essentially the dnsmasq logs have two possible representations of their syslog-style timestamp field, which I have named LOGDATE: two-digit days and single-digit days preceded by an extra space. The date filter above normalizes this so that the @timestamp field exactly matches the original LOGDATE field (with the current year assumed, since the syslog-style timestamp does not include a year).
Pi-hole also logs the IP addresses that resolved domains point to, so we want to enrich our logs with GeoIP location data. Logstash’s geoip filter plugin makes this remarkably easy:
geoip {
  source => "IP"
}
What does this accomplish? Whenever this filter sees an IP address for a resolved domain, it enriches the document with GeoIP location data by adding various fields (drawing from the bundled MaxMind GeoLite2 database) like so:
As seen in Kibana, GeoIP data added to a document based on its IP field.
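For reference, the added geoip object looks something like the following in Logstash’s rubydebug output (the field names are the geoip filter’s defaults; the IP is taken from the sample logs above, and the location values are purely illustrative):

"geoip" => {
    "ip" => "51.140.98.69",
    "country_code2" => "GB",
    "country_name" => "United Kingdom",
    "city_name" => "London",
    "timezone" => "Europe/London",
    "latitude" => 51.5,
    "longitude" => -0.12,
    "location" => {
        "lon" => -0.12,
        "lat" => 51.5
    }
}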
With this GeoIP data, we will be able to run searches and build Kibana visualizations, such as maps based on where IPs are geolocated.
Finally, we need to configure Logstash to send these freshly shaped and enriched logs to the Elasticsearch server. In my sample Logstash config, it looks like this (be sure to specify your own IP and index naming convention preference):
output {
  elasticsearch {
    hosts => ["192.168.2.52:9200"]
    index => "logstash-dnsmasq-%{+YYYY.MM.dd}"
  }
}
When I’m debugging my config, I often find it useful to also enable screen output via Logstash’s stdout plugin.
output {
  elasticsearch {
    #protocol => "http"
    hosts => ["192.168.2.52:9200"]
    index => "logstash-dnsmasq-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
Once the config file is ready, run Logstash and specify that it load our config file:
sudo bin/logstash -f dnsmasq.conf
Don’t be discouraged if Logstash throws an error related to your config file; read the error message carefully and fix your config accordingly. An errant or missing brace character or other typo is usually to blame in my experience.
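Logstash can also validate a config file without starting the pipeline, which is a quick way to catch that kind of typo:

sudo bin/logstash -f dnsmasq.conf --config.test_and_exit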
Once Logstash is running, you should see something like the following, indicating that it is successfully listening on its default port for logs:
Logstash running with its default listener on port 5044.
And if you enabled the stdout output, processed logs will be printed to the screen in real time. This is often helpful for debugging problems with your grok filter or other parts of your overall log pipeline.
Logstash stdout output confirms success! Logs are being shaped, enriched, and shipped as expected.
Before getting to this point, you should have Elasticsearch and Kibana installed and running on a separate server with plenty of RAM allocated. To ensure that our log pipeline is working properly from end to end, query Elasticsearch from the command line or web browser to list the relevant Logstash indices:
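For example, from the command line (using the Elasticsearch IP from my Logstash output config; substitute your own):

curl -XGET 'http://192.168.2.52:9200/_cat/indices/logstash-dnsmasq-*?v'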
Logstash indices in Elasticsearch.
Once that is done, you can finish setting up your index in Kibana and start reviewing logs. In Kibana, go to Management > Index Patterns and finish creating a new index pattern corresponding to the index naming convention you configured in Logstash.
Create an index pattern for your Logstash logs.
Be sure to use the @timestamp field as the “Time Filter field name”, click “Create index pattern” and you are all set to start working with the logs in Kibana.
Kibana Discover, showing that our DNS log pipeline is working as expected. Logs have the appropriate field mappings and GeoIP fields per our Logstash config.
Conclusion
In my next post, I’ll share some sample Kibana searches, visualizations, and dashboards that make good use of our new and improved Pi-hole DNS logs for security monitoring and analytics. I’ll also share additional lessons learned and recommended next steps for this project. In the meantime, you can find my sample configs on GitHub, with the caveat that they should still be considered mostly in beta stage at this point.
Polito, Inc. offers a wide range of security consulting services including threat hunting, penetration testing, vulnerability assessments, incident response, digital forensics, and more. If your business or your clients have any cyber security needs, contact our experts and experience what Masterful Cyber Security is all about.
Phone: 571-969-7039
E-mail: info@politoinc.com
Website: politoinc.com