
A guide to deploying a TIG Stack using containers.
Now that you have an ESP32 getting data into MQTT, how do you visualize it?
Meshtastic can be used as a simple mesh between devices, but it can also be used for tracking those devices through a Grafana dashboard. The trick is that Grafana doesn't work with MQTT very well, and MQTT itself is not designed for long-term storage of messages. It's kind of a database, but not a very good one: it's great at receiving lots of small messages into defined topics, but how do you visualize those messages and also retain them over time? That would be the TIG Stack. TIG stands for Telegraf, InfluxDB, and Grafana: Telegraf pulls the data out of MQTT, InfluxDB stores it, and Grafana visualizes it. I am not going to go into detail on how to deploy Docker or MQTT, but by following these instructions you can get a basic TIG Stack configured.
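If you want a quick look at the raw JSON your nodes are publishing before building anything, a one-liner like this works (a sketch assuming the Mosquitto client tools are installed; the broker IP and credentials are the same placeholders used later in this guide):

mosquitto_sub -h 10.10.10.10 -u meshtastic -P SecureMQTTPassword -t 'msh/#' -v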
- Ensure MQTT, Docker, and Docker Compose are installed and functional. There are plenty of guides on how to do this. Portainer is also worth installing if you aren't familiar with running Docker from the command line.
- I am basing this on Ubuntu Linux, but I would imagine it works on other operating systems as long as you meet all the requirements.
- Create a folder with the correct permissions for your Docker user at the root of your file system, or if you just want to test, you can simply run chmod -R 777 /directoryname. In my case I use /containers/tigstack; I do this for all of my containers so that I have a central place to manage them. (A quick sketch of these commands follows the compose file below.)
- Create a docker-compose.yaml with the following contents, modifying it as needed for your use case.
version: '3.8'
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    ports:
      - "8086:8086"
    volumes:
      - influxdb_data:/var/lib/influxdb2
    environment:
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=SecureLogonPassword # Replace with a secure password.
      - DOCKER_INFLUXDB_INIT_ORG=YourOrganization # Replace with your organization name.
      - DOCKER_INFLUXDB_INIT_BUCKET=meshtastic_data # You can change this if you are using a different naming convention.
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=SecureInfluxPassword # Replace with a secure token.
  telegraf:
    image: telegraf:latest
    container_name: telegraf
    depends_on:
      - influxdb
    volumes:
      - ./:/etc/telegraf:ro
    environment:
      - MQTT_BROKER_URL=tcp://10.10.10.10:1883 # Replace this with the IP of your MQTT server.
      - MQTT_USER=meshtastic # Set your MQTT username here
      - MQTT_PASS=SecureMQTTPassword # Set your MQTT password here
      - INFLUX_TOKEN=InfluxDBAdminToken # Replace with your DOCKER_INFLUXDB_INIT_ADMIN_TOKEN
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    depends_on:
      - influxdb
    volumes:
      - grafana_data:/var/lib/grafana
volumes:
  influxdb_data:
  grafana_data:
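For reference, here is a minimal sketch of the folder setup and prerequisite checks from the list above (the wide-open chmod is for testing only, as mentioned):

sudo mkdir -p /containers/tigstack
sudo chmod -R 777 /containers/tigstack   # test setups only; tighten permissions for anything long-lived
cd /containers/tigstack
docker --version          # confirm Docker is installed
docker compose version    # confirm the Compose plugin is installed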
You will now want to create a telegraf.conf in the same directory as the docker-compose.yaml (the compose file mounts this directory into the Telegraf container at /etc/telegraf) with the following contents.
[agent]
  interval = "10s"

[[outputs.influxdb_v2]] # Double brackets required
  urls = ["http://influxdb:8086"]
  token = "${INFLUX_TOKEN}"
  organization = "Organization" # Replace with the organization you set in the docker-compose.yaml
  bucket = "meshtastic_data" # This can stay as long as you didn't change it in the docker-compose.yaml

[[inputs.mqtt_consumer]] # Double brackets required
  servers = ["${MQTT_BROKER_URL}"]
  topics = [
    "msh/Meshtastic/2/json/Main/!9e75dbcc", # The topics here are the ones you want to listen on.
    "msh/Meshtastic/2/json/Private/!9e75dbcc" # They can only be topics that are sending in JSON format.
  ]
  username = "${MQTT_USER}"
  password = "${MQTT_PASS}"
  ## Unique identifier for this Telegraf client
  client_id = "telegraf_meshtastic"
  ## Data format of the incoming messages (e.g., "json", "value", "influx")
  data_format = "json"
  json_string_fields = ["payload_text"]
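For context on where field names like payload_text and payload_latitude_i come from: a Meshtastic JSON message on the broker looks roughly like the sketch below (values are illustrative), and Telegraf's JSON parser flattens nested keys with underscores, so payload.text becomes the field payload_text. Position messages carry payload.latitude_i and payload.longitude_i instead of text.

{
  "channel": 0,
  "from": 2658588132,
  "sender": "!9e75dbcc",
  "timestamp": 1700000000,
  "type": "text",
  "payload": { "text": "Hello from the mesh" }
}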
- Inside the container folder, run docker compose up -d; this will pull and start the containers as a group.
- If everything was configured correctly you will have a functional TIG Stack; now off to configuring Grafana to connect to InfluxDB and pull the data.
- You will first need a token from InfluxDB, and that can be created by doing the following:
  - Open http://IPAddressofHost:8086 and log in with admin and the password you created in the docker-compose.yaml for the InfluxDB init password.
  - Click Load Data and then click API Tokens.
  - Click Generate API token. I use the All Access API Token because my TIG Stack is isolated.
  - Give it a name, Grafana for instance, and click Save; it will automatically create the token. Save this out to a text document, as you will need it in the next step and it will be shown only once.
- Configuring the Grafana InfluxDB connection:
  - Open http://IPAddressofHost:3000
  - You will log on to Grafana the first time as admin with the password admin; it will then prompt you to change the password.
  - Once in Grafana, click the Grafana icon in the upper left and click Connections.
  - Click Add new connection and search for InfluxDB. Click it and then click Add new data source in the upper right.
  - Entries to change:
    - Name: you can keep the default influxdb or change it to something else if you'd like.
    - Query Language: Flux
    - URL: http://containeripofinfluxdb:8086 (since Grafana and InfluxDB share the compose network, http://influxdb:8086 also works)
    - Organization: must match what is in the docker-compose.yaml
    - Token: the API token generated in InfluxDB
    - Default Bucket: must match what is in the docker-compose.yaml
  - Click Save & Test
- If everything was configured correctly you should now have a working connection to InfluxDB inside of Grafana, and Telegraf will handle pulling the data from MQTT.
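A quick way to sanity-check the stack from the command line (run from the compose directory; the container names match the compose file above):

docker compose ps                 # all three containers should show as running
docker logs --tail 50 telegraf    # look for a successful MQTT connection and no parse errors
docker logs --tail 50 influxdb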
- Now for the fun part: configuring dashboards inside of Grafana. I am not going to go into detail, but I will give you a few queries that you can use to pull JSON data into tables. If you are using geolocation, I have queries for that as well, based on the Meshtastic JSON format from an ESP32. The NRF-based modules send their data to MQTT in an encrypted format, so you have to use other means to get those to work.
This query will pull messages from all channels and display them based on the time and date range you select in the dashboard. Where this comes in handy is that it lets you see who sent a message, the message contents, and the time it was sent. You can update the r.node_id values, or add additional ones, to give each device ID a more human-readable name.
import "regexp"
from(bucket: "meshtastic_data")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r._measurement == "mqtt_consumer")
// 1. Pull message text AND the sender ID fields
|> filter(fn: (r) => r._field == "payload_text" or r._field == "from" or r._field == "payload_from")
// 2. Pivot so message and node ID are in the same row
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
// 3. Identify the Raw ID (Logic from your reference)
|> map(fn: (r) => ({
r with
node_id: if exists r.from then string(v: r.from)
else if exists r.payload_from then string(v: r.payload_from)
else "unknown"
}))
// 4. Map IDs to Names
|> map(fn: (r) => ({
r with
node_name: if r.node_id == "2658588132" then "Base"
else if r.node_id == "1685377223" then "Node1"
else if r.node_id == "9505466405" then "Node2"
else r.node_id
}))
// 5. Extract "Main" from the 'topic' tag and create 'channel'
|> map(fn: (r) => {
p = regexp.replaceAllString(r: /^msh/.*/json//, t: "", v: r.topic)
finalVal = regexp.replaceAllString(r: //![^/]*$/, t: "", v: p)
return { r with channel: finalVal }
})
// 6. Filter out rows without messages
|> filter(fn: (r) => exists r.payload_text and r.payload_text != "")
// 7. Final column selection and ordering
// Note: 'keep' maintains the order specified in the array
|> keep(columns: ["_time", "node_name", "channel", "payload_text"])
|> rename(columns: {payload_text: "message", node_name: "node_id"})
|> group()
|> yield(name: "final_table")
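If you don't know your node IDs yet, a quick sketch like the following (assuming the same bucket and measurement names as above) lists the distinct from values seen in the last week, which you can then plug into the translation step:

from(bucket: "meshtastic_data")
  |> range(start: -7d)
  |> filter(fn: (r) => r._measurement == "mqtt_consumer")
  |> filter(fn: (r) => r._field == "from")
  |> group()
  |> distinct(column: "_value")
  |> keep(columns: ["_value"])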
This query shows the last known location of any node you are tracking over the last 30 days. You can adjust that range if you'd like, but this is a more accurate way of seeing where a node is currently located on the map.
from(bucket: "meshtastic_data")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "mqtt_consumer")
  |> filter(fn: (r) => r._field == "payload_latitude_i" or r._field == "payload_longitude_i" or r._field == "from" or r._field == "payload_from")
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  // 1. Identify the raw node ID
  |> map(fn: (r) => ({
      r with
      node_id: if exists r.from then string(v: r.from)
        else if exists r.payload_from then string(v: r.payload_from)
        else "unknown"
  }))
  // 2. THE TRANSLATION STEP: Map IDs to names
  |> map(fn: (r) => ({
      r with
      node_name: if r.node_id == "2658588132" then "Base"
        else if r.node_id == "1685377223" then "Node1"
        else if r.node_id == "9505466705" then "Node2"
        else r.node_id // Fallback to ID if no match found
  }))
  |> group(columns: ["node_name"]) // Group by the name instead of ID
  |> fill(column: "payload_latitude_i", usePrevious: true)
  |> fill(column: "payload_longitude_i", usePrevious: true)
  |> top(n: 1, columns: ["_time"])
  // 3. Final math and cleanup
  |> map(fn: (r) => ({
      r with
      lat: float(v: r.payload_latitude_i) / 10000000.0,
      lon: float(v: r.payload_longitude_i) / 10000000.0
  }))
  |> group()
  |> filter(fn: (r) => r.lat != 0.0)
  |> keep(columns: ["node_name", "lat", "lon"])
This query lets you filter all GPS coordinates in InfluxDB by the time range selected in the dashboard. Where it's useful is that it lets you see where a device went, and when, based on the reported data. You can also add your node IDs to the translation step for a more human-readable display.
from(bucket: "meshtastic_data")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "mqtt_consumer")
  |> filter(fn: (r) => r._field == "payload_latitude_i" or r._field == "payload_longitude_i" or r._field == "from" or r._field == "payload_from")
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  // 1. Identify the raw node ID
  |> map(fn: (r) => ({
      r with
      node_id: if exists r.from then string(v: r.from)
        else if exists r.payload_from then string(v: r.payload_from)
        else "unknown"
  }))
  // 2. THE TRANSLATION STEP: Map IDs to names
  |> map(fn: (r) => ({
      r with
      node_name: if r.node_id == "2658599132" then "Base"
        else if r.node_id == "168588223" then "Node1"
        else if r.node_id == "950770705" then "Node2"
        else r.node_id // Fallback to ID if no match found
  }))
  |> group(columns: ["node_name"]) // Group by the name instead of ID
  |> fill(column: "payload_latitude_i", usePrevious: true)
  |> fill(column: "payload_longitude_i", usePrevious: true)
  // |> top(n: 1, columns: ["_time"])
  // 3. Final math and cleanup
  |> map(fn: (r) => ({
      r with
      lat: float(v: r.payload_latitude_i) / 10000000.0,
      lon: float(v: r.payload_longitude_i) / 10000000.0
  }))
  |> group()
  |> filter(fn: (r) => r.lat != 0.0)
  |> keep(columns: ["_time", "node_name", "lat", "lon"])
This obviously isn't a fully comprehensive solution, but more of a guide to get you up and running. You will still need to configure your Meshtastic devices and MQTT, and make sure that works first. Then there are subscriptions and user accounts that you will have to set up to get going. The ESP32 can send both JSON and the encrypted version of the MQTT payload; you can use Node-RED to do the conversion if you'd like and then send it back to MQTT, but it's a bit of a pain and fairly involved. NRF-based devices don't have WiFi and don't support JSON to MQTT, but have the benefit of being extremely low-powered. In my use case I have an ESP32 that is configured as a client, and my NRF SenseCaps are used as trackers. The ESP32 is connected via WiFi to Home Assistant and the MQTT server there. It works well and allows me to do triggers based on messages to a specific IoT channel. The bottom line is you can go as far as you want with this: keep it simple, or build it out into a more production-ready solution. LoRa is pretty cool for off-grid communications, and being able to visualize the data instead of just receiving messages on nodes opens up some neat possibilities.
