Create a simple Prometheus Exporter with Rust
- Objective
- Create a new Project
- Project settings
- Read from /proc/loadavg
- Use of Result<T, E>
- Set some clippy settings
- Use anyhow
- Create an atomic for f64
- What does the #[derive(Default)] attribute do?
- Create a storage for the metrics
- Introduce logging
- Parse the loadavg line
- Store the parsed values in our Metrics struct
- Introduce tokio
- Periodically poll the data from /proc/loadavg
- Make main use tokio
- What is #[tokio::main]?
- Serve the metrics data
- Generate the response string
- Create a Router to route and serve the HTTP endpoint
- Read from file and serve HTTP asynchronously
- Early Conclusion
- The Finishing
- clap
- Debian Package
- Cross compile
- Examples
- Footnotes
Objective
Abstract
Acquire, (process) and serve metrics.
What we are actually going to do
Parse /proc/loadavg periodically and serve its latest values.
Example content of /proc/loadavg (three load averages, currently runnable/total scheduling entities, and the most recently created PID):
0.13 0.14 0.09 1/273 3160
Create a new Project
New Cargo Project
To begin we create a new Cargo project.
$ cargo new loadavg-exporter
Project settings
Add metadata to Cargo.toml.
Cargo.toml
[package]
name = "loadavg-exporter"
version = "0.1.0"
authors = ["finga <finga@onders.org>"]
edition = "2021"
description = "Prometheus exporter example to export the load average."
readme = "README.md"
license = "GPL-3.0-or-later"
repository = "https://git.onders.org/finga/loadavg-exporter.git"
keywords = ["prometheus", "metrics"]
categories = ["command-line-utilities"]
Read from /proc/loadavg
src/main.rs: Print content of /proc/loadavg
use std::{
fs::File,
io::{BufRead, BufReader},
path::Path,
};
fn parse_loadavg<P>(filename: P)
where
P: AsRef<Path>,
{
let file = File::open(&filename).unwrap();
for line in BufReader::new(file).lines() {
println!("{}", line.unwrap());
}
}
fn main() {
parse_loadavg("/proc/loadavg")
}
Use of Result<T, E>
Set some clippy settings
Create .cargo/config
[target.'cfg(feature = "cargo-clippy")']
rustflags = [
"-Dwarnings",
"-Dclippy::pedantic",
"-Dclippy::nursery",
"-Dclippy::cargo",
]
Use anyhow
Add dependency to Cargo.toml
[dependencies]
anyhow = "1"
Use anyhow where suitable
+use anyhow::Result;
...
-fn parse_loadavg<P>(filename: P)
+fn parse_loadavg<P>(filename: P) -> Result<()>
...
- let file = File::open(&filename).unwrap();
+ let file = File::open(&filename)?;
for line in BufReader::new(file).lines() {
- println!("{}", line.unwrap());
+ println!("{}", line?);
}
+ Ok(())
}
...
Create an atomic for f64
Create an atomic f64 type
use std::sync::atomic::{AtomicU64, Ordering};
#[derive(Default)]
struct AtomicF64 {
storage: AtomicU64,
}
impl AtomicF64 {
fn store(&self, value: f64, ordering: Ordering) {
let as_u64 = value.to_bits();
self.storage.store(as_u64, ordering);
}
fn load(&self, ordering: Ordering) -> f64 {
let as_u64 = self.storage.load(ordering);
f64::from_bits(as_u64)
}
}
What does the #[derive(Default)] attribute do?
This calls a procedural derive macro to generate AtomicF64::default().
What is generated?
#[automatically_derived]
#[allow(unused_qualifications)]
impl ::core::default::Default for AtomicF64 {
#[inline]
fn default() -> AtomicF64 {
AtomicF64 {
storage: ::core::default::Default::default(),
}
}
}
Create a storage for the metrics
Create a struct to keep the metrics
#[derive(Default)]
struct Metrics {
load_1: AtomicF64,
load_5: AtomicF64,
load_15: AtomicF64,
}
Introduce logging
How to configure logging from the outside?
- The RUST_LOG environment variable is the answer
- Use RUST_LOG=trace cargo r to set the log level to trace and run the project
- Use RUST_LOG=error,path::to::module=trace to set the overall log level to error and the log level of path::to::module to trace
Add new dependencies in Cargo.toml
[dependencies]
anyhow = "1"
+log = "0.4"
+env_logger = "0.9"
Introduce logging
Generate some log messages
use anyhow::Result;
+use log::{debug, info};
use std::{
fs::File,
io::{BufRead, BufReader},
...
fn parse_loadavg<P>(filename: P) -> Result<()>
where
- P: AsRef<Path>,
+ P: AsRef<Path> + std::fmt::Display,
{
+ debug!("Read load average from {}", filename);
let file = File::open(&filename)?;
...
fn main() -> Result<()> {
+ env_logger::init();
+
+ info!("{} started", env!("CARGO_PKG_NAME"));
parse_loadavg("/proc/loadavg")?;
Parse the loadavg line
Parse the fields
-fn parse_loadavg<P>(filename: P) -> Result<()>
+fn parse_loadavg<P>(filename: P, metrics: Arc<Metrics>) -> Result<()>
...
let file = File::open(&filename)?;
- for line in BufReader::new(file).lines() {
- println!("{}", line?);
+ let mut data = String::new();
+
+ BufReader::new(file).read_to_string(&mut data)?;
+ let data = data.trim();
+ trace!("Data to parse: {}", data);
+ let fields: Vec<&str> = data.split(' ').collect();
+
+ if fields.len() != 5 {
+ bail!(
+ "Expected to read 5 space separated fields from {}",
+ filename
+ );
}
...
Store the parsed values in our Metrics struct
Before returning we store the gathered data
fn parse_loadavg<P>(filename: P, metrics: Arc<Metrics>) -> Result<()>
where
P: AsRef<Path> + std::fmt::Display,
{
...
+ trace!("Parsed fields: {:?}", fields);
+
+ metrics
+ .load_1
+ .store(fields[0].parse::<f64>()?, Ordering::Relaxed);
+ metrics
+ .load_5
+ .store(fields[1].parse::<f64>()?, Ordering::Relaxed);
+ metrics
+ .load_15
+ .store(fields[2].parse::<f64>()?, Ordering::Relaxed);
+
Ok(())
}
Introduce tokio
Add tokio
to Cargo.toml
...
+tokio = { version = "1", features = ["macros", "time", "rt-multi-thread"] }
Add use statements to bring utilities into scope
use anyhow::{bail, Result};
use log::{debug, info, trace};
use std::{
fs::File,
io::{BufReader, Read},
path::Path,
sync::{
atomic::{AtomicU64, Ordering},
Arc,
},
+ time::Duration,
};
+use tokio::time::sleep;
Periodically poll the data from /proc/loadavg
Call the loadavg parser in an infinite loop and sleep
async fn poll_loadavg<P>(filename: P, interval: u64, metrics: Arc<Metrics>) -> Result<()>
where
P: AsRef<Path> + std::fmt::Display,
{
debug!("Reading {} every {} seconds", filename, interval);
loop {
trace!("Polling loadavg from {}", filename);
parse_loadavg(&filename, metrics.clone())?;
sleep(Duration::from_secs(interval)).await;
}
}
Make main use tokio
And adapt our main() to use it
-fn main() -> Result<()> {
+#[tokio::main]
+async fn main() -> Result<()> {
...
- parse_loadavg("/proc/loadavg", Arc::clone(&metrics))?;
+ poll_loadavg("/proc/loadavg", 5, Arc::clone(&metrics)).await?;
What is #[tokio::main]?
It is syntactic sugar. The following:
#[tokio::main]
async fn main() {
println!("hello");
}
… just generates this:
fn main() {
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
println!("hello");
})
}
Serve the metrics data
Add dependencies to Cargo.toml
…
[dependencies]
...
+axum = "0.5"
...
and import further necessities in our program…
use anyhow::{bail, Result};
+use axum::{routing::get, Extension, Router, Server};
use log::{debug, info, trace};
use std::{
fs::File,
io::{BufReader, Read},
+ net::{IpAddr, Ipv4Addr, SocketAddr},
path::Path,
...
};
-use tokio::time::sleep;
+use tokio::{spawn, time::sleep, try_join};
Generate the response string
Create the served metrics data
#[allow(clippy::unused_async)]
async fn serve_metrics(Extension(metrics): Extension<Arc<Metrics>>) -> String {
format!(
r"# HELP load_1 System load (1m).
# TYPE load_1 gauge
load_1 {}
# HELP load_5 System load (5m).
# TYPE load_5 gauge
load_5 {}
# HELP load_15 System load (15m).
# TYPE load_15 gauge
load_15 {}
",
metrics.load_1.load(Ordering::Relaxed),
metrics.load_5.load(Ordering::Relaxed),
metrics.load_15.load(Ordering::Relaxed),
)
}
Create a Router to route and serve the HTTP endpoint
Basically, that's our HTTP listener
async fn listen_http(address: IpAddr, port: u16, metrics: Arc<Metrics>) -> Result<()> {
let app = Router::new()
.route("/metrics", get(serve_metrics))
.layer(Extension(metrics));
let addr = SocketAddr::from((address, port));
debug!("Listening on {}:{}", address, port);
Ok(Server::bind(&addr).serve(app.into_make_service()).await?)
}
Read from file and serve HTTP asynchronously
Poll loadavg, listen for http and return on error
#[tokio::main]
async fn main() -> Result<()> {
...
- poll_loadavg("/proc/loadavg", 5, Arc::clone(&metrics)).await?;
+
+ let (poller, listener) = try_join!(
+ spawn(poll_loadavg("/proc/loadavg", 5, Arc::clone(&metrics))),
+ spawn(listen_http(
+ IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)),
+ 8000,
+ Arc::clone(&metrics),
+ )),
+ )?;
+ poller?;
+ listener?;
Ok(())
}
Early Conclusion
This concludes the first step of creating a small Prometheus exporter.
Note
We could have built a simpler solution without tokio, by parsing /proc/loadavg on each request. Still, this approach shows a bit of tokio, and the simpler one might not be applicable once the requirements get slightly more complex.
The Finishing
Although this example already works, here are some additional steps to improve certain aspects.
Further improvements
- Parse command line arguments with clap: the Command Line Argument Parser
- Use cargo-deb to generate a Debian package
- Cross compile the project for other architectures
clap
Parse command line arguments with clap
What is clap?
- There are several ways clap can be used (derive, builder, etc.)
- Documentation: https://docs.rs/clap/latest/clap/
- Examples: https://github.com/clap-rs/clap/tree/master/examples
Add a dependency for clap
Add clap to the dependencies in Cargo.toml
axum = "0.5"
+clap = { version = "3", features = ["derive"] }
Have a struct holding all arguments
Create a struct which holds all arguments
#[derive(Parser)]
#[clap(about, version, author)]
struct Args {
/// The path to the file from which to parse the load average.
#[clap(short, long, default_value = "/proc/loadavg")]
file: String,
/// The interval in seconds at which the file is queried
#[clap(short, long, default_value = "10")]
interval: u64,
/// The IPv4 or IPv6 address where the metrics are served.
#[clap(short, long, default_value = "127.0.0.1")]
address: IpAddr,
/// The port where the metrics are served.
#[clap(short, long, default_value = "9111")]
port: u16,
/// Produce verbose output, multiple -v options increase the verbosity
#[clap(short, long, global = true, parse(from_occurrences))]
verbose: u8,
}
Make use of the new arguments
Parse the arguments and set the log level accordingly
#[tokio::main]
async fn main() -> Result<()> {
- env_logger::init();
+ let cli = Args::parse();
+
+ env_logger::Builder::from_env(env_logger::Env::default().default_filter_or(
+ match cli.verbose {
+ 0 => "error",
+ 1 => "warn",
+ 2 => "info",
+ 3 => "debug",
+ _ => "trace",
+ },
+ ))
+ .init();
Make use of the new arguments
Pass the argument values to the procedures
let (poller, listener) = try_join!(
- spawn(poll_loadavg("/proc/loadavg", 5, Arc::clone(&metrics))),
- spawn(listen_http(
- IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)),
- 8000,
- Arc::clone(&metrics),
- )),
+ spawn(poll_loadavg(cli.file, cli.interval, Arc::clone(&metrics))),
+ spawn(listen_http(cli.address, cli.port, Arc::clone(&metrics))),
)?;
poller?;
listener?;
Debian Package
Build a Debian Package with the help of cargo-deb
Create debian/service
[Unit]
Description=Load average Prometheus exporter
[Service]
Restart=always
EnvironmentFile=/etc/default/loadavg-exporter
ExecStart=/usr/bin/loadavg-exporter $ARGS
Type=simple
ProtectSystem=strict
PrivateDevices=yes
PrivateUsers=yes
RestrictNamespaces=yes
[Install]
WantedBy=multi-user.target
Build a Debian Package with the help of cargo-deb
Create debian/default
ARGS=""
# loadavg-exporter supports the following options:
# -a, --address <ADDRESS> The IPv4 or IPv6 address where the metrics are served
# [default: 127.0.0.1]
# -f, --file <FILE> The path to the file from which to parse the load average
# [default: /proc/loadavg]
# -i, --interval <INTERVAL> The interval in seconds at which the file is queried [default: 10]
# -p, --port <PORT> The port where the metrics are served [default: 9111]
# -v, --verbose Produce verbose output, multiple -v options increase the
# verbosity
Build a Debian Package with the help of cargo-deb
Add the package.metadata.deb
section to Cargo.toml
[package.metadata.deb]
extended-description = "Load average Prometheus exporter."
section = "utility"
maintainer-scripts = "debian/"
systemd-units = { enable = true }
assets = [
["target/release/loadavg-exporter", "usr/bin/", "755"],
["debian/default", "etc/default/loadavg-exporter", "644"],
]
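With the metadata in place, building the package is a single invocation; a sketch, assuming cargo-deb is installed from crates.io:

```shell
$ cargo install cargo-deb
$ cargo deb
```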
Cross compile
Cross compile for aarch64/arm64
You need to have the right cross compilers installed; on Debian these are, for example, the gcc-aarch64-linux-gnu and gcc-arm-linux-gnueabihf packages.
Add targets to .cargo/config
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
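With the linkers configured, a cross build for aarch64 could then look like this (assuming rustup manages the Rust toolchain):

```shell
$ rustup target add aarch64-unknown-linux-gnu
$ cargo build --release --target aarch64-unknown-linux-gnu
```

The resulting binary lands in target/aarch64-unknown-linux-gnu/release/.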
Examples
Example 1: MightyOhm Geiger Counter
Abstract
A sensor connected to a system to log the sensor data.
Example 1: Sensor
Reading Data from the Sensor
The Geiger Counter's serial port is connected to the UART port of the Pi, so our program needs to listen on the serial line to receive the data.
Example 1: Prometheus
Prometheus Exporter
In order to pipe the data to Prometheus, we will open a port to listen for incoming HTTP requests and respond with the sensor's data.