9 Commits

Author SHA1 Message Date
Ara Sadoyan
dd069b8532 minor fix 2025-09-17 16:51:57 +02:00
Ara Sadoyan
c78245e695 disable HC for upstream. 2025-09-16 12:54:23 +02:00
Ara Sadoyan
66b1a1c399 upstreams pathconfig fix 2025-09-15 15:22:21 +02:00
Ara Sadoyan
bba6dd8514 minor cleanup 2025-09-09 14:51:37 +02:00
Ara Sadoyan
79485ac69d minor cleanup 2025-09-04 18:16:09 +02:00
Ara Sadoyan
61c5625016 A coffee :-) 2025-09-02 14:57:47 +02:00
Ara Sadoyan
57bdc71acd A coffee :-) 2025-09-02 14:56:36 +02:00
Ara Sadoyan
9e09b829a6 README update 2025-09-01 17:02:57 +02:00
Ara Sadoyan
d3602fa578 Added Kubernetes API support for ingress controller. 2025-09-01 16:32:30 +02:00
19 changed files with 514 additions and 269 deletions

2
.gitignore vendored
View File

@@ -5,6 +5,8 @@
*.dll
*.exe
*.sh
/docs/
/docs
/target/
*.iml
.idea/

View File

@@ -48,6 +48,7 @@ lazy_static = "1.5.0"
#openssl = "0.10.73"
x509-parser = "0.17.0"
rustls-pemfile = "2.2.0"
#hickory-client = { version = "0.25.2" }
tower-http = { version = "0.6.6", features = ["fs"] }
once_cell = "1.20.2"
#moka = { version = "0.12.10", features = ["sync"] }

View File

@@ -1,120 +0,0 @@
# 📈 Aralez Prometheus Metrics Reference
This document outlines the Prometheus metrics exposed by the [Aralez](https://github.com/sadoyan/aralez) reverse proxy.
These metrics can be used for monitoring, alerting, and performance analysis.
Metrics are exposed at `http://config_address/metrics`, by default `http://127.0.0.1:3000/metrics`.
# 📊 Example Grafana dashboard during a stress test:
![Aralez](https://netangels.net/utils/dash.png)
---
## 🛠️ Prometheus Metrics
### 1. `aralez_requests_total`
- **Type**: `Counter`
- **Purpose**: Total number of requests served by Aralez.
**PromQL example:**
```promql
rate(aralez_requests_total[5m])
```
---
### 2. `aralez_errors_total`
- **Type**: `Counter`
- **Purpose**: Count of requests that resulted in an error.
**PromQL example:**
```promql
rate(aralez_errors_total[5m])
```
---
### 3. `aralez_responses_total{status="200"}`
- **Type**: `CounterVec`
- **Purpose**: Count of responses by HTTP status code.
**PromQL example:**
```promql
rate(aralez_responses_total{status=~"5.."}[5m]) > 0
```
> Useful for alerting on 5xx errors.
---
### 4. `aralez_response_latency_seconds`
- **Type**: `Histogram`
- **Purpose**: Tracks the latency of responses in seconds.
**Example bucket output:**
```prometheus
aralez_response_latency_seconds_bucket{le="0.01"} 15
aralez_response_latency_seconds_bucket{le="0.1"} 120
aralez_response_latency_seconds_bucket{le="0.25"} 245
aralez_response_latency_seconds_bucket{le="0.5"} 500
...
aralez_response_latency_seconds_count 1023
aralez_response_latency_seconds_sum 42.6
```
| Metric | Meaning |
|-------------------------|---------------------------------------------------------------|
| `bucket{le="0.1"} 120` | 120 requests were ≤ 100ms |
| `bucket{le="0.25"} 245` | 245 requests were ≤ 250ms |
| `count` | Total number of observations (i.e., total responses measured) |
| `sum` | Total time of all responses, in seconds |
### 🔍 How to interpret:
- `le` means “less than or equal to”.
- `count` is the total number of observations.
- `sum` is the total time (in seconds) of all responses.
**PromQL examples:**
🔹 **95th percentile latency**
```promql
histogram_quantile(0.95, rate(aralez_response_latency_seconds_bucket[5m]))
```
🔹 **Average latency**
```promql
rate(aralez_response_latency_seconds_sum[5m]) / rate(aralez_response_latency_seconds_count[5m])
```
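The `histogram_quantile` estimate above can be reproduced by hand. Below is a minimal sketch in plain Rust of the interpolation PromQL performs over cumulative buckets; the bucket boundaries and counts are illustrative, matching the shape of the example output, not values from a real scrape:

```rust
// Estimate a quantile from cumulative histogram buckets, the way
// PromQL's histogram_quantile() does: find the bucket containing the
// target rank and interpolate linearly inside it.
fn histogram_quantile(q: f64, buckets: &[(f64, u64)]) -> f64 {
    // buckets: (le, cumulative count), sorted ascending; last entry is +Inf
    let total = buckets.last().unwrap().1 as f64;
    let rank = q * total;
    let (mut prev_le, mut prev_count) = (0.0_f64, 0.0_f64);
    for &(le, count) in buckets {
        let count = count as f64;
        if count >= rank {
            if le.is_infinite() {
                // rank falls in the +Inf bucket: return the highest finite boundary
                return prev_le;
            }
            return prev_le + (le - prev_le) * (rank - prev_count) / (count - prev_count);
        }
        prev_le = le;
        prev_count = count;
    }
    prev_le
}

fn main() {
    // illustrative buckets, shaped like the example output above
    let buckets = [(0.01, 15u64), (0.1, 120), (0.25, 245), (0.5, 500), (f64::INFINITY, 1023)];
    println!("p95 ≈ {:.3}s", histogram_quantile(0.95, &buckets));
}
```

Note the `+Inf` case: when the target rank lands in the last bucket, the highest finite boundary is returned, which is why coarse top buckets blunt high-percentile estimates.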
---
## ✅ Notes
- Metrics are registered after the first served request.
---
## ✅ Summary of key metrics
| Metric Name | Type | What it Tells You |
|---------------------------------------|------------|---------------------------|
| `aralez_requests_total` | Counter | Total requests served |
| `aralez_errors_total` | Counter | Number of failed requests |
| `aralez_responses_total{status="200"}` | CounterVec | Response status breakdown |
| `aralez_response_latency_seconds` | Histogram | How fast responses are |
📘 *Last updated: May 2025*

View File

@@ -1,12 +1,18 @@
![Aralez](https://netangels.net/utils/aralez-white.jpg)
# Aralez (Արալեզ), Reverse proxy and service mesh built on top of Cloudflare's Pingora
---
# Aralez (Արալեզ),
### **Reverse proxy and service mesh built on top of Cloudflare's Pingora**
What does Aralez mean?
**Aralez = Արալեզ** <ins>Named after the legendary Armenian guardian spirits, winged dog-like creatures that descend upon fallen heroes to lick their wounds and resurrect them.</ins>
Built in Rust on top of **Cloudflare's Pingora engine**, **Aralez** delivers world-class performance, security, and scalability, right out of the box.
[![Buy Me A Coffee](https://img.shields.io/badge/☕-Buy%20me%20a%20coffee-orange)](https://www.buymeacoffee.com/sadoyan)
---
## 🔧 Key Features
@@ -112,12 +118,23 @@ Make the binary executable `chmod 755 ./aralez-VERSION` and run.
File names:
| File Name | Description |
|---------------------------|---------------------------------------------------------------|
| `aralez-x86_64-musl.gz` | Static Linux x86_64 binary, without any system dependency |
| `aralez-x86_64-glibc.gz` | Dynamic Linux x86_64 binary, with minimal system dependencies |
| `aralez-aarch64-musl.gz` | Static Linux ARM64 binary, without any system dependency |
| `aralez-aarch64-glibc.gz` | Dynamic Linux ARM64 binary, with minimal system dependencies |
| File Name | Description |
|---------------------------|--------------------------------------------------------------------------|
| `aralez-x86_64-musl.gz` | Static Linux x86_64 binary, without any system dependency |
| `aralez-x86_64-glibc.gz` | Dynamic Linux x86_64 binary, with minimal system dependencies |
| `aralez-aarch64-musl.gz` | Static Linux ARM64 binary, without any system dependency |
| `aralez-aarch64-glibc.gz` | Dynamic Linux ARM64 binary, with minimal system dependencies |
| `sadoyan/aralez` | Docker image on Debian 13 slim (https://hub.docker.com/r/sadoyan/aralez) |
**Via Docker**
```shell
docker run -d \
-v /local/path/to/config:/etc/aralez:ro \
-p 80:80 \
-p 443:443 \
sadoyan/aralez
```
## 💡 Note
@@ -195,6 +212,10 @@ myhost.mydomain.com:
    servers:
      - "127.0.0.4:8443"
      - "127.0.0.5:8443"
  "/.well-known/acme-challenge":
    healthcheck: false
    servers:
      - "127.0.0.1:8001"
```
**This means:**
@@ -209,6 +230,7 @@ myhost.mydomain.com:
- Requests to `myhost.mydomain.com/` will be proxied to `127.0.0.1` and `127.0.0.2`.
- Plain HTTP to `myhost.mydomain.com/foo` will get 301 redirect to configured TLS port of Aralez.
- Requests to `myhost.mydomain.com/foo` will be proxied to `127.0.0.4` and `127.0.0.5`.
- Requests to `myhost.mydomain.com/.well-known/acme-challenge` will be proxied to `127.0.0.1:8001`, but health checks are disabled for this path.
- SSL/TLS for upstreams is detected automatically; no need to set any config parameter.
- Assuming `127.0.0.5:8443` is SSL protected, the inner traffic will use TLS.
- Self-signed certificates are silently accepted.
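The per-path behavior described above amounts to longest-prefix routing: the most specific configured path wins. A hypothetical sketch of such a lookup (not Aralez's actual code; path names and pools are taken from the example config):

```rust
use std::collections::HashMap;

// Pick the upstream pool whose configured path is the longest prefix of the
// request path. Simplified: real routers usually also check segment boundaries.
fn pick_pool<'a>(routes: &'a HashMap<&'a str, Vec<&'a str>>, path: &str) -> Option<&'a Vec<&'a str>> {
    routes
        .iter()
        .filter(|(prefix, _)| path.starts_with(*prefix))
        .max_by_key(|(prefix, _)| prefix.len())
        .map(|(_, pool)| pool)
}

fn main() {
    let mut routes = HashMap::new();
    routes.insert("/", vec!["127.0.0.1:8000", "127.0.0.2:8000"]);
    routes.insert("/foo", vec!["127.0.0.4:8443", "127.0.0.5:8443"]);
    routes.insert("/.well-known/acme-challenge", vec!["127.0.0.1:8001"]);

    // "/foo/bar" matches both "/" and "/foo"; the longer prefix wins
    assert_eq!(pick_pool(&routes, "/foo/bar").unwrap()[0], "127.0.0.4:8443");
    // ACME challenges go to their dedicated, healthcheck-exempt pool
    assert_eq!(pick_pool(&routes, "/.well-known/acme-challenge/x").unwrap()[0], "127.0.0.1:8001");
}
```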

View File

@@ -14,7 +14,7 @@ config_tls_certificate: /etc/server.crt # Mandatory if config_tls_address is set
config_tls_key_file: /etc/key.pem # Mandatory if config_tls_address is set
proxy_address_http: 0.0.0.0:6193 # Proxy HTTP bind address
proxy_address_tls: 0.0.0.0:6194 # Optional, Proxy TLS bind address
proxy_certificates: /etc/yoyo # Mandatory if proxy_address_tls is set; must contain certificate and key files, strictly in the format {NAME}.crt, {NAME}.key.
proxy_certificates: /etc/certs # Mandatory if proxy_address_tls is set; must contain certificate and key files, strictly in the format {NAME}.crt, {NAME}.key.
proxy_tls_grade: a+ # Grade of TLS suite for proxy (a+, a, b, c, unsafe), matching grades of Qualys SSL Labs
upstreams_conf: /etc/upstreams.yaml # the location of upstreams file
file_server_folder: /opt/storage # Optional, local folder to serve
@@ -22,4 +22,4 @@ file_server_address: 127.0.0.1:3002 # Optional, Local address for file server. C
log_level: info # info, warn, error, debug, trace, off
hc_method: HEAD # Healthcheck method (HEAD, GET, POST are supported) UPPERCASE
hc_interval: 2 # Interval for health checks in seconds
master_key: 910517d9-f9a1-48de-8826-dbadacbd84af-cb6f830e-ab16-47ec-9d8f-0090de732774 # Master key for working with API server and JWT Secret
master_key: 910517d9-f9a1-48de-8826-dbadacbd84af-cb6f830e-ab16-47ec-9d8f-0090de732774 # Master key for working with API server and JWT Secret

View File

@@ -1,5 +1,5 @@
# This file is watched and hot-reloaded: changes are applied immediately, no need to restart or reload.
provider: "file" # consul
provider: "file" # consul, kubernetes
sticky_sessions: false
to_ssl: false
#rate_limit: 100
@@ -24,6 +24,17 @@ consul: # If the provider is consul. Otherwise, ignored.
- proxy: "proxy-frontend-dev-frontend-srv"
real: "frontend-dev-frontend-srv"
token: "8e2db809-845b-45e1-8b47-2c8356a09da0-a4370955-18c2-4d6e-a8f8-ffcc0b47be81" # Consul server access token, If Consul auth is enabled
kubernetes:
servers:
- "172.16.0.11:5443" # KUBERNETES_SERVICE_HOST : KUBERNETES_SERVICE_PORT_HTTPS
services:
- proxy: "vt-api-service-v2"
real: "vt-api-service-v2"
- proxy: "vt-search-service"
real: "vt-search-service"
- proxy: "vt-websocket-service"
real: "vt-websocket-service"
tokenpath: "/tmp/token.txt" # /var/run/secrets/kubernetes.io/serviceaccount/token
upstreams:
myip.mydomain.com:
paths:
@@ -54,9 +65,11 @@ upstreams:
headers:
- "X-Some-Thing:Yaaaaaaaaaaaaaaa"
servers:
- "192.168.1.1:8000"
- "192.168.1.10:8000"
- "127.0.0.1:8000"
- "127.0.0.2:8000"
- "127.0.0.3:8000"
- "127.0.0.4:8000"
- "127.0.0.4:8000"
    "/.well-known/acme-challenge":
      healthcheck: false
      servers:
        - "127.0.0.1:8001"

View File

@@ -1,9 +1,11 @@
pub mod auth;
pub mod consul;
pub mod discovery;
pub mod dnsclient;
mod filewatch;
pub mod healthcheck;
pub mod jwt;
pub mod kuber;
pub mod metrics;
pub mod parceyaml;
pub mod structs;

View File

@@ -1,6 +1,5 @@
use crate::utils::parceyaml::load_configuration;
use crate::utils::structs::{Configuration, InnerMap, ServiceMapping, UpstreamsDashMap};
use crate::utils::tools::{clone_dashmap_into, compare_dashmaps};
use crate::utils::tools::{clone_dashmap_into, compare_dashmaps, print_upstreams};
use dashmap::DashMap;
use futures::channel::mpsc::Sender;
use futures::SinkExt;
@@ -11,6 +10,7 @@ use reqwest::header::{HeaderMap, HeaderValue};
use serde::Deserialize;
use std::collections::HashMap;
use std::sync::atomic::AtomicUsize;
use std::sync::Arc;
use std::time::Duration;
#[derive(Debug, Deserialize)]
@@ -27,59 +27,53 @@ struct TaggedAddress {
port: u16,
}
pub async fn start(fp: String, mut toreturn: Sender<Configuration>) {
let config = load_configuration(fp.as_str(), "filepath").await;
pub async fn start(mut toreturn: Sender<Configuration>, config: Arc<Configuration>) {
let headers = DashMap::new();
match config {
Some(config) => {
if config.typecfg.to_string() != "consul" {
info!("Not running Consul discovery, requested type is: {}", config.typecfg);
return;
}
info!("Consul Discovery is enabled : {}", config.typecfg);
let consul = config.consul.clone();
let prev_upstreams = UpstreamsDashMap::new();
match consul {
Some(consul) => {
let servers = consul.servers.unwrap();
info!("Consul Servers => {:?}", servers);
let end = servers.len() - 1;
info!("Consul Discovery is enabled : {}", config.typecfg);
let consul = config.consul.clone();
let prev_upstreams = UpstreamsDashMap::new();
match consul {
Some(consul) => {
let servers = consul.servers.unwrap();
info!("Consul Servers => {:?}", servers);
let end = servers.len();
loop {
let num = rand::rng().random_range(1..end);
headers.clear();
for (k, v) in config.headers.clone() {
headers.insert(k.to_string(), v);
}
let consul_data = servers.get(num).unwrap().to_string();
let upstreams = consul_request(consul_data, consul.services.clone(), consul.token.clone());
match upstreams.await {
Some(upstreams) => {
if !compare_dashmaps(&upstreams, &prev_upstreams) {
let mut tosend: Configuration = Configuration {
upstreams: Default::default(),
headers: Default::default(),
consul: None,
typecfg: "".to_string(),
extraparams: config.extraparams.clone(),
};
clone_dashmap_into(&upstreams, &prev_upstreams);
clone_dashmap_into(&upstreams, &tosend.upstreams);
tosend.headers = headers.clone();
tosend.extraparams.authentication = config.extraparams.authentication.clone();
tosend.typecfg = config.typecfg.clone();
tosend.consul = config.consul.clone();
toreturn.send(tosend).await.unwrap();
}
}
None => {}
}
sleep(Duration::from_secs(5)).await;
}
loop {
let mut num = 0;
if end > 0 {
num = rand::rng().random_range(0..end);
}
None => {}
headers.clear();
for (k, v) in config.headers.clone() {
headers.insert(k.to_string(), v);
}
let consul_data = servers.get(num).unwrap().to_string();
let upstreams = consul_request(consul_data, consul.services.clone(), consul.token.clone());
match upstreams.await {
Some(upstreams) => {
if !compare_dashmaps(&upstreams, &prev_upstreams) {
let mut tosend: Configuration = Configuration {
upstreams: Default::default(),
headers: Default::default(),
consul: None,
kubernetes: None,
typecfg: "".to_string(),
extraparams: config.extraparams.clone(),
};
clone_dashmap_into(&upstreams, &prev_upstreams);
clone_dashmap_into(&upstreams, &tosend.upstreams);
tosend.headers = headers.clone();
tosend.extraparams.authentication = config.extraparams.authentication.clone();
tosend.typecfg = config.typecfg.clone();
tosend.consul = config.consul.clone();
print_upstreams(&tosend.upstreams);
toreturn.send(tosend).await.unwrap();
}
}
None => {}
}
sleep(Duration::from_secs(5)).await;
}
}
None => {}
@@ -134,6 +128,7 @@ async fn get_by_http(url: String, token: Option<String>) -> Option<DashMap<Strin
is_http2: false,
to_https: false,
rate_limit: None,
healthcheck: None,
};
values.push(to_add);
}

View File

@@ -1,13 +1,11 @@
use crate::utils::consul;
use crate::utils::filewatch;
use crate::utils::structs::Configuration;
use crate::utils::{consul, kuber};
use crate::web::webserver;
use async_trait::async_trait;
use futures::channel::mpsc::Sender;
use std::sync::Arc;
pub struct FromFileProvider {
pub path: String,
}
pub struct APIUpstreamProvider {
pub config_api_enabled: bool,
pub address: String,
@@ -19,15 +17,6 @@ pub struct APIUpstreamProvider {
pub file_server_folder: Option<String>,
}
pub struct ConsulProvider {
pub path: String,
}
#[async_trait]
pub trait Discovery {
async fn start(&self, tx: Sender<Configuration>);
}
#[async_trait]
impl Discovery for APIUpstreamProvider {
async fn start(&self, toreturn: Sender<Configuration>) {
@@ -35,6 +24,23 @@ impl Discovery for APIUpstreamProvider {
}
}
pub struct FromFileProvider {
pub path: String,
}
pub struct ConsulProvider {
pub config: Arc<Configuration>,
}
pub struct KubernetesProvider {
pub config: Arc<Configuration>,
}
#[async_trait]
pub trait Discovery {
async fn start(&self, tx: Sender<Configuration>);
}
#[async_trait]
impl Discovery for FromFileProvider {
async fn start(&self, tx: Sender<Configuration>) {
@@ -45,6 +51,13 @@ impl Discovery for FromFileProvider {
#[async_trait]
impl Discovery for ConsulProvider {
async fn start(&self, tx: Sender<Configuration>) {
tokio::spawn(consul::start(self.path.clone(), tx.clone()));
tokio::spawn(consul::start(tx.clone(), self.config.clone()));
}
}
#[async_trait]
impl Discovery for KubernetesProvider {
async fn start(&self, tx: Sender<Configuration>) {
tokio::spawn(kuber::start(tx.clone(), self.config.clone()));
}
}

158
src/utils/dnsclient.rs Normal file
View File

@@ -0,0 +1,158 @@
/*
use crate::utils::structs::InnerMap;
use dashmap::DashMap;
use hickory_client::client::{Client, ClientHandle};
use hickory_client::proto::rr::{DNSClass, Name, RecordType};
use hickory_client::proto::runtime::TokioRuntimeProvider;
use hickory_client::proto::udp::UdpClientStream;
use std::net::SocketAddr;
use std::str::FromStr;
use std::sync::atomic::AtomicUsize;
use std::time::Duration;
use tokio::sync::Mutex;
type DnsError = Box<dyn std::error::Error + Send + Sync + 'static>;
pub struct DnsClientPool {
clients: Vec<Mutex<DnsClient>>,
}
struct DnsClient {
client: Client,
}
pub async fn start2(mut toreturn: Sender<Configuration>, config: Arc<Configuration>) {
let k8s = config.kubernetes.clone();
match k8s {
Some(k8s) => {
let dnserver = k8s.servers.unwrap_or(vec!["127.0.0.1:53".to_string()]);
let headers = DashMap::new();
let end = dnserver.len() - 1;
let mut num = 0;
if end > 0 {
num = rand::rng().random_range(0..end);
}
let srv = dnserver.get(num).unwrap().to_string();
let pool = DnsClientPool::new(5, srv.clone()).await;
let u = UpstreamsDashMap::new();
if let Some(whitelist) = k8s.services {
loop {
let upstreams = UpstreamsDashMap::new();
for service in whitelist.iter() {
let ret = pool.query_srv(service.real.as_str(), srv.clone()).await;
match ret {
Ok(r) => {
upstreams.insert(service.proxy.clone(), r);
}
Err(e) => eprintln!("DNS query failed for {:?}: {:?}", service, e),
}
}
if !compare_dashmaps(&u, &upstreams) {
headers.clear();
for (k, v) in config.headers.clone() {
headers.insert(k.to_string(), v);
}
let mut tosend: Configuration = Configuration {
upstreams: Default::default(),
headers: Default::default(),
consul: None,
kubernetes: None,
typecfg: "".to_string(),
extraparams: config.extraparams.clone(),
};
clone_dashmap_into(&upstreams, &u);
clone_dashmap_into(&upstreams, &tosend.upstreams);
tosend.headers = headers.clone();
tosend.extraparams.authentication = config.extraparams.authentication.clone();
tosend.typecfg = config.typecfg.clone();
tosend.consul = config.consul.clone();
print_upstreams(&tosend.upstreams);
toreturn.send(tosend).await.unwrap();
}
tokio::time::sleep(Duration::from_secs(1)).await;
}
}
}
None => {}
}
}
impl DnsClient {
pub async fn new(server: String) -> Result<Self, DnsError> {
let server_details = server;
let server: SocketAddr = server_details.parse().expect("Unable to parse socket address");
let conn = UdpClientStream::builder(server, TokioRuntimeProvider::default()).build();
let (client, bg) = Client::connect(conn).await.unwrap();
tokio::spawn(bg);
Ok(Self { client })
}
pub async fn query_srv(&mut self, name: &str) -> Result<DashMap<String, (Vec<InnerMap>, AtomicUsize)>, DnsError> {
let upstreams: DashMap<String, (Vec<InnerMap>, AtomicUsize)> = DashMap::new();
let mut values = Vec::new();
match tokio::time::timeout(Duration::from_secs(5), self.client.query(Name::from_str(name)?, DNSClass::IN, RecordType::SRV)).await {
Ok(Ok(response)) => {
for answer in response.answers() {
if let hickory_client::proto::rr::RData::SRV(srv) = answer.data() {
let to_add = InnerMap {
address: srv.target().to_string(),
port: srv.port(),
is_ssl: false,
is_http2: false,
to_https: false,
rate_limit: None,
};
values.push(to_add);
}
}
upstreams.insert("/".to_string(), (values, AtomicUsize::new(0)));
Ok(upstreams)
}
Ok(Err(e)) => Err(Box::new(e)),
Err(_) => Err("DNS query timed out".into()),
}
}
}
impl DnsClientPool {
pub async fn new(pool_size: usize, server: String) -> Self {
let mut clients = Vec::with_capacity(pool_size);
for _ in 0..pool_size {
if let Ok(client) = DnsClient::new(server.clone()).await {
clients.push(Mutex::new(client));
}
}
Self { clients }
}
pub async fn query_srv(&self, name: &str, server: String) -> Result<DashMap<String, (Vec<InnerMap>, AtomicUsize)>, DnsError> {
// Try to get an available client
for client_mutex in &self.clients {
if let Ok(mut client) = client_mutex.try_lock() {
let vay = client.query_srv(name).await;
match vay {
Ok(_) => return vay,
Err(_) => {
// If query fails, drop this client and create a new one
*client = match DnsClient::new(server).await {
Ok(c) => c,
Err(e) => return Err(e),
};
// Retry with the new client
return client.query_srv(name).await;
}
}
}
}
// If all clients are busy, wait for the first one with a timeout
match tokio::time::timeout(Duration::from_secs(2), self.clients[0].lock()).await {
Ok(mut client) => client.query_srv(name).await,
Err(_) => Err("All DNS clients are busy and timeout reached".into()),
}
}
}
*/

View File

@@ -2,7 +2,7 @@ use crate::utils::parceyaml::load_configuration;
use crate::utils::structs::Configuration;
use futures::channel::mpsc::Sender;
use futures::SinkExt;
use log::{error, info, warn};
use log::error;
use notify::event::ModifyKind;
use notify::{Config, Event, EventKind, RecommendedWatcher, RecursiveMode, Watcher};
use pingora::prelude::sleep;
@@ -15,19 +15,7 @@ pub async fn start(fp: String, mut toreturn: Sender<Configuration>) {
let file_path = fp.as_str();
let parent_dir = Path::new(file_path).parent().unwrap();
let (local_tx, mut local_rx) = tokio::sync::mpsc::channel::<notify::Result<Event>>(1);
let snd = load_configuration(file_path, "filepath").await;
match snd {
Some(snd) => {
if snd.typecfg != "file" {
warn!("Disabling file watcher, requested discovery type is: {}", snd.typecfg);
return;
}
info!("Watching for changes in {:?}", parent_dir);
toreturn.send(snd).await.unwrap();
}
None => {}
}
let _watcher_handle = task::spawn_blocking({
let parent_dir = parent_dir.to_path_buf(); // Move directory path into the closure
move || {

View File

@@ -62,17 +62,32 @@ async fn build_upstreams(fullist: &UpstreamsDashMap, method: &str, client: &Clie
is_http2: is_h2,
to_https: upstream.to_https,
rate_limit: upstream.rate_limit,
healthcheck: upstream.healthcheck,
};
let resp = http_request(&link, method, "", &client).await;
if resp.0 {
if resp.1 {
scheme.is_http2 = is_h2; // could be adjusted further
if scheme.healthcheck.unwrap_or(true) {
let resp = http_request(&link, method, "", &client).await;
if resp.0 {
if resp.1 {
scheme.is_http2 = is_h2; // could be adjusted further
}
innervec.push(scheme);
} else {
warn!("Dead Upstream : {}", link);
}
innervec.push(scheme);
} else {
warn!("Dead Upstream : {}", link);
innervec.push(scheme);
}
// let resp = http_request(&link, method, "", &client).await;
// if resp.0 {
// if resp.1 {
// scheme.is_http2 = is_h2; // could be adjusted further
// }
// innervec.push(scheme);
// } else {
// warn!("Dead Upstream : {}", link);
// }
}
inner.insert(path.clone(), (innervec, AtomicUsize::new(0)));
}

134
src/utils/kuber.rs Normal file
View File

@@ -0,0 +1,134 @@
// use crate::utils::dnsclient::DnsClientPool;
use crate::utils::structs::{Configuration, InnerMap, UpstreamsDashMap};
use crate::utils::tools::{clone_dashmap_into, compare_dashmaps, print_upstreams};
use dashmap::DashMap;
use futures::channel::mpsc::Sender;
use futures::SinkExt;
use pingora::prelude::sleep;
use rand::Rng;
use reqwest::Client;
use std::env;
use std::sync::atomic::AtomicUsize;
use std::sync::Arc;
use std::time::Duration;
use tokio::fs::File;
use tokio::io::AsyncReadExt;
// static KUBERNETES_SERVICE_HOST: &str = "IP_ADDRESS";
// static TOKEN: &str = "TOKEN";
#[derive(Debug, serde::Deserialize)]
struct Endpoints {
subsets: Option<Vec<Subset>>,
}
#[derive(Debug, serde::Deserialize)]
struct Subset {
addresses: Option<Vec<Address>>,
ports: Option<Vec<Port>>,
}
#[derive(Debug, serde::Deserialize)]
struct Address {
ip: String,
}
#[derive(Debug, serde::Deserialize)]
struct Port {
// name: String,
port: u16,
}
pub async fn start(mut toreturn: Sender<Configuration>, config: Arc<Configuration>) {
let upstreams = UpstreamsDashMap::new();
let prev_upstreams = UpstreamsDashMap::new();
loop {
if let Some(kuber) = config.kubernetes.clone() {
let path = kuber.tokenpath.unwrap_or("/var/run/secrets/kubernetes.io/serviceaccount/token".to_string());
let token = read_token(path.as_str()).await;
let servers = kuber.servers.unwrap_or(vec![format!(
"{}:{}",
env::var("KUBERNETES_SERVICE_HOST").unwrap_or("0.0.0.0".to_string()),
env::var("KUBERNETES_SERVICE_PORT_HTTPS").unwrap_or("0".to_string())
)]);
let end = servers.len() - 1;
let mut num = 0;
if end > 0 {
num = rand::rng().random_range(0..end);
}
let server = servers.get(num).unwrap().to_string();
if let Some(svc) = kuber.services {
for i in svc {
let url = format!("https://{}/api/v1/namespaces/staging/endpoints/{}", server, i.real);
let list = get_by_http(&*url, &*token).await;
if let Some(list) = list {
upstreams.insert(i.proxy.clone(), list);
}
}
}
}
if !compare_dashmaps(&upstreams, &prev_upstreams) {
let tosend: Configuration = Configuration {
upstreams: Default::default(),
headers: config.headers.clone(),
consul: config.consul.clone(),
kubernetes: config.kubernetes.clone(),
typecfg: config.typecfg.clone(),
extraparams: config.extraparams.clone(),
};
clone_dashmap_into(&upstreams, &prev_upstreams);
clone_dashmap_into(&upstreams, &tosend.upstreams);
print_upstreams(&tosend.upstreams);
toreturn.send(tosend).await.unwrap();
}
sleep(Duration::from_secs(5)).await;
}
}
pub async fn get_by_http(url: &str, token: &str) -> Option<DashMap<String, (Vec<InnerMap>, AtomicUsize)>> {
let client = Client::builder().timeout(Duration::from_secs(2)).danger_accept_invalid_certs(true).build().ok()?;
let resp = client.get(url).bearer_auth(token).send().await.ok()?;
if !resp.status().is_success() {
eprintln!("Kubernetes API returned status: {}", resp.status());
return None;
}
let endpoints: Endpoints = resp.json().await.ok()?;
let upstreams: DashMap<String, (Vec<InnerMap>, AtomicUsize)> = DashMap::new();
if let Some(subsets) = endpoints.subsets {
for subset in subsets {
if let (Some(addresses), Some(ports)) = (subset.addresses, subset.ports) {
for addr in addresses {
let mut inner_vec = Vec::new();
for port in &ports {
let to_add = InnerMap {
address: addr.ip.clone(),
port: port.port.clone(),
is_ssl: false,
is_http2: false,
to_https: false,
rate_limit: None,
healthcheck: None,
};
inner_vec.push(to_add);
}
upstreams.insert("/".to_string(), (inner_vec, AtomicUsize::new(0)));
}
}
}
}
Some(upstreams)
}
async fn read_token(path: &str) -> String {
let mut file = File::open(path).await.unwrap();
let mut contents = String::new();
file.read_to_string(&mut contents).await.unwrap();
contents.trim().to_string()
}

View File

@@ -52,13 +52,12 @@ pub async fn load_configuration(d: &str, kind: &str) -> Option<Configuration> {
}
"consul" => {
toreturn.consul = parsed.consul;
if toreturn.consul.is_some() {
Some(toreturn)
} else {
None
}
toreturn.consul.is_some().then_some(toreturn)
}
"kubernetes" => {
toreturn.kubernetes = parsed.kubernetes;
toreturn.kubernetes.is_some().then_some(toreturn)
}
"kubernetes" => None,
_ => {
warn!("Unknown provider {}", parsed.provider);
None
@@ -129,8 +128,8 @@ async fn populate_file_upstreams(config: &mut Configuration, parsed: &Config) {
is_ssl: true,
is_http2: false,
to_https: path_config.to_https.unwrap_or(false),
// rate_limit: rate,
rate_limit: path_config.rate_limit,
healthcheck: path_config.healthcheck,
});
}
}

View File

@@ -21,6 +21,12 @@ pub struct Extraparams {
pub authentication: DashMap<String, Vec<String>>,
pub rate_limit: Option<isize>,
}
#[derive(Clone, Default, Debug, Serialize, Deserialize)]
pub struct Kubernetes {
pub servers: Option<Vec<String>>,
pub services: Option<Vec<ServiceMapping>>,
pub tokenpath: Option<String>,
}
#[derive(Clone, Default, Debug, Serialize, Deserialize)]
pub struct Consul {
@@ -44,6 +50,8 @@ pub struct Config {
#[serde(default)]
pub consul: Option<Consul>,
#[serde(default)]
pub kubernetes: Option<Kubernetes>,
#[serde(default)]
pub rate_limit: Option<isize>,
}
@@ -59,12 +67,14 @@ pub struct PathConfig {
pub to_https: Option<bool>,
pub headers: Option<Vec<String>>,
pub rate_limit: Option<isize>,
pub healthcheck: Option<bool>,
}
#[derive(Debug, Default)]
pub struct Configuration {
pub upstreams: UpstreamsDashMap,
pub headers: Headers,
pub consul: Option<Consul>,
pub kubernetes: Option<Kubernetes>,
pub typecfg: String,
pub extraparams: Extraparams,
}
@@ -99,6 +109,7 @@ pub struct InnerMap {
pub is_http2: bool,
pub to_https: bool,
pub rate_limit: Option<isize>,
pub healthcheck: Option<bool>,
}
#[allow(dead_code)]
@@ -111,6 +122,7 @@ impl InnerMap {
is_http2: Default::default(),
to_https: Default::default(),
rate_limit: Default::default(),
healthcheck: Default::default(),
}
}
}
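The `(Vec<InnerMap>, AtomicUsize)` pairs used for upstream pools throughout these diffs suggest round-robin selection with a shared atomic cursor. A minimal sketch of that pattern (hypothetical illustration, not Aralez's actual selection code):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Round-robin over a pool: bump the shared counter atomically and wrap it
// over the pool length, so concurrent callers spread across the backends.
fn next_index(counter: &AtomicUsize, pool_len: usize) -> usize {
    counter.fetch_add(1, Ordering::Relaxed) % pool_len
}

fn main() {
    let counter = AtomicUsize::new(0);
    let pool = ["127.0.0.1:8000", "127.0.0.2:8000", "127.0.0.3:8000"];
    for _ in 0..4 {
        // cycles 1 -> 2 -> 3 -> back to 1
        println!("{}", pool[next_index(&counter, pool.len())]);
    }
}
```

`Ordering::Relaxed` is sufficient here because only the counter value matters, not any ordering relative to other memory operations.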

View File

@@ -193,9 +193,7 @@ pub struct CipherSuite {
}
const CIPHERS: CipherSuite = CipherSuite {
high: "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305",
// aa: "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256",
medium: "ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:AES128-GCM-SHA256",
// cc: "AES128-SHA:DES-CBC3-SHA",
legacy: "ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH",
};

View File

@@ -155,6 +155,7 @@ pub fn clone_idmap_into(original: &UpstreamsDashMap, cloned: &UpstreamsIdMap) {
is_http2: false,
to_https: false,
rate_limit: None,
healthcheck: None,
};
cloned.insert(id, to_add);
cloned.insert(hh, x.to_owned());

View File

@@ -1,4 +1,5 @@
use crate::utils::discovery::{APIUpstreamProvider, ConsulProvider, Discovery, FromFileProvider};
use crate::utils::discovery::{APIUpstreamProvider, ConsulProvider, Discovery, FromFileProvider, KubernetesProvider};
use crate::utils::parceyaml::load_configuration;
use crate::utils::structs::Configuration;
use crate::utils::tools::*;
use crate::utils::*;
@@ -6,8 +7,8 @@ use crate::web::proxyhttp::LB;
use async_trait::async_trait;
use dashmap::DashMap;
use futures::channel::mpsc;
use futures::StreamExt;
use log::info;
use futures::{SinkExt, StreamExt};
use log::{error, info};
use pingora_core::server::ShutdownWatch;
use pingora_core::services::background::BackgroundService;
use std::sync::Arc;
@@ -15,23 +16,38 @@ use std::sync::Arc;
#[async_trait]
impl BackgroundService for LB {
async fn start(&self, mut shutdown: ShutdownWatch) {
info!("Starting background service");
let (tx, mut rx) = mpsc::channel::<Configuration>(0);
info!("Starting background service"); // tx: Sender<Configuration>
let (mut tx, mut rx) = mpsc::channel::<Configuration>(1);
let tx_api = tx.clone();
let config = load_configuration(self.config.upstreams_conf.clone().as_str(), "filepath")
.await
.expect("Failed to load configuration");
let tx_file = tx.clone();
let tx_consul = tx.clone();
let file_load = FromFileProvider {
path: self.config.upstreams_conf.clone(),
};
let consul_load = ConsulProvider {
path: self.config.upstreams_conf.clone(),
};
let _ = tokio::spawn(async move { file_load.start(tx_file).await });
let _ = tokio::spawn(async move { consul_load.start(tx_consul).await });
// let _ = tokio::spawn(tls::watch_certs(self.config.proxy_certificates.clone().unwrap(), self.cert_tx.clone()));
// let _ = tokio::spawn(tls::watch_certs(self.config.proxy_certificates.clone().unwrap(), self.cert_tx.clone())).await;
match config.typecfg.as_str() {
"file" => {
info!("Running File discovery, requested type is: {}", config.typecfg);
tx.send(config).await.unwrap();
let file_load = FromFileProvider {
path: self.config.upstreams_conf.clone(),
};
let _ = tokio::spawn(async move { file_load.start(tx).await });
}
"kubernetes" => {
info!("Running Kubernetes discovery, requested type is: {}", config.typecfg);
let cf = Arc::from(config);
let kuber_load = KubernetesProvider { config: cf.clone() };
let _ = tokio::spawn(async move { kuber_load.start(tx).await });
}
"consul" => {
info!("Running Consul discovery, requested type is: {}", config.typecfg);
let cf = Arc::from(config);
let consul_load = ConsulProvider { config: cf.clone() };
let _ = tokio::spawn(async move { consul_load.start(tx).await });
}
_ => {
error!("Unknown discovery type: {}", config.typecfg);
}
}
let api_load = APIUpstreamProvider {
address: self.config.config_address.clone(),
@@ -43,7 +59,7 @@ impl BackgroundService for LB {
file_server_address: self.config.file_server_address.clone(),
file_server_folder: self.config.file_server_folder.clone(),
};
let tx_api = tx.clone();
// let tx_api = tx.clone();
let _ = tokio::spawn(async move { api_load.start(tx_api).await });
let uu = self.ump_upst.clone();

View File

@@ -5,7 +5,7 @@ use crate::web::gethosts::GetHost;
use arc_swap::ArcSwap;
use async_trait::async_trait;
use axum::body::Bytes;
use log::{debug, warn};
use log::{debug, error, warn};
use once_cell::sync::Lazy;
use pingora::http::{RequestHeader, ResponseHeader, StatusCode};
use pingora::prelude::*;
@@ -67,7 +67,7 @@ impl ProxyHttp for LB {
};
let hostname = return_header_host(&session);
_ctx.hostname = hostname.clone();
_ctx.hostname = hostname;
let mut backend_id = None;
@@ -85,10 +85,11 @@ impl ProxyHttp for LB {
}
}
match hostname {
match _ctx.hostname.as_ref() {
None => return Ok(false),
Some(host) => {
let optioninnermap = self.get_host(host.as_str(), host.as_str(), backend_id);
// let optioninnermap = self.get_host(host.as_str(), host.as_str(), backend_id);
let optioninnermap = self.get_host(host.as_str(), session.req_header().uri.path(), backend_id);
match optioninnermap {
None => return Ok(false),
Some(ref innermap) => {
@@ -150,7 +151,9 @@ impl ProxyHttp for LB {
Ok(peer)
}
None => {
session.respond_error_with_body(502, Bytes::from("502 Bad Gateway\n")).await.expect("Failed to send error");
if let Err(e) = session.respond_error_with_body(502, Bytes::from("502 Bad Gateway\n")).await {
error!("Failed to send error response: {:?}", e);
}
Err(Box::new(Error {
etype: HTTPStatus(502),
esource: Upstream,
@@ -162,7 +165,10 @@ impl ProxyHttp for LB {
}
}
None => {
session.respond_error_with_body(502, Bytes::from("502 Bad Gateway\n")).await.expect("Failed to send error");
// session.respond_error_with_body(502, Bytes::from("502 Bad Gateway\n")).await.expect("Failed to send error");
if let Err(e) = session.respond_error_with_body(502, Bytes::from("502 Bad Gateway\n")).await {
error!("Failed to send error response: {:?}", e);
}
Err(Box::new(Error {
etype: HTTPStatus(502),
esource: Upstream,
@@ -174,22 +180,12 @@ impl ProxyHttp for LB {
}
}
async fn upstream_request_filter(&self, session: &mut Session, _upstream_request: &mut RequestHeader, _ctx: &mut Self::CTX) -> Result<()> {
match session.client_addr() {
Some(ip) => {
let inet = ip.as_inet();
match inet {
Some(addr) => {
_upstream_request
.insert_header("X-Forwarded-For", addr.to_string().split(':').collect::<Vec<&str>>()[0])
.unwrap();
}
None => warn!("Malformed Client IP: {:?}", inet),
}
}
None => {
warn!("Cannot detect client IP");
}
async fn upstream_request_filter(&self, _session: &mut Session, upstream_request: &mut RequestHeader, ctx: &mut Self::CTX) -> Result<()> {
if let Some(hostname) = ctx.hostname.as_ref() {
upstream_request.insert_header("Host", hostname)?;
}
if let Some(peer) = ctx.upstream_peer.as_ref() {
upstream_request.insert_header("X-Forwarded-For", peer.address.as_str())?;
}
Ok(())
}