Compare commits


5 Commits

| SHA1 | Message | CI status | Date |
| --- | --- | --- | --- |
| f37425018b | feat: use shared redis client to maximize pooling usage (#39) | Cadoles/bouncer/pipeline/head: success | 2024-09-23 15:16:30 +02:00 |
| 4801974ca3 | fix(queue): prevent metrics update cancellation on aborted http requests (#39) | Cadoles/bouncer/pipeline/head: success | 2024-09-23 10:34:24 +02:00 |
| bf15732935 | feat: disable sentry integration when no dsn is defined | Cadoles/bouncer/pipeline/head: success | 2024-09-23 10:13:04 +02:00 |
| 8317ac5b9a | feat: add configurable profiling endpoints (#38) | — | 2024-09-23 10:12:42 +02:00 |
| f35384c0f3 | feat: create profiling package + rewrite profiling tutorial | Cadoles/bouncer/pipeline/head: not built (some checks reported warnings) | 2024-06-28 17:44:51 +02:00 |
33 changed files with 715 additions and 241 deletions

.gitignore (vendored, 2 changes)
View File

@ -10,4 +10,4 @@
/out
.dockerconfigjson
*.prof
proxy.test
*.test

View File

@ -17,7 +17,8 @@ GOTEST_ARGS ?= -short
OPENWRT_DEVICE ?= 192.168.1.1
SIEGE_URLS_FILE ?= misc/siege/urls.txt
SIEGE_CONCURRENCY ?= 100
SIEGE_CONCURRENCY ?= 50
SIEGE_DURATION ?= 1M
data/bootstrap.d/dummy.yml:
mkdir -p data/bootstrap.d
@ -114,7 +115,7 @@ grafterm: tools/grafterm/bin/grafterm
siege:
$(eval TMP := $(shell mktemp))
cat $(SIEGE_URLS_FILE) | envsubst > $(TMP)
siege -i -b -c $(SIEGE_CONCURRENCY) -f $(TMP)
siege -R ./misc/siege/siege.conf -i -b -c $(SIEGE_CONCURRENCY) -t $(SIEGE_DURATION) -f $(TMP)
rm -rf $(TMP)
tools/gitea-release/bin/gitea-release.sh:
@ -131,7 +132,7 @@ tools/grafterm/bin/grafterm:
GOBIN=$(PWD)/tools/grafterm/bin go install github.com/slok/grafterm/cmd/grafterm@v0.2.0
bench:
go test -bench=. -run '^$$' -count=10 ./...
go test -bench=. -run '^$$' ./internal/bench
tools/benchstat/bin/benchstat:
mkdir -p tools/benchstat/bin
@ -150,7 +151,7 @@ run-redis:
-v $(PWD)/data/redis:/data \
-p 6379:6379 \
redis:alpine3.17 \
redis-server --save 60 1 --loglevel warning
redis-server --save 60 1 --loglevel debug
redis-shell:
docker exec -it \

View File

@ -24,6 +24,7 @@
- [(FR) - Add an OpenID Connect authentication layer](./fr/tutorials/add-oidc-authn-layer.md)
- [(FR) - Bootstrapping a Bouncer server via configuration](./fr/tutorials/bootstrapping.md)
- [(FR) - Kubernetes integration](./fr/tutorials/kubernetes-integration.md)
- [(FR) - Profiling](./fr/tutorials/profiling.md)
### Development

View File

@ -1,31 +1,68 @@
# Studying Bouncer performance
- 1. Run a proxy benchmark
- ```shell
- go test -bench=. -run '^$' -count=5 -cpuprofile bench_proxy.prof ./internal/proxy
- ```
- 2. Visualize execution times
- ```shell
- go tool pprof -web bench_proxy.prof
- ```
- 3. Compare performance from one run to the next
- ```shell
- # Run a first benchmark
- go test -bench=. -run '^$' -count=10 ./internal/proxy > bench_before.txt
- # Modify the sources
- # Run a second benchmark
- go test -bench=. -run '^$' -count=10 ./internal/proxy > bench_after.txt
- # Install the benchstat tool
- make tools/benchstat/bin/benchstat
- # Compare the reports
- tools/benchstat/bin/benchstat bench_before.txt bench_after.txt
- ```
+ ## In situ
+ Bouncer's configuration can enable endpoints that generate profile files in the [`pprof`](https://github.com/google/pprof) format. By default, the entry point is `.bouncer/profiling` (enabling and customizing this entry point is done via the [configuration](../../../misc/packaging/common/config.yml)).
+ **Example:** visualize Bouncer's memory usage
+ ```bash
+ go tool pprof -web http://<bouncer_proxy>/.bouncer/profiling/heap
+ ```
+ All available profiles are listed at `http://<bouncer_proxy>/.bouncer/profiling`.
+ ## During development
+ The `./internal/bench` package is dedicated to studying Bouncer's performance. It contains a benchmark suite that simulates proxies with various layer configurations in order to identify bottlenecks in request processing.
+ See the `./internal/bench/testdata/proxies` directory for the different test case configurations.
+ ### Running the benchmarks
+ The simplest way is to run `make bench`, which executes all benchmarks sequentially. A specific benchmark can also be run with the following command:
+ ```bash
+ go test -bench="BenchmarkProxies/$BENCH_CASE" -run='^$' ./internal/bench
+ ```
+ For example:
+ ```bash
+ # To run ./internal/bench/testdata/proxies/basic-auth.yml
+ go test -bench='BenchmarkProxies/basic-auth' -run='^$' ./internal/bench
+ ```
+ ### Visualizing execution profiles
+ You can visualize execution profiles with the following command:
+ ```shell
+ go tool pprof -web path/to/file.prof
+ ```
+ By default, running the benchmarks automatically creates profile files in the `./internal/bench/testdata/proxies` directory.
+ For example:
+ ```shell
+ go tool pprof -web ./internal/bench/testdata/proxies/basic-auth.prof
+ ```
+ ### Comparing changes
+ ```bash
+ # Run a first benchmark
+ go test -bench="BenchmarkProxies/$BENCH_CASE" -run='^$' ./internal/bench
+ # Save the profile file
+ cp ./internal/bench/testdata/proxies/$BENCH_CASE.prof ./internal/bench/testdata/proxies/$BENCH_CASE-prev.prof
+ # Modify the sources
+ # Run a second benchmark
+ go test -bench="BenchmarkProxies/$BENCH_CASE" -run='^$' ./internal/bench
+ # Visualize the difference between the two profiles
+ go tool pprof -web -base=./internal/bench/testdata/proxies/$BENCH_CASE-prev.prof ./internal/bench/testdata/proxies/$BENCH_CASE.prof
+ ```
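For timing-level (rather than profile-level) comparisons, the repository also keeps the benchstat Makefile target shown in the Makefile diff above. The following sketch is only an illustration of how it could be combined with the new `./internal/bench` suite; the `-count=10` value and the output file names are arbitrary choices, not something the tutorial prescribes.

```bash
# Record timings before the change (several runs for statistical significance)
go test -bench="BenchmarkProxies/$BENCH_CASE" -run='^$' -count=10 ./internal/bench > bench_before.txt

# ... modify the sources ...

# Record timings after the change
go test -bench="BenchmarkProxies/$BENCH_CASE" -run='^$' -count=10 ./internal/bench > bench_after.txt

# Build benchstat via the repository's Makefile target and compare the reports
make tools/benchstat/bin/benchstat
tools/benchstat/bin/benchstat bench_before.txt bench_after.txt
```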

View File

@ -27,7 +27,7 @@ func (s *Server) initRepositories(ctx context.Context) error {
}
func (s *Server) initRedisClient(ctx context.Context) error {
client := setup.NewRedisClient(ctx, s.redisConfig)
client := setup.NewSharedClient(s.redisConfig)
s.redisClient = client

View File

@ -6,6 +6,7 @@ import (
"log"
"net"
"net/http"
"net/http/pprof"
"forge.cadoles.com/cadoles/bouncer/internal/auth"
"forge.cadoles.com/cadoles/bouncer/internal/auth/jwt"
@ -155,6 +156,34 @@ func (s *Server) run(parentCtx context.Context, addrs chan net.Addr, errs chan e
})
}
if s.serverConfig.Profiling.Enabled {
profiling := s.serverConfig.Profiling
logger.Info(ctx, "enabling profiling", logger.F("endpoint", profiling.Endpoint))
router.Group(func(r chi.Router) {
if profiling.BasicAuth != nil {
logger.Info(ctx, "enabling authentication on metrics endpoint")
r.Use(middleware.BasicAuth(
"profiling",
profiling.BasicAuth.CredentialsMap(),
))
}
r.Route(string(profiling.Endpoint), func(r chi.Router) {
r.HandleFunc("/", pprof.Index)
r.HandleFunc("/cmdline", pprof.Cmdline)
r.HandleFunc("/profile", pprof.Profile)
r.HandleFunc("/symbol", pprof.Symbol)
r.HandleFunc("/trace", pprof.Trace)
r.HandleFunc("/{name}", func(w http.ResponseWriter, r *http.Request) {
name := chi.URLParam(r, "name")
pprof.Handler(name).ServeHTTP(w, r)
})
})
})
}
router.Route("/api/v1", func(r chi.Router) {
r.Group(func(r chi.Router) {
r.Use(auth.Middleware(

View File

@ -0,0 +1,300 @@
package proxy_test
import (
"context"
"io"
"log"
"net/http"
"net/http/httptest"
"net/http/httputil"
"net/url"
"os"
"path/filepath"
"runtime/pprof"
"strings"
"testing"
"time"
"forge.cadoles.com/Cadoles/go-proxy"
"forge.cadoles.com/cadoles/bouncer/internal/cache/memory"
"forge.cadoles.com/cadoles/bouncer/internal/cache/ttl"
"forge.cadoles.com/cadoles/bouncer/internal/config"
"forge.cadoles.com/cadoles/bouncer/internal/proxy/director"
"forge.cadoles.com/cadoles/bouncer/internal/store"
redisStore "forge.cadoles.com/cadoles/bouncer/internal/store/redis"
"github.com/pkg/errors"
"github.com/redis/go-redis/v9"
"gopkg.in/yaml.v3"
"forge.cadoles.com/cadoles/bouncer/internal/setup"
)
func BenchmarkProxies(b *testing.B) {
proxyFiles, err := filepath.Glob("testdata/proxies/*.yml")
if err != nil {
b.Fatalf("%+v", errors.WithStack(err))
}
for _, f := range proxyFiles {
name := strings.TrimSuffix(filepath.Base(f), filepath.Ext(f))
b.Run(name, func(b *testing.B) {
conf, err := loadProxyBenchConfig(f)
if err != nil {
b.Fatalf("%+v", errors.Wrapf(err, "could notre load bench config"))
}
proxy, backend, err := createProxy(name, conf, b.Logf)
if err != nil {
b.Fatalf("%+v", errors.Wrapf(err, "could not create proxy"))
}
defer proxy.Close()
if backend != nil {
defer backend.Close()
}
client := proxy.Client()
proxyURL, err := url.Parse(proxy.URL)
if err != nil {
b.Fatalf("%+v", errors.Wrapf(err, "could not parse proxy url"))
}
if conf.Fetch.URL.Path != "" {
proxyURL.Path = conf.Fetch.URL.Path
}
if conf.Fetch.URL.RawQuery != "" {
proxyURL.RawQuery = conf.Fetch.URL.RawQuery
}
if conf.Fetch.URL.User.Username != "" || conf.Fetch.URL.User.Password != "" {
proxyURL.User = url.UserPassword(conf.Fetch.URL.User.Username, conf.Fetch.URL.User.Password)
}
rawProxyURL := proxyURL.String()
b.Logf("fetching url '%s'", rawProxyURL)
profile, err := os.Create(filepath.Join("testdata", "proxies", name+".prof"))
if err != nil {
b.Fatalf("%+v", errors.Wrapf(err, "could not create cpu profile"))
}
defer profile.Close()
if err := pprof.StartCPUProfile(profile); err != nil {
log.Fatal(err)
}
defer pprof.StopCPUProfile()
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := client.Get(rawProxyURL)
if err != nil {
b.Errorf("could not fetch proxy url: %+v", errors.WithStack(err))
}
body, err := io.ReadAll(res.Body)
if err != nil {
b.Errorf("could not read response body: %+v", errors.WithStack(err))
}
b.Logf("%s \n %v", res.Status, string(body))
if err := res.Body.Close(); err != nil {
b.Errorf("could not close response body: %+v", errors.WithStack(err))
}
}
})
}
}
type proxyBenchConfig struct {
Proxy config.BootstrapProxyConfig `yaml:"proxy"`
Fetch fetchBenchConfig `yaml:"fetch"`
}
type fetchBenchConfig struct {
URL fetchURLBenchConfig `yaml:"url"`
}
type fetchURLBenchConfig struct {
Path string `yaml:"path"`
RawQuery string `yaml:"rawQuery"`
User fetchURLUserBenchConfig `yaml:"user"`
}
type fetchURLUserBenchConfig struct {
Username string `yaml:"username"`
Password string `yaml:"password"`
}
func loadProxyBenchConfig(filename string) (*proxyBenchConfig, error) {
data, err := os.ReadFile(filename)
if err != nil {
return nil, errors.Wrapf(err, "could not read file '%s'", filename)
}
conf := proxyBenchConfig{}
if err := yaml.Unmarshal(data, &conf); err != nil {
return nil, errors.Wrapf(err, "could not unmarshal config")
}
return &conf, nil
}
func createProxy(name string, conf *proxyBenchConfig, logf func(format string, a ...any)) (*httptest.Server, *httptest.Server, error) {
redisEndpoint := os.Getenv("BOUNCER_BENCH_REDIS_ADDR")
if redisEndpoint == "" {
redisEndpoint = "127.0.0.1:6379"
}
client := redis.NewUniversalClient(&redis.UniversalOptions{
Addrs: []string{redisEndpoint},
})
proxyRepository := redisStore.NewProxyRepository(client, redisStore.DefaultTxMaxAttempts, redisStore.DefaultTxBaseDelay)
layerRepository := redisStore.NewLayerRepository(client, redisStore.DefaultTxMaxAttempts, redisStore.DefaultTxBaseDelay)
var backend *httptest.Server
if conf.Proxy.To == "" {
backend = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
if _, err := w.Write([]byte("Hello, world.")); err != nil {
logf("[ERROR] %+v", errors.WithStack(err))
}
}))
if err := waitFor(backend.URL, 5*time.Second); err != nil {
return nil, nil, errors.WithStack(err)
}
logf("started backend '%s'", backend.URL)
}
ctx := context.Background()
proxyName := store.ProxyName("bench-" + name)
proxies, err := proxyRepository.QueryProxy(ctx)
if err != nil {
return nil, nil, errors.WithStack(err)
}
// Cleanup existing proxies
for _, p := range proxies {
if err := proxyRepository.DeleteProxy(ctx, p.Name); err != nil {
return nil, nil, errors.WithStack(err)
}
}
logf("creating proxy '%s'", proxyName)
to := string(conf.Proxy.To)
if to == "" {
to = backend.URL
}
if _, err := proxyRepository.CreateProxy(ctx, proxyName, to, conf.Proxy.From...); err != nil {
return nil, nil, errors.WithStack(err)
}
if _, err := proxyRepository.UpdateProxy(ctx, proxyName, store.WithProxyUpdateEnabled(true)); err != nil {
return nil, nil, errors.WithStack(err)
}
for layerName, layerConf := range conf.Proxy.Layers {
if err := layerRepository.DeleteLayer(ctx, proxyName, store.LayerName(layerName)); err != nil {
return nil, nil, errors.WithStack(err)
}
_, err := layerRepository.CreateLayer(ctx, proxyName, store.LayerName(layerName), store.LayerType(layerConf.Type), layerConf.Options.Data)
if err != nil {
return nil, nil, errors.WithStack(err)
}
_, err = layerRepository.UpdateLayer(ctx, proxyName, store.LayerName(layerName), store.WithLayerUpdateEnabled(bool(layerConf.Enabled)))
if err != nil {
return nil, nil, errors.WithStack(err)
}
}
layers, err := setup.GetLayers(context.Background(), config.NewDefault())
if err != nil {
return nil, nil, errors.WithStack(err)
}
director := director.New(
proxyRepository, layerRepository,
director.WithLayerCache(
ttl.NewCache(
memory.NewCache[string, []*store.Layer](),
memory.NewCache[string, time.Time](),
30*time.Second,
),
),
director.WithProxyCache(
ttl.NewCache(
memory.NewCache[string, []*store.Proxy](),
memory.NewCache[string, time.Time](),
30*time.Second,
),
),
director.WithLayers(layers...),
)
directorMiddleware := director.Middleware()
handler := proxy.New(
proxy.WithRequestTransformers(
director.RequestTransformer(),
),
proxy.WithResponseTransformers(
director.ResponseTransformer(),
),
proxy.WithReverseProxyFactory(func(ctx context.Context, target *url.URL) *httputil.ReverseProxy {
reverse := httputil.NewSingleHostReverseProxy(target)
reverse.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
logf("[ERROR] %s", errors.WithStack(err))
}
return reverse
}),
)
server := httptest.NewServer(directorMiddleware(handler))
return server, backend, nil
}
func waitFor(url string, ttl time.Duration) error {
var lastErr error
timeout := time.After(ttl)
for {
select {
case <-timeout:
if lastErr != nil {
return lastErr
}
return errors.New("wait timed out")
default:
res, err := http.Get(url)
if err != nil {
lastErr = errors.WithStack(err)
continue
}
if res.StatusCode >= 200 && res.StatusCode < 400 {
return nil
}
}
}
}

View File

@ -0,0 +1,20 @@
proxy:
from: ["*"]
to: ""
layers:
basic-auth:
type: authn-basic
enabled: true
options:
users:
- username: foo
passwordHash: "$2y$10$ShTc856wMB8PCxyr46qJRO8z06MpV4UejAVRDJ/bixhu0XTGn7Giy"
attributes:
email: foo@bar.com
rules:
- set_header("Remote-User-Attr-Email", user.attrs.email)
fetch:
url:
user:
username: foo
password: bar

View File

@ -0,0 +1,3 @@
proxy:
from: ["*"]
to: ""

View File

@ -0,0 +1,12 @@
proxy:
from: ["*"]
to: ""
layers:
host-rewriter:
type: rewriter
enabled: true
options:
rules:
request:
- set_host(request.url.host)
- set_header("X-Proxied-With", "bouncer")

View File

@ -35,12 +35,15 @@ func RunCommand() *cli.Command {
logger.SetLevel(logger.Level(conf.Logger.Level))
projectVersion := ctx.String("projectVersion")
flushSentry, err := setup.SetupSentry(ctx.Context, conf.Admin.Sentry, projectVersion)
if err != nil {
return errors.Wrap(err, "could not initialize sentry client")
}
defer flushSentry()
if conf.Admin.Sentry.DSN != "" {
flushSentry, err := setup.SetupSentry(ctx.Context, conf.Admin.Sentry, projectVersion)
if err != nil {
return errors.Wrap(err, "could not initialize sentry client")
}
defer flushSentry()
}
integrations, err := setup.SetupIntegrations(ctx.Context, conf)
if err != nil {

View File

@ -30,12 +30,15 @@ func RunCommand() *cli.Command {
logger.SetLevel(logger.Level(conf.Logger.Level))
projectVersion := ctx.String("projectVersion")
flushSentry, err := setup.SetupSentry(ctx.Context, conf.Proxy.Sentry, projectVersion)
if err != nil {
return errors.Wrap(err, "could not initialize sentry client")
}
defer flushSentry()
if conf.Proxy.Sentry.DSN != "" {
flushSentry, err := setup.SetupSentry(ctx.Context, conf.Proxy.Sentry, projectVersion)
if err != nil {
return errors.Wrap(err, "could not initialize sentry client")
}
defer flushSentry()
}
layers, err := setup.GetLayers(ctx.Context, conf)
if err != nil {

View File

@ -1,20 +1,22 @@
package config
type AdminServerConfig struct {
HTTP HTTPConfig `yaml:"http"`
CORS CORSConfig `yaml:"cors"`
Auth AuthConfig `yaml:"auth"`
Metrics MetricsConfig `yaml:"metrics"`
Sentry SentryConfig `yaml:"sentry"`
HTTP HTTPConfig `yaml:"http"`
CORS CORSConfig `yaml:"cors"`
Auth AuthConfig `yaml:"auth"`
Metrics MetricsConfig `yaml:"metrics"`
Profiling ProfilingConfig `yaml:"profiling"`
Sentry SentryConfig `yaml:"sentry"`
}
func NewDefaultAdminServerConfig() AdminServerConfig {
return AdminServerConfig{
HTTP: NewHTTPConfig("127.0.0.1", 8081),
CORS: NewDefaultCORSConfig(),
Auth: NewDefaultAuthConfig(),
Metrics: NewDefaultMetricsConfig(),
Sentry: NewDefaultSentryConfig(),
HTTP: NewHTTPConfig("127.0.0.1", 8081),
CORS: NewDefaultCORSConfig(),
Auth: NewDefaultAuthConfig(),
Metrics: NewDefaultMetricsConfig(),
Sentry: NewDefaultSentryConfig(),
Profiling: NewDefaultProfilingConfig(),
}
}

View File

@ -80,24 +80,33 @@ func loadBootstrapDir(dir string) (map[store.ProxyName]BootstrapProxyConfig, err
proxies := make(map[store.ProxyName]BootstrapProxyConfig)
for _, f := range files {
data, err := os.ReadFile(f)
proxy, err := loadBootstrappedProxyConfig(f)
if err != nil {
return nil, errors.Wrapf(err, "could not read file '%s'", f)
}
proxy := BootstrapProxyConfig{}
if err := yaml.Unmarshal(data, &proxy); err != nil {
return nil, errors.Wrapf(err, "could not unmarshal proxy")
return nil, errors.Wrapf(err, "could not load proxy bootstrap file '%s'", f)
}
name := store.ProxyName(strings.TrimSuffix(filepath.Base(f), filepath.Ext(f)))
proxies[name] = proxy
proxies[name] = *proxy
}
return proxies, nil
}
func loadBootstrappedProxyConfig(filename string) (*BootstrapProxyConfig, error) {
data, err := os.ReadFile(filename)
if err != nil {
return nil, errors.Wrapf(err, "could not read file '%s'", filename)
}
proxy := BootstrapProxyConfig{}
if err := yaml.Unmarshal(data, &proxy); err != nil {
return nil, errors.Wrapf(err, "could not unmarshal proxy")
}
return &proxy, nil
}
func overrideProxies(base map[store.ProxyName]BootstrapProxyConfig, proxies map[store.ProxyName]BootstrapProxyConfig) map[store.ProxyName]BootstrapProxyConfig {
for name, proxy := range proxies {
base[name] = proxy

View File

@ -127,7 +127,7 @@ func (im *InterpolatedMap) UnmarshalYAML(value *yaml.Node) error {
return nil
}
func (im *InterpolatedMap) interpolateRecursive(data any) (any, error) {
func (im InterpolatedMap) interpolateRecursive(data any) (any, error) {
switch typ := data.(type) {
case map[string]any:
for key, value := range typ {

View File

@ -0,0 +1,15 @@
package config
type ProfilingConfig struct {
Enabled InterpolatedBool `yaml:"enabled"`
Endpoint InterpolatedString `yaml:"endpoint"`
BasicAuth *BasicAuthConfig `yaml:"basicAuth"`
}
func NewDefaultProfilingConfig() ProfilingConfig {
return ProfilingConfig{
Enabled: true,
Endpoint: "/.bouncer/profiling",
BasicAuth: nil,
}
}

View File

@ -10,6 +10,7 @@ type ProxyServerConfig struct {
Debug InterpolatedBool `yaml:"debug"`
HTTP HTTPConfig `yaml:"http"`
Metrics MetricsConfig `yaml:"metrics"`
Profiling ProfilingConfig `yaml:"profiling"`
Transport TransportConfig `yaml:"transport"`
Dial DialConfig `yaml:"dial"`
Sentry SentryConfig `yaml:"sentry"`
@ -27,6 +28,7 @@ func NewDefaultProxyServerConfig() ProxyServerConfig {
Sentry: NewDefaultSentryConfig(),
Cache: NewDefaultCacheConfig(),
Templates: NewDefaultTemplatesConfig(),
Profiling: NewDefaultProfilingConfig(),
}
}

View File

@ -15,6 +15,8 @@ type RedisConfig struct {
WriteTimeout InterpolatedDuration `yaml:"writeTimeout"`
DialTimeout InterpolatedDuration `yaml:"dialTimeout"`
LockMaxRetries InterpolatedInt `yaml:"lockMaxRetries"`
MaxRetries InterpolatedInt `yaml:"maxRetries"`
PingInterval InterpolatedDuration `yaml:"pingInterval"`
}
func NewDefaultRedisConfig() RedisConfig {
@ -25,5 +27,7 @@ func NewDefaultRedisConfig() RedisConfig {
WriteTimeout: InterpolatedDuration(30 * time.Second),
DialTimeout: InterpolatedDuration(30 * time.Second),
LockMaxRetries: 10,
MaxRetries: 3,
PingInterval: InterpolatedDuration(30 * time.Second),
}
}

View File

@ -28,10 +28,10 @@ func NewDefaultSentryConfig() SentryConfig {
Debug: false,
FlushTimeout: NewInterpolatedDuration(2 * time.Second),
AttachStacktrace: true,
SampleRate: 1,
SampleRate: 0.2,
EnableTracing: true,
TracesSampleRate: 0.2,
ProfilesSampleRate: 1,
ProfilesSampleRate: 0.2,
IgnoreErrors: []string{},
SendDefaultPII: false,
ServerName: "",

View File

@ -65,7 +65,7 @@ func (q *Queue) Middleware(layer *store.Layer) proxy.Middleware {
return
}
defer q.updateMetrics(ctx, layer.Proxy, layer.Name, options)
defer q.updateMetrics(layer.Proxy, layer.Name, options)
cookieName := q.getCookieName(layer.Name)
@ -217,7 +217,9 @@ func (q *Queue) refreshQueue(ctx context.Context, layerName store.LayerName, kee
}
}
func (q *Queue) updateMetrics(ctx context.Context, proxyName store.ProxyName, layerName store.LayerName, options *LayerOptions) {
func (q *Queue) updateMetrics(proxyName store.ProxyName, layerName store.LayerName, options *LayerOptions) {
ctx := context.Background()
// Update queue capacity metric
metricQueueCapacity.With(
prometheus.Labels{

View File

@ -9,7 +9,7 @@ import (
)
func (s *Server) initRepositories(ctx context.Context) error {
client := setup.NewRedisClient(ctx, s.redisConfig)
client := setup.NewSharedClient(s.redisConfig)
if err := s.initProxyRepository(ctx, client); err != nil {
return errors.WithStack(err)

View File

@ -1,156 +0,0 @@
package proxy_test
import (
"context"
"io"
"net/http"
"net/http/httptest"
"net/http/httputil"
"net/url"
"os"
"testing"
"time"
"forge.cadoles.com/Cadoles/go-proxy"
"forge.cadoles.com/cadoles/bouncer/internal/cache/memory"
"forge.cadoles.com/cadoles/bouncer/internal/cache/ttl"
"forge.cadoles.com/cadoles/bouncer/internal/proxy/director"
"forge.cadoles.com/cadoles/bouncer/internal/store"
redisStore "forge.cadoles.com/cadoles/bouncer/internal/store/redis"
"github.com/pkg/errors"
"github.com/redis/go-redis/v9"
)
func BenchmarkProxy(b *testing.B) {
redisEndpoint := os.Getenv("BOUNCER_BENCH_REDIS_ADDR")
if redisEndpoint == "" {
redisEndpoint = "127.0.0.1:6379"
}
client := redis.NewUniversalClient(&redis.UniversalOptions{
Addrs: []string{redisEndpoint},
})
proxyRepository := redisStore.NewProxyRepository(client, redisStore.DefaultTxMaxAttempts, redisStore.DefaultTxBaseDelay)
layerRepository := redisStore.NewLayerRepository(client, redisStore.DefaultTxMaxAttempts, redisStore.DefaultTxBaseDelay)
backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
if _, err := w.Write([]byte("Hello, world.")); err != nil {
b.Logf("[ERROR] %+v", errors.WithStack(err))
}
}))
defer backend.Close()
if err := waitFor(backend.URL, 5*time.Second); err != nil {
b.Fatalf("[FATAL] %+v", errors.WithStack(err))
}
b.Logf("started backend '%s'", backend.URL)
ctx := context.Background()
proxyName := store.ProxyName(b.Name())
b.Logf("creating proxy '%s'", proxyName)
if err := proxyRepository.DeleteProxy(ctx, proxyName); err != nil {
b.Fatalf("[FATAL] %+v", errors.WithStack(err))
}
if _, err := proxyRepository.CreateProxy(ctx, proxyName, backend.URL, "*"); err != nil {
b.Fatalf("[FATAL] %+v", errors.WithStack(err))
}
if _, err := proxyRepository.UpdateProxy(ctx, proxyName, store.WithProxyUpdateEnabled(true)); err != nil {
b.Fatalf("[FATAL] %+v", errors.WithStack(err))
}
director := director.New(
proxyRepository, layerRepository,
director.WithLayerCache(
ttl.NewCache(
memory.NewCache[string, []*store.Layer](),
memory.NewCache[string, time.Time](),
30*time.Second,
),
),
director.WithProxyCache(
ttl.NewCache(
memory.NewCache[string, []*store.Proxy](),
memory.NewCache[string, time.Time](),
30*time.Second,
),
),
)
directorMiddleware := director.Middleware()
handler := proxy.New(
proxy.WithRequestTransformers(
director.RequestTransformer(),
),
proxy.WithResponseTransformers(
director.ResponseTransformer(),
),
proxy.WithReverseProxyFactory(func(ctx context.Context, target *url.URL) *httputil.ReverseProxy {
reverse := httputil.NewSingleHostReverseProxy(target)
reverse.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
b.Logf("[ERROR] %s", errors.WithStack(err))
}
return reverse
}),
)
server := httptest.NewServer(directorMiddleware(handler))
defer server.Close()
b.Logf("started proxy '%s'", server.URL)
httpClient := server.Client()
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := httpClient.Get(server.URL)
if err != nil {
b.Errorf("could not fetch server url: %+v", errors.WithStack(err))
}
body, err := io.ReadAll(res.Body)
if err != nil {
b.Errorf("could not read response body: %+v", errors.WithStack(err))
}
b.Logf("%s - %v", res.Status, string(body))
if err := res.Body.Close(); err != nil {
b.Errorf("could not close response body: %+v", errors.WithStack(err))
}
}
}
func waitFor(url string, ttl time.Duration) error {
var lastErr error
timeout := time.After(ttl)
for {
select {
case <-timeout:
if lastErr != nil {
return lastErr
}
return errors.New("wait timed out")
default:
res, err := http.Get(url)
if err != nil {
lastErr = errors.WithStack(err)
continue
}
if res.StatusCode >= 200 && res.StatusCode < 400 {
return nil
}
}
}
}

View File

@ -8,6 +8,7 @@ import (
"net"
"net/http"
"net/http/httputil"
"net/http/pprof"
"net/url"
"path/filepath"
"strconv"
@ -146,6 +147,34 @@ func (s *Server) run(parentCtx context.Context, addrs chan net.Addr, errs chan e
})
}
if s.serverConfig.Profiling.Enabled {
profiling := s.serverConfig.Profiling
logger.Info(ctx, "enabling profiling", logger.F("endpoint", profiling.Endpoint))
router.Group(func(r chi.Router) {
if profiling.BasicAuth != nil {
logger.Info(ctx, "enabling authentication on metrics endpoint")
r.Use(middleware.BasicAuth(
"profiling",
profiling.BasicAuth.CredentialsMap(),
))
}
r.Route(string(profiling.Endpoint), func(r chi.Router) {
r.HandleFunc("/", pprof.Index)
r.HandleFunc("/cmdline", pprof.Cmdline)
r.HandleFunc("/profile", pprof.Profile)
r.HandleFunc("/symbol", pprof.Symbol)
r.HandleFunc("/trace", pprof.Trace)
r.HandleFunc("/{name}", func(w http.ResponseWriter, r *http.Request) {
name := chi.URLParam(r, "name")
pprof.Handler(name).ServeHTTP(w, r)
})
})
})
}
router.Group(func(r chi.Router) {
r.Use(director.Middleware())

View File

@ -23,7 +23,7 @@ func init() {
}
func setupAuthnOIDCLayer(conf *config.Config) (director.Layer, error) {
rdb := newRedisClient(conf.Redis)
rdb := NewSharedClient(conf.Redis)
adapter := redis.NewStoreAdapter(rdb)
store := session.NewStore(adapter)

View File

@ -27,7 +27,7 @@ func SetupIntegrations(ctx context.Context, conf *config.Config) ([]integration.
}
func setupKubernetesIntegration(ctx context.Context, conf *config.Config) (*kubernetes.Integration, error) {
client := newRedisClient(conf.Redis)
client := NewSharedClient(conf.Redis)
locker := redis.NewLocker(client, 10)
integration := kubernetes.NewIntegration(

View File

@ -9,7 +9,7 @@ import (
)
func SetupLocker(ctx context.Context, conf *config.Config) (lock.Locker, error) {
client := newRedisClient(conf.Redis)
client := NewSharedClient(conf.Redis)
locker := redis.NewLocker(client, int(conf.Redis.LockMaxRetries))
return locker, nil
}

View File

@ -3,19 +3,11 @@ package setup
import (
"context"
"forge.cadoles.com/cadoles/bouncer/internal/config"
"forge.cadoles.com/cadoles/bouncer/internal/store"
redisStore "forge.cadoles.com/cadoles/bouncer/internal/store/redis"
"github.com/redis/go-redis/v9"
)
func NewRedisClient(ctx context.Context, conf config.RedisConfig) redis.UniversalClient {
return redis.NewUniversalClient(&redis.UniversalOptions{
Addrs: conf.Adresses,
MasterName: string(conf.Master),
})
}
func NewProxyRepository(ctx context.Context, client redis.UniversalClient) (store.ProxyRepository, error) {
return redisStore.NewProxyRepository(client, redisStore.DefaultTxMaxAttempts, redisStore.DefaultTxBaseDelay), nil
}

View File

@ -35,6 +35,6 @@ func setupQueueLayer(conf *config.Config) (director.Layer, error) {
}
func newQueueAdapter(redisConf config.RedisConfig) (queue.Adapter, error) {
rdb := newRedisClient(redisConf)
rdb := NewSharedClient(redisConf)
return queueRedis.NewAdapter(rdb, 2), nil
}

View File

@ -1,14 +1,38 @@
package setup
import (
"context"
"strings"
"sync"
"time"
"forge.cadoles.com/cadoles/bouncer/internal/config"
"github.com/pkg/errors"
"github.com/redis/go-redis/v9"
"gitlab.com/wpetit/goweb/logger"
)
var clients sync.Map
func NewSharedClient(conf config.RedisConfig) redis.UniversalClient {
key := strings.Join(conf.Adresses, "|") + "|" + string(conf.Master)
value, exists := clients.Load(key)
if exists {
if client, ok := (value).(redis.UniversalClient); ok {
return client
}
}
client := newRedisClient(conf)
clients.Store(key, client)
return client
}
func newRedisClient(conf config.RedisConfig) redis.UniversalClient {
return redis.NewUniversalClient(&redis.UniversalOptions{
client := redis.NewUniversalClient(&redis.UniversalOptions{
Addrs: conf.Adresses,
MasterName: string(conf.Master),
ReadTimeout: time.Duration(conf.ReadTimeout),
@ -16,5 +40,33 @@ func newRedisClient(conf config.RedisConfig) redis.UniversalClient {
DialTimeout: time.Duration(conf.DialTimeout),
RouteByLatency: true,
ContextTimeoutEnabled: true,
MaxRetries: int(conf.MaxRetries),
})
go func() {
ctx := logger.With(context.Background(),
logger.F("adresses", conf.Adresses),
logger.F("master", conf.Master),
)
timer := time.NewTicker(time.Duration(conf.PingInterval))
defer timer.Stop()
connected := true
for range timer.C {
if _, err := client.Ping(ctx).Result(); err != nil {
logger.Error(ctx, "redis disconnected", logger.E(errors.WithStack(err)))
connected = false
continue
}
if !connected {
logger.Info(ctx, "redis reconnected")
connected = true
}
}
}()
return client
}

View File

@ -91,12 +91,14 @@ func WithRetry(ctx context.Context, client redis.UniversalClient, key string, fn
continue
}
return err
return errors.WithStack(err)
}
return nil
}
logger.Error(ctx, "redis error", logger.E(errors.WithStack(err)))
return errors.WithStack(redis.TxFailedErr)
}

View File

@ -36,7 +36,7 @@ func (r *LayerRepository) CreateLayer(ctx context.Context, proxyName store.Proxy
CreatedAt: wrap(now),
UpdatedAt: wrap(now),
Options: wrap(store.LayerOptions{}),
Options: wrap(options),
}
txf := func(tx *redis.Tx) error {
@ -60,6 +60,11 @@ func (r *LayerRepository) CreateLayer(ctx context.Context, proxyName store.Proxy
return errors.WithStack(err)
}
layerItem, err = r.txGetLayerItem(ctx, tx, proxyName, layerName)
if err != nil {
return errors.WithStack(err)
}
return nil
}
@ -70,16 +75,16 @@ func (r *LayerRepository) CreateLayer(ctx context.Context, proxyName store.Proxy
return &store.Layer{
LayerHeader: store.LayerHeader{
Name: layerName,
Proxy: proxyName,
Type: layerType,
Weight: 0,
Enabled: false,
Name: store.LayerName(layerItem.Name),
Proxy: store.ProxyName(layerItem.Proxy),
Type: store.LayerType(layerItem.Type),
Weight: layerItem.Weight,
Enabled: layerItem.Enabled,
},
CreatedAt: now,
UpdatedAt: now,
Options: store.LayerOptions{},
CreatedAt: layerItem.CreatedAt.Value(),
UpdatedAt: layerItem.UpdatedAt.Value(),
Options: layerItem.Options.Value(),
}, nil
}

View File

@ -49,6 +49,19 @@ admin:
# Set to null to disable authentication
basicAuth: null
# Profiling
profiling:
# Enable or disable the profiling endpoints
enabled: true
# Route on which the profiling endpoints are published
endpoint: /.bouncer/profiling
# "Basic auth" authentication on the profiling
# endpoints
# Set to null to disable authentication
basicAuth:
credentials:
prof: iling
# Sentry integration configuration
# See https://pkg.go.dev/github.com/getsentry/sentry-go?utm_source=godoc#ClientOptions
sentry:
@ -59,7 +72,7 @@ admin:
sampleRate: 1
enableTracing: true
tracesSampleRate: 0.2
profilesSampleRate: 1
profilesSampleRate: 0.2
ignoreErrors: []
sendDefaultPII: false
serverName: ""
@ -99,6 +112,19 @@ proxy:
credentials:
prom: etheus
# Profiling
profiling:
# Enable or disable the profiling endpoints
enabled: true
# Route on which the profiling endpoints are published
endpoint: /.bouncer/profiling
# "Basic auth" authentication on the profiling
# endpoints
# Set to null to disable authentication
basicAuth:
credentials:
prof: iling
# Local caching configuration for
# proxy/layer data
cache:
@ -164,6 +190,8 @@ redis:
writeTimeout: 30s
readTimeout: 30s
dialTimeout: 30s
maxRetries: 3
pingInterval: 30s
# Logging configuration
logger:
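With profiling enabled as above, the endpoints can be queried remotely. The sketch below is only an illustration: it assumes the default `/.bouncer/profiling` endpoint, the example credentials (`prof` / `iling`) from the snippet above, and `<bouncer_proxy>` as a placeholder for the instance's address; the `seconds` query parameter is the standard `net/http/pprof` CPU-profile duration.

```bash
# Grab a 30-second CPU profile through the basic-auth-protected endpoint
curl -u prof:iling -o cpu.prof "http://<bouncer_proxy>/.bouncer/profiling/profile?seconds=30"

# Inspect the downloaded profile locally
go tool pprof -web cpu.prof
```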

misc/siege/siege.conf (new file, 79 lines)
View File

@ -0,0 +1,79 @@
# Updated by Siege %_VERSION%, %_DATE%
# Copyright 2000-2016 by %_AUTHOR%
#
# Siege configuration file -- edit as necessary
# For more information about configuring and running this program,
# visit: http://www.joedog.org/
#
#
# Verbose mode: With this feature enabled, siege will print the
# result of each transaction to stdout. (Enabled by default)
#
# ex: verbose = true|false
#
verbose = true
#
# Color mode: This option works in conjunction with verbose mode.
# It tells siege whether or not it should display its output in
# color-coded output. (Enabled by default)
#
# ex: color = on | off
#
color = on
#
# Cache revalidation. Siege supports cache revalidation for both ETag
# and Last-modified headers. If a copy is still fresh, the server
# responds with 304. While this feature is required for HTTP/1.1, it
# may not be welcomed for load testing. We allow you to breach the
# protocol and turn off caching
#
# HTTP/1.1 200 0.00 secs: 2326 bytes ==> /apache_pb.gif
# HTTP/1.1 304 0.00 secs: 0 bytes ==> /apache_pb.gif
# HTTP/1.1 304 0.00 secs: 0 bytes ==> /apache_pb.gif
#
# Siege also supports Cache-control headers. Consider this server
# response: Cache-Control: max-age=3
# That tells siege to cache the file for three seconds. While it
# doesn't actually store the file, it will logically grab it from
# its cache. In verbose output, it designates a cached resource
# with (c):
#
# HTTP/1.1 200 0.25 secs: 159 bytes ==> GET /expires/
# HTTP/1.1 200 1.48 secs: 498419 bytes ==> GET /expires/Otter_in_Southwold.jpg
# HTTP/1.1 200 0.24 secs: 159 bytes ==> GET /expires/
# HTTP/1.1 200(C) 0.00 secs: 0 bytes ==> GET /expires/Otter_in_Southwold.jpg
#
# NOTE: with color enabled, cached URLs appear in green
#
# ex: cache = true
#
cache = true
#
# Cookie support: by default siege accepts cookies. This directive is
# available to disable that support. Set cookies to 'false' to refuse
# cookies. Set it to 'true' to accept them. The default value is true.
# If you want to maintain state with the server, then this MUST be set
# to true.
#
# ex: cookies = false
#
cookies = true
#
# Failures: This is the number of total connection failures allowed
# before siege aborts. Connection failures (timeouts, socket failures,
# etc.) are combined with 400 and 500 level errors in the final stats,
# but those errors do not count against the abort total. If you set
# this total to 10, then siege will abort after ten socket timeouts,
# but it will NOT abort after ten 404s. This is designed to prevent a
# run-away mess on an unattended siege.
#
# The default value is 1024
#
# ex: failures = 50
#
failures = -1
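The `siege` Makefile target shown earlier loads this file via `-R ./misc/siege/siege.conf`. A possible invocation, assuming the usual `make` command-line variable overrides for the `SIEGE_CONCURRENCY` and `SIEGE_DURATION` defaults (50 users, 1 minute):

```bash
# Run the load test with the Makefile defaults
# (50 concurrent users for 1 minute, URLs taken from misc/siege/urls.txt)
make siege

# Heavier run: override concurrency and duration on the command line
make siege SIEGE_CONCURRENCY=100 SIEGE_DURATION=5M
```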