
Added support for cross-host docker-compose builds (#7754)

The new files in the build_external directory allow cross-host builds (i.e. building Netdata for a Debian system on a Fedora host). The build and execution are wrapped inside Docker containers with an appropriate user-land, based on the package-builder base images. These containers can be orchestrated into more complex testing environments (e.g. master-slave streaming setups or the ACLK). Rebuilding the Netdata agent inside the containers is an incremental build step (to shorten the development cycle) rather than a clean install.
This commit is contained in:
Andrew Moss
2020-02-21 15:06:31 +01:00
committed by GitHub
parent 19ef3f9379
commit 81e39357af
17 changed files with 7903 additions and 0 deletions

build_external/README.md Normal file

@@ -0,0 +1,120 @@
# External build-system

This wraps the build-system in Docker so that the host system and the target system are
decoupled. This allows:
* Cross-compilation (e.g. Linux development from macOS)
* Cross-distro (e.g. using CentOS user-land while developing on Debian)
* Multi-host scenarios (e.g. master/slave configurations)
* Bleeding-edge scenarios (e.g. using the ACLK (**currently for internal-use only**))

The advantage of these scenarios is that they allow **reproducible** builds and testing
for developers. This is the first iteration of the build-system, released so that the team
can use it and get used to it.

For configurations that involve building and running the agent alone, we still use
`docker-compose` for consistency with the more complex configurations. The more complex
configurations allow the agent to be run in conjunction with parts of the cloud
infrastructure (these parts of the code are not public), with external brokers
(such as VerneMQ for MQTT), or with other external tools (such as a TSDB, so that the
agent can export metrics). Note: no external TSDB scenarios are available in this first
iteration; they will be added in subsequent iterations.

This differs from the packaging dockerfiles as it is designed to be used for local
development. The main difference is that these files are designed to support incremental
compilation in the following way:

1. The initial build should be performed using `bin/clean-install.sh` to create a docker
   image with the agent built from the source tree and installed into standard system paths
   using `netdata-installer.sh`. In addition to the steps performed by the standard packaging
   builds, a manifest is created to allow subsequent builds to be made incrementally using
   `make` inside the container. Any libraries that are required for 'bleeding-edge' development
   are added on top of the standard install.
2. When the `bin/make-install.sh` script is used, the docker container is updated with
   a sanitized version of the current build-tree. The manifest is used to line up the
   state of the incoming docker cache with `make`'s view of the file-system. This means
   the `make install` inside the container only rebuilds changes since the last time the
   disk image was created.

The exact improvement in the compile-cycle depends on the speed of the network connection
used to pull the netdata dependencies, but it should shrink the time considerably. For
example, on a MacBook Pro the initial install takes about 1min plus network delay [note:
something bad happens with the standard installer at the end of the container build as
it tries to kill the running agent - this is very slow] and the incremental step takes
only 15s. On a Debian host with a fast network this shrinks from 1m30s to 13s.
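
The mechanism behind the incremental step, sketched in bash below, paraphrases the
Dockerfiles later in this commit (the paths are the ones used inside the containers):

```bash
# clean-install: after the full build, record every file in the tree.
find . -type f >/opt/netdata/manifest

# make-install: the fresh host tree is copied to /opt/netdata/latest and then
# replayed onto the built tree with `cp -p`, preserving host modification
# times. Files edited on the host are newer than the objects produced by the
# clean build, so the subsequent `make install` recompiles only those.
while read -r f; do cp -p "$f" "../source/$f"; done </opt/netdata/manifest
make install
```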

## Examples

1. Simple cross-compilation / cross-distro builds.
```bash
build_external/bin/clean-install.sh arch current
docker run -it --rm arch_current_dev
echo >>daemon/main.c # Simulate edit by touching file
build_external/bin/make-install.sh arch current
docker run -it --rm arch_current_dev
```
Currently there is no detection of when the installer needs to be rerun (really this is
when the `autoreconf` / `configure` step must be rerun). Netdata was not written with
multi-stage builds in mind, and we need to work out how to do this in the future. For now
it is up to you to know when you need to rerun the clean build step; a heuristic sketch
follows the next example.
```bash
build_external/bin/clean-install.sh arch current
build_external/bin/clean-install.sh ubuntu 19.10
docker run -it --rm arch_current_dev
echo >>daemon/main.c # Simulate edit by touching file
build_external/bin/make-install.sh arch current
docker run -it --rm arch_current_dev
echo >>daemon/daemon.c # Simulate second edit step
build_external/bin/make-install.sh arch current # Observe a single file is rebuilt
build_external/bin/make-install.sh ubuntu 19.10 # Observe both files are rebuilt
```
The state of the build in the two containers is independent.
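
Until the build-system can detect this itself, a rough heuristic (not part of this
commit) is to rerun the clean step whenever the autotools inputs change:

```bash
# Hypothetical helper: if the build system itself changed, redo the clean
# install (which reruns autoreconf/configure); otherwise build incrementally.
if git diff --name-only HEAD | grep -qE '(configure\.ac|Makefile\.am)$'; then
    build_external/bin/clean-install.sh arch current
else
    build_external/bin/make-install.sh arch current
fi
```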

2. Single agent config in docker-compose

This functions the same as the previous example, but is wrapped in docker-compose to
allow injection into more complex test configurations.
```bash
Distro=debian Version=10 docker-compose -f projects/only-agent/docker-compose.yml up
```
Note: it is possible to run multiple copies of the agent using the `--scale` option for
`docker-compose up`.
```bash
Distro=debian Version=10 docker-compose -f projects/only-agent/docker-compose.yml up --scale agent=3
```
3. A simple master-slave scenario
```bash
# Need to call clean-install on the configs used in the master/slave containers
docker-compose -f master-slaves/docker-compose.yml up --scale agent_slave1=2
```
Note: this is not production-ready yet, but it is left in so that we can see how it behaves
and improve it. Currently it has the following problems:
* Only the base configuration in the compose file works without scaling.
* The containers are hard-coded in the compose file.
* There is no way to separate the agent configurations, so running multiple agent slaves
with the same GUID kills the master, which exits with a fatal condition.
4. The ACLK
This is for internal use only as it requires access to a private repo. Clone the vernemq-docker
repo and follow the instructions within to build an image called `vernemq`.
```bash
build_external/bin/clean-install.sh arch current # Only needed first time
docker-compose -f build_external/projects/aclk-testing/vernemq-compose.yml -f build_external/projects/aclk-testing/agent-compose.yml up --build
```
Notes:
* We are currently limited to Arch because of restrictions on libwebsockets.
* There is not yet a good way to configure the target agent container from the docker-compose command line.
* Several other containers should be in this compose (a paho client, tshark, etc.).

build_external/bin/clean-install.sh Normal file

@@ -0,0 +1,49 @@
#!/usr/bin/env bash
DISTRO="$1"
VERSION="$2"
BuildBase="$(cd "$(dirname "$0")" && cd .. && pwd)"
# This is temporary - not all of the package-builder images from the helper-images repo
# are available on Docker Hub. When everything falls under the "happy case" below this
# can be deleted in a future iteration. This is written in a weird way for portability:
# we can't rely on bash 4.0+ being available for case fall-through with ;&
if cat <<HAPPY_CASE | grep "$DISTRO-$VERSION"
opensuse-15.1
fedora-29
debian-9
debian-8
fedora-30
opensuse-15.0
ubuntu-19.04
centos-7
fedora-31
ubuntu-16.04
ubuntu-18.04
ubuntu-19.10
debian-10
centos-8
ubuntu-1804
ubuntu-1904
ubuntu-1910
debian-stretch
debian-jessie
debian-buster
HAPPY_CASE
then
    docker build -f "$BuildBase/clean-install.Dockerfile" -t "${DISTRO}_${VERSION}_dev" "$BuildBase/.." \
        --build-arg "DISTRO=$DISTRO" --build-arg "VERSION=$VERSION" --build-arg ACLK=yes \
        --build-arg EXTRA_CFLAGS="-DACLK_SSL_ALLOW_SELF_SIGNED"
else
    case "$DISTRO-$VERSION" in
        arch-current)
            docker build -f "$BuildBase/clean-install-arch.Dockerfile" -t "${DISTRO}_${VERSION}_dev" "$BuildBase/.." \
                --build-arg "DISTRO=$DISTRO" --build-arg "VERSION=$VERSION" --build-arg ACLK=yes \
                --build-arg EXTRA_CFLAGS="-DACLK_SSL_ALLOW_SELF_SIGNED"
            ;;
        *)
            echo "Unknown $DISTRO-$VERSION"
            ;;
    esac
fi
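
For reference, the README examples above exercise both branches of this script:

```bash
build_external/bin/clean-install.sh ubuntu 19.10 # happy case: package-builder base image
build_external/bin/clean-install.sh arch current # special case: clean-install-arch.Dockerfile
```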

build_external/bin/make-install.sh Normal file

@@ -0,0 +1,8 @@
#!/usr/bin/env bash
DISTRO="$1"
VERSION="$2"
BuildBase="$(cd "$(dirname "$0")" && cd .. && pwd)"
docker build -f "$BuildBase/make-install.Dockerfile" -t "${DISTRO}_${VERSION}_dev:latest" "$BuildBase/.." \
--build-arg "DISTRO=${DISTRO}" --build-arg "VERSION=${VERSION}"

build_external/clean-install-arch.Dockerfile Normal file

@@ -0,0 +1,52 @@
FROM archlinux/base:latest
# There is some redundancy between this file and the archlinux Dockerfile in the helper-images
# repo, and also with clean-install.Dockerfile. Once the helper image is available on Docker
# Hub this file can be deleted.
RUN pacman -Sy
RUN pacman --noconfirm --needed -S autoconf \
autoconf-archive \
autogen \
automake \
gcc \
make \
git \
libuv \
lz4 \
netcat \
openssl \
pkgconfig \
python \
libvirt \
libwebsockets
ARG ACLK=no
ARG EXTRA_CFLAGS
COPY . /opt/netdata/source
WORKDIR /opt/netdata/source
RUN git config --global user.email "root@container"
RUN git config --global user.name "Fake root"
# RUN make distclean -> not safe if tree state changed on host since last config
# Kill everything that is not in .gitignore preserving any fresh changes, i.e. untracked changes will be
# deleted but local changes to tracked files will be preserved.
RUN if git status --porcelain | grep '^[MADRC]'; then \
        git stash && git clean -dxf && (git stash apply || true); \
    else \
        git clean -dxf; \
    fi
# Not everybody is updating distclean properly - fix.
RUN find . -name '*.Po' -exec rm \{\} \;
RUN rm -rf autom4te.cache
RUN rm -rf .git/
RUN find . -type f >/opt/netdata/manifest
RUN CFLAGS="-O1 -ggdb -Wall -Wextra -Wformat-signedness -fstack-protector-all -DNETDATA_INTERNAL_CHECKS=1 \
    -D_FORTIFY_SOURCE=2 -DNETDATA_VERIFY_LOCKS=1 ${EXTRA_CFLAGS}" ./netdata-installer.sh --disable-lto
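# Route the agent's logs to the container's stdout/stderr so that `docker logs` shows them.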
RUN ln -sf /dev/stdout /var/log/netdata/access.log
RUN ln -sf /dev/stdout /var/log/netdata/debug.log
RUN ln -sf /dev/stderr /var/log/netdata/error.log

build_external/clean-install.Dockerfile Normal file

@@ -0,0 +1,36 @@
ARG DISTRO=arch
ARG VERSION=current
FROM netdata/package-builders:${DISTRO}${VERSION}
ARG ACLK=no
ARG EXTRA_CFLAGS
COPY . /opt/netdata/source
WORKDIR /opt/netdata/source
RUN git config --global user.email "root@container"
RUN git config --global user.name "Fake root"
# RUN make distclean -> not safe if tree state changed on host since last config
# Kill everything that is not in .gitignore preserving any fresh changes, i.e. untracked changes will be
# deleted but local changes to tracked files will be preserved.
RUN if git status --porcelain | grep '^[MADRC]'; then \
        git stash && git clean -dxf && (git stash apply || true); \
    else \
        git clean -dxf; \
    fi
# Not everybody is updating distclean properly - fix.
RUN find . -name '*.Po' -exec rm \{\} \;
RUN rm -rf autom4te.cache
RUN rm -rf .git/
RUN find . -type f >/opt/netdata/manifest
RUN CFLAGS="-O1 -ggdb -Wall -Wextra -Wformat-signedness -fstack-protector-all -DNETDATA_INTERNAL_CHECKS=1 \
    -D_FORTIFY_SOURCE=2 -DNETDATA_VERIFY_LOCKS=1 ${EXTRA_CFLAGS}" ./netdata-installer.sh --disable-lto
RUN ln -sf /dev/stdout /var/log/netdata/access.log
RUN ln -sf /dev/stdout /var/log/netdata/debug.log
RUN ln -sf /dev/stderr /var/log/netdata/error.log
CMD ["/usr/sbin/netdata","-D"]

build_external/make-install.Dockerfile Normal file

@@ -0,0 +1,11 @@
ARG DISTRO=arch
ARG VERSION=current
FROM ${DISTRO}_${VERSION}_dev:latest
# Sanitize new source tree by removing config-time state
COPY . /opt/netdata/latest
WORKDIR /opt/netdata/latest
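# Replay the sanitized tree onto the built tree; `cp -p` preserves modification times so
# the `make install` below rebuilds only files changed since the manifest was recorded.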
RUN while read -r f; do cp -p "$f" "../source/$f"; done <../manifest
WORKDIR /opt/netdata/source
RUN make install

build_external/projects/aclk-testing/agent-compose.yml Normal file

@@ -0,0 +1,19 @@
version: '3.3'
services:
  agent_master:
    build:
      context: ../../..
      dockerfile: build_external/make-install.Dockerfile
      args:
        - DISTRO=arch
        - VERSION=current
    image: arch_current_dev:latest
    command: >
      sh -c "echo -n 00000000-0000-0000-0000-000000000000 >/etc/netdata/claim.d/claimed_id &&
             echo '[agent_cloud_link]' >>/etc/netdata/netdata.conf &&
             echo ' agent cloud link hostname = vernemq' >>/etc/netdata/netdata.conf &&
             echo ' agent cloud link port = 9002' >>/etc/netdata/netdata.conf &&
             /usr/sbin/netdata -D"
    ports:
      - 20000:19999

(File diff suppressed because it is too large.)

build_external/projects/aclk-testing/configureVerneMQ.Dockerfile Normal file

@@ -0,0 +1,8 @@
FROM vernemq:latest
EXPOSE 9002
COPY vernemq.conf /vernemq/etc/vernemq.conf
WORKDIR /vernemq
#RUN openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 365 -out server.crt -subj '/CN=vernemq'
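# The certificate generated below is self-signed; the agent images in this commit are
# built with -DACLK_SSL_ALLOW_SELF_SIGNED, so they will accept it.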
RUN openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -out server.crt -keyout server.key -subj "/C=SK/ST=XX/L=XX/O=NetdataIsAwesome/OU=NotSupremeLeader/CN=netdata.cloud"
RUN chown vernemq:vernemq /vernemq/server.key /vernemq/server.crt
RUN cat /vernemq/etc/vernemq.conf


@@ -0,0 +1,51 @@
version: '3.3'
services:
  agent_master:
    build:
      context: ../../..
      dockerfile: build_external/make_install_fedora_30.Dockerfile
      args:
        ACLK: "yes"
    image: fedora_30_dev
    command: >
      sh -c "echo 00000000-0000-0000-0000-000000000000 >/etc/netdata/claim.d/claimed_id &&
             echo '[agent_cloud_link]' >>/etc/netdata/netdata.conf &&
             echo ' agent cloud link hostname = 172.22.0.100' >>/etc/netdata/netdata.conf &&
             echo ' agent cloud link port = 8080' >>/etc/netdata/netdata.conf &&
             /usr/sbin/netdata -D"
    ports:
      - 20000:19999
    networks:
      service1_net:
        ipv4_address: 172.22.0.99
    #volumes:
    #  - ./master_stream.conf:/etc/netdata/stream.conf:ro
  # agent_slave1:
  #   image: debian_buster_dev
  #   command: /usr/sbin/netdata -D
  #   ports:
  #     - 20001:19999
  #   volumes:
  #     - ./slave_stream.conf:/etc/netdata/stream.conf:ro
  # agent_slave2:
  #   image: ubuntu_2004_dev
  #   command: /usr/sbin/netdata -D
  #   ports:
  #     - 20002:19999
  #   volumes:
  #     - ./slave_stream.conf:/etc/netdata/stream.conf:ro
  vernemq:
    build:
      dockerfile: configureVerneMQ.Dockerfile
      context: .
    networks:
      service1_net:
        ipv4_address: 172.22.0.100
networks:
  service1_net:
    ipam:
      driver: default
      config:
        - subnet: 172.22.0.0/16

build_external/projects/aclk-testing/vernemq-compose.yml Normal file

@@ -0,0 +1,7 @@
version: '3.3'
services:
  vernemq:
    build:
      dockerfile: configureVerneMQ.Dockerfile
      context: .

build_external/projects/aclk-testing/vernemq.conf Normal file

@@ -0,0 +1,68 @@
allow_anonymous = on
allow_register_during_netsplit = off
allow_publish_during_netsplit = off
allow_subscribe_during_netsplit = off
allow_unsubscribe_during_netsplit = off
allow_multiple_sessions = off
coordinate_registrations = on
max_inflight_messages = 20
max_online_messages = 1000
max_offline_messages = 1000
max_message_size = 0
upgrade_outgoing_qos = off
listener.max_connections = 10000
listener.nr_of_acceptors = 10
listener.tcp.default = 127.0.0.1:1883
listener.wss.keyfile = /vernemq/server.key
listener.wss.certfile = /vernemq/server.crt
listener.wss.default = 0.0.0.0:9002
listener.vmq.clustering = 0.0.0.0:44053
listener.http.default = 127.0.0.1:8888
listener.ssl.require_certificate = off
listener.wss.require_certificate = off
systree_enabled = on
systree_interval = 20000
graphite_enabled = off
graphite_host = localhost
graphite_port = 2003
graphite_interval = 20000
shared_subscription_policy = prefer_local
plugins.vmq_passwd = on
plugins.vmq_acl = on
plugins.vmq_diversity = off
plugins.vmq_webhooks = off
plugins.vmq_bridge = off
metadata_plugin = vmq_plumtree
vmq_acl.acl_file = ./etc/vmq.acl
vmq_acl.acl_reload_interval = 10
vmq_passwd.password_file = ./etc/vmq.passwd
vmq_passwd.password_reload_interval = 10
vmq_diversity.script_dir = ./share/lua
vmq_diversity.auth_postgres.enabled = off
vmq_diversity.postgres.ssl = off
vmq_diversity.postgres.password_hash_method = crypt
vmq_diversity.auth_cockroachdb.enabled = off
vmq_diversity.cockroachdb.ssl = on
vmq_diversity.cockroachdb.password_hash_method = bcrypt
vmq_diversity.auth_mysql.enabled = off
vmq_diversity.mysql.password_hash_method = password
vmq_diversity.auth_mongodb.enabled = off
vmq_diversity.mongodb.ssl = off
vmq_diversity.auth_redis.enabled = off
vmq_bcrypt.pool_size = 1
log.console = file
log.console.level = info
log.console.file = ./log/console.log
log.error.file = ./log/error.log
log.syslog = off
log.crash = on
log.crash.file = ./log/crash.log
log.crash.maximum_message_size = 64KB
log.crash.size = 10MB
log.crash.rotation = $D0
log.crash.rotation.keep = 5
nodename = VerneMQ@127.0.0.1
distributed_cookie = vmq
erlang.async_threads = 64
erlang.max_ports = 262144
leveldb.maximum_memory.percent = 70

build_external/projects/master-slaves/docker-compose.yml Normal file

@@ -0,0 +1,23 @@
version: '3.3'
services:
  agent_master:
    image: debian_10_dev
    command: /usr/sbin/netdata -D
    ports:
      - 20000:19999
    volumes:
      - ./master_stream.conf:/etc/netdata/stream.conf:ro
  agent_slave1:
    image: debian_9_dev
    command: /usr/sbin/netdata -D
    #ports: Removed to allow scaling
    #  - 20001:19999
    volumes:
      - ./slave_stream.conf:/etc/netdata/stream.conf:ro
  agent_slave2:
    image: fedora_30_dev
    command: /usr/sbin/netdata -D
    #ports: Removed to allow scaling
    #  - 20002:19999
    volumes:
      - ./slave_stream.conf:/etc/netdata/stream.conf:ro
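
A plausible end-to-end sequence for this scenario (assuming this compose file lives at
`build_external/projects/master-slaves/docker-compose.yml`, as the relative volume mounts
suggest, and that the `*_dev` images have been built first, per the note in the README):

```bash
# Build the three images referenced above (tags follow ${DISTRO}_${VERSION}_dev).
build_external/bin/clean-install.sh debian 10
build_external/bin/clean-install.sh debian 9
build_external/bin/clean-install.sh fedora 30
# Bring the scenario up, optionally scaling one of the slaves.
docker-compose -f build_external/projects/master-slaves/docker-compose.yml up --scale agent_slave1=2
```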

build_external/projects/master-slaves/master_stream.conf Normal file

@@ -0,0 +1,140 @@
# netdata configuration for aggregating data from remote hosts
#
# API keys authorize a pair of sending-receiving netdata servers.
# Once their communication is authorized, they can exchange metrics for any
# number of hosts.
#
# You can generate API keys, with the linux command: uuidgen
# -----------------------------------------------------------------------------
# 2. ON MASTER NETDATA - THE ONE THAT WILL BE RECEIVING METRICS
# You can have one API key per slave,
# or the same API key for all slaves.
#
# netdata searches for options in this order:
#
# a) master netdata settings (netdata.conf)
# b) [stream] section (above)
# c) [API_KEY] section (below, settings for the API key)
# d) [MACHINE_GUID] section (below, settings for each machine)
#
# You can combine the above (the more specific setting will be used).
# API key authentication
# If the key is not listed here, it will not be able to push metrics.
# [API_KEY] is [YOUR-API-KEY], e.g. [11111111-2222-3333-4444-555555555555]
[00000000-0000-0000-0000-000000000000]
# Default settings for this API key
# You can disable the API key, by setting this to: no
# The default (for unknown API keys) is: no
enabled = yes
# A list of simple patterns matching the IPs of the servers that
# will be pushing metrics using this API key.
# The metrics are received via the API port, so the same IPs
# should also be matched at netdata.conf [web].allow connections from
allow from = *
# The default history in entries, for all hosts using this API key.
# You can also set it per host below.
# If you don't set it here, the history size of the central netdata
# will be used.
default history = 3600
# The default memory mode to be used for all hosts using this API key.
# You can also set it per host below.
# If you don't set it here, the memory mode of netdata.conf will be used.
# Valid modes:
# save save on exit, load on start
# map like swap (continuously syncing to disks - you need SSD)
# ram keep it in RAM, don't touch the disk
# none no database at all (use this on headless proxies)
# dbengine like a traditional database
# default memory mode = ram
# Shall we enable health monitoring for the hosts using this API key?
# 3 possible values:
# yes enable alarms
# no do not enable alarms
# auto enable alarms, only when the sending netdata is connected. For ephemeral slaves or slave system restarts,
# ensure that the netdata process on the slave is gracefully stopped, to prevent invalid last_collected alarms
# You can also set it per host, below.
# The default is taken from [health].enabled of netdata.conf
health enabled by default = auto
# postpone alarms for a short period after the sender is connected
default postpone alarms on connect seconds = 60
# allow or deny multiple connections for the same host?
# If you are sure all your netdata have their own machine GUID,
# set this to 'allow', since it allows faster reconnects.
# When set to 'deny', new connections for a host will not be
# accepted until an existing connection is cleared.
multiple connections = allow
# need to route metrics differently? set these.
# the defaults are the ones at the [stream] section (above)
#default proxy enabled = yes | no
#default proxy destination = IP:PORT IP:PORT ...
#default proxy api key = API_KEY
#default proxy send charts matching = *
# -----------------------------------------------------------------------------
# 3. PER SENDING HOST SETTINGS, ON MASTER NETDATA
# THIS IS OPTIONAL - YOU DON'T HAVE TO CONFIGURE IT
# This section exists to give you finer control of the master settings for each
# slave host, when the same API key is used by many netdata slaves / proxies.
#
# Each netdata has a unique GUID - generated the first time netdata starts.
# You can find it at /var/lib/netdata/registry/netdata.public.unique.id
# (at the slave).
#
# The host sending data will have one. If the host is not ephemeral,
# you can give settings for each sending host here.
[MACHINE_GUID]
# enable this host: yes | no
# When disabled, the master will not receive metrics for this host.
# THIS IS NOT A SECURITY MECHANISM - AN ATTACKER CAN SET ANY OTHER GUID.
# Use only the API key for security.
enabled = no
# A list of simple patterns matching the IPs of the servers that
# will be pushing metrics using this MACHINE GUID.
# The metrics are received via the API port, so the same IPs
# should also be matched at netdata.conf [web].allow connections from
# and at stream.conf [API_KEY].allow from
allow from = *
# The number of entries in the database
history = 3600
# The memory mode of the database: save | map | ram | none | dbengine
memory mode = save
# Health / alarms control: yes | no | auto
health enabled = yes
# postpone alarms when the sender connects
postpone alarms on connect seconds = 60
# allow or deny multiple connections for the same host?
# If you are sure all your netdata have their own machine GUID,
# set this to 'allow', since it allows faster reconnects.
# When set to 'deny', new connections for a host will not be
# accepted until an existing connection is cleared.
multiple connections = allow
# need to route metrics differently?
# the defaults are the ones at the [API KEY] section
#proxy enabled = yes | no
#proxy destination = IP:PORT IP:PORT ...
#proxy api key = API_KEY
#proxy send charts matching = *

build_external/projects/master-slaves/slave_stream.conf Normal file

@@ -0,0 +1,144 @@
# netdata configuration for aggregating data from remote hosts
#
# API keys authorize a pair of sending-receiving netdata servers.
# Once their communication is authorized, they can exchange metrics for any
# number of hosts.
#
# You can generate API keys, with the linux command: uuidgen
# -----------------------------------------------------------------------------
# 1. ON SLAVE NETDATA - THE ONE THAT WILL BE SENDING METRICS
[stream]
# Enable this on slaves, to have them send metrics.
enabled = yes
# Where is the receiving netdata?
# A space separated list of:
#
# [PROTOCOL:]HOST[%INTERFACE][:PORT][:SSL]
#
# If many are given, the first available will get the metrics.
#
# PROTOCOL = tcp, udp, or unix (only tcp and unix are supported by masters)
# HOST = an IPv4, IPv6 IP, or a hostname, or a unix domain socket path.
# IPv6 IPs should be given with brackets [ip:address]
# INTERFACE = the network interface to use (only for IPv6)
# PORT = the port number or service name (/etc/services)
# SSL = when this word appears at the end of the destination string,
# netdata will encrypt the connection with the master.
#
# This communication is not HTTP (it cannot be proxied by web proxies).
destination = tcp:agent_master
# Skip Certificate verification?
#
# The netdata slave is configured to reject invalid SSL/TLS certificates,
# so certificates that are self-signed or expired will stop the streaming.
# In case the server certificate is not valid, you can enable the use of
# 'bad' certificates by setting the next option to 'yes'.
#
#ssl skip certificate verification = yes
# Certificate Authority Path
#
# OpenSSL has a default directory where the known certificates are stored.
# If necessary, you can change it using the variable
# "CApath"
#
#CApath = /etc/ssl/certs/
# Certificate Authority file
#
# When the Netdata master has a certificate that is not recognized as valid,
# you can add that certificate to the list of known certificates in CApath
# and pass it to Netdata as an argument.
#
#CAfile = /etc/ssl/certs/cert.pem
# The API_KEY to use (as the sender)
api key = 00000000-0000-0000-0000-000000000000
# The timeout to connect and send metrics
timeout seconds = 60
# If the destination line above does not specify a port, use this
default port = 19999
# filter the charts to be streamed
# netdata SIMPLE PATTERN:
# - space separated list of patterns (use \ to include spaces in patterns)
# - use * as wildcard, any number of times within each pattern
# - prefix a pattern with ! for a negative match (i.e. do not stream the charts it matches)
# - the order of patterns is important (left to right)
# To send all except a few, use: !this !that * (i.e. append a wildcard pattern)
send charts matching = *
# The buffer to use for sending metrics.
# 1MB is good for 10-20 seconds of data, so increase this if you expect latencies.
# The buffer is flushed on reconnects (this will not prevent gaps at the charts).
buffer size bytes = 1048576
# If the connection fails, or it disconnects,
# retry after that many seconds.
reconnect delay seconds = 5
# Sync the clock of the charts for that many iterations, when starting.
initial clock resync iterations = 60
# -----------------------------------------------------------------------------
# 3. PER SENDING HOST SETTINGS, ON MASTER NETDATA
# THIS IS OPTIONAL - YOU DON'T HAVE TO CONFIGURE IT
# This section exists to give you finer control of the master settings for each
# slave host, when the same API key is used by many netdata slaves / proxies.
#
# Each netdata has a unique GUID - generated the first time netdata starts.
# You can find it at /var/lib/netdata/registry/netdata.public.unique.id
# (at the slave).
#
# The host sending data will have one. If the host is not ephemeral,
# you can give settings for each sending host here.
[MACHINE_GUID]
# enable this host: yes | no
# When disabled, the master will not receive metrics for this host.
# THIS IS NOT A SECURITY MECHANISM - AN ATTACKER CAN SET ANY OTHER GUID.
# Use only the API key for security.
enabled = no
# A list of simple patterns matching the IPs of the servers that
# will be pushing metrics using this MACHINE GUID.
# The metrics are received via the API port, so the same IPs
# should also be matched at netdata.conf [web].allow connections from
# and at stream.conf [API_KEY].allow from
allow from = *
# The number of entries in the database
history = 3600
# The memory mode of the database: save | map | ram | none | dbengine
memory mode = save
# Health / alarms control: yes | no | auto
health enabled = yes
# postpone alarms when the sender connects
postpone alarms on connect seconds = 60
# allow or deny multiple connections for the same host?
# If you are sure all your netdata have their own machine GUID,
# set this to 'allow', since it allows faster reconnects.
# When set to 'deny', new connections for a host will not be
# accepted until an existing connection is cleared.
multiple connections = allow
# need to route metrics differently?
# the defaults are the ones at the [API KEY] section
#proxy enabled = yes | no
#proxy destination = IP:PORT IP:PORT ...
#proxy api key = API_KEY
#proxy send charts matching = *

build_external/projects/only-agent/docker-compose.yml Normal file

@@ -0,0 +1,8 @@
version: '3'
services:
  agent:
    image: ${Distro}_${Version}_dev
    command: /usr/sbin/netdata -D
    ports:
      - 80
      - 443
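
As in the README above, the `Distro` and `Version` environment variables select which
`${Distro}_${Version}_dev` image to run, for example:

```bash
Distro=debian Version=10 docker-compose -f projects/only-agent/docker-compose.yml up --scale agent=3
```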

netdata-installer.sh

@@ -96,6 +96,7 @@ renice 19 $$ > /dev/null 2> /dev/null
 LDFLAGS="${LDFLAGS}"
 CFLAGS="${CFLAGS--O2}"
 [ "z${CFLAGS}" = "z-O3" ] && CFLAGS="-O2"
+ACLK="${ACLK}"
 # keep a log of this command
 # shellcheck disable=SC2129