mirror of https://github.com/Icinga/icingabeat.git
synced 2025-04-08 17:15:05 +02:00

Compare commits

No commits in common. "master" and "v1.1.1" have entirely different histories.
46  .github/ISSUE_TEMPLATE.md (vendored)
@@ -1,46 +0,0 @@
<!--- Provide a general summary of the issue in the Title above -->

<!-- Formatting tips:

GitHub supports Markdown: https://guides.github.com/features/mastering-markdown/

Multi-line code blocks either with three back ticks, or four space indent.

```
# Defines the Icinga API endpoint
host: "localhost"
```

-->

## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->

## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->

## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->

## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include configuration, logs, etc. to reproduce, if relevant -->
1.
2.
3.
4.

## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->

## Your Environment
<!--- Include as many relevant details about the environment you experienced the problem in -->
* Beat version (`icingabeat -version`):
* Icinga 2 version (`icinga2 --version`):
* Elasticsearch version (`curl -XGET 'localhost:9200'`):
* Logstash version, if used (`bin/logstash -V`):
* Kibana version, if used (`curl -XGET http://localhost:5601/status -I`):
* Operating System and version:
.travis.yml
@@ -6,10 +6,12 @@ services:
language: go

go:
  - "1.13"
  - 1.7
  - 1.8

os:
  - linux
  - osx

env:
  matrix:
@@ -31,8 +33,6 @@ before_install:
  - export TRAVIS_BUILD_DIR=$HOME/gopath/src/github.com/icinga/icingabeat/
  - cd $HOME/gopath/src/github.com/icinga/icingabeat/
  - go get github.com/Masterminds/glide
  - go get github.com/magefile/mage/mg
  - go get github.com/sirupsen/logrus

install:
  - true
2  AUTHORS
@@ -1,4 +1,2 @@
Alexander <35256191+lx183@users.noreply.github.com>
Blerim Sheqa <blerim.sheqa@icinga.com>
Dorian Lenzner <Dorian.Lenzner@telekom.de>
Michael Friedrich <michael.friedrich@icinga.com>
56  CHANGELOG.md
@@ -1,61 +1,5 @@
# Icingabeat CHANGELOG

## v7.17.4

### Features
* Update libbeat to version 7.17.4

### Breaking Changes
* Dashboards must now be imported manually using Kibana

## v7.14.2

### Features
* Update libbeat to version 7.14.2

## v7.5.2

### Features
* Update libbeat to version 7.5.2

## v7.4.2

### Features
* Update libbeat to version 7.4.2

## v6.5.4

### Features
* Update libbeat to version 6.5.4
* Move all field names to the 'icinga' namespace

### Bugs
* Prevent usage of reserved keywords

## v6.3.3

### Features
* Update libbeat to version 6.3.3

### Bugs
* Remove `zones` key from statuspoller. This key may become too big to process.
* Catch 404 return codes
* Update dashboard directory schema so `icingabeat setup` works out of the box

## v6.1.1

### Features
* Update libbeat to version 6.1.1
* Add setting to add custom CAs for SSL verification

### Bugs
* Close connections properly on failed authentication

## v5.6.6

### Features
* Update libbeat to version 5.6.6

## v1.1.1

### Bugs
55  Makefile
@@ -1,22 +1,53 @@
BEAT_NAME=icingabeat
BEAT_DIR=github.com/icinga/icingabeat
BEAT_DESCRIPTION=Icingabeat ships Icinga 2 events and states to Elasticsearch or Logstash.
BEAT_VENDOR=Icinga
BEAT_DOC_URL=https://github.com/Icinga/icingabeat
BEAT_PATH=github.com/icinga/icingabeat
BEAT_DOC_URL?=https://icinga.com/docs/icingabeat
BEAT_GOPATH=$(firstword $(subst :, ,${GOPATH}))
SYSTEM_TESTS=false
TEST_ENVIRONMENT=false
ES_BEATS_IMPORT_PATH=github.com/elastic/beats/v7
ES_BEATS?=$(shell go list -m -f '{{.Dir}}' ${ES_BEATS_IMPORT_PATH})
LIBBEAT_MAKEFILE=$(ES_BEATS)/libbeat/scripts/Makefile
GOPACKAGES=$(shell go list ${BEAT_PATH}/... | grep -v /tools)
GOBUILD_FLAGS=-i -ldflags "-X ${ES_BEATS_IMPORT_PATH}/libbeat/version.buildTime=$(NOW) -X ${ES_BEATS_IMPORT_PATH}/libbeat/version.commit=$(COMMIT_ID)"
MAGE_IMPORT_PATH=github.com/magefile/mage
NO_COLLECT=true
CHECK_HEADERS_DISABLED=true
ES_BEATS?=./vendor/github.com/elastic/beats
GOPACKAGES=$(shell glide novendor)
PREFIX?=.

#TARGETS="linux/amd64 linux/386 windows/amd64 windows/386 darwin/amd64"
#PACKAGES=${BEATNAME}/deb ${BEATNAME}/rpm ${BEATNAME}/darwin ${BEATNAME}/win ${BEATNAME}/bin
#SNAPSHOT=false

# Path to the libbeat Makefile
-include $(LIBBEAT_MAKEFILE)
-include $(ES_BEATS)/libbeat/scripts/Makefile

# Initial beat setup
.PHONY: setup
setup: copy-vendor
	make update

# Copy beats into vendor directory
.PHONY: copy-vendor
copy-vendor:
	mage vendorUpdate
	mkdir -p vendor/github.com/elastic/
	cp -R ${GOPATH}/src/github.com/elastic/beats vendor/github.com/elastic/
	rm -rf vendor/github.com/elastic/beats/.git

.PHONY: git-init
git-init:
	git init
	git add README.md CONTRIBUTING.md
	git commit -m "Initial commit"
	git add LICENSE
	git commit -m "Add the LICENSE"
	git add .gitignore
	git commit -m "Add git settings"
	git add .
	git reset -- .travis.yml
	git commit -m "Add icingabeat"
	git add .travis.yml
	git commit -m "Add Travis CI"

# This is called by the beats packer before building starts
.PHONY: before-build
before-build:

# Collects all dependencies and then calls update
.PHONY: collect
collect:

@@ -1 +0,0 @@
This file only exists to make `make package` happy.
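For orientation, the mage-assisted vendoring flow above reduces to two commands; a minimal sketch, assuming a GOPATH-based checkout as prepared in the Travis configuration earlier:

```shell
make setup   # copy-vendor (mage vendorUpdate) followed by `make update`
make         # build the icingabeat binary via the included libbeat Makefile
```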
230  README.md
@@ -12,12 +12,155 @@ Icingabeat is an [Elastic Beat](https://www.elastic.co/products/beats) that
fetches data from the Icinga 2 API and sends it either directly to Elasticsearch
or Logstash.




## Documentation
Please read the documentation on
[icinga.com/docs/icingabeat/latest](https://www.icinga.com/docs/icingabeat/latest/)
for more information.

## Eventstream

Receive an eventstream from the Icinga 2 API. This stream includes events such
as check results, notifications, downtimes, acknowledgements and many other
types. See below for details. There is no polling involved when receiving an
eventstream.

Example use cases:
* Correlate monitoring data with logging information
* Monitor notifications sent by Icinga 2

## Statuspoller

The Icinga 2 API exports a lot of information about the state of the Icinga
daemon. Icingabeat can poll this information periodically.

Example use cases:
* Visualize metrics of the Icinga 2 daemon
* Get insights into how each enabled Icinga 2 feature performs
* Information about zones and endpoints

### Installation
Download and install your package from the
[latest release](https://github.com/Icinga/icingabeat/releases/latest) page.

### Configuration
Configuration of Icingabeat is split into 3 sections: General, Eventstream and
Statuspoller. On Linux, configuration files are located at `/etc/icingabeat`.

#### General
Settings in this section apply to both modes.

##### `host`
Hostname of the Icinga 2 API. This can be either an IP address or a domain.
Defaults to `localhost`.

##### `port`
Defaults to `5665`.

##### `user`
Username to be used for the API connection. You need to create this user in
your Icinga 2 configuration. Make sure that it has sufficient permissions to
read the data you want to collect.

Here is an example of an API user in your Icinga 2 configuration:

```c++
object ApiUser "icinga" {
  password = "icinga"
  permissions = ["events/*", "status/query"]
}
```

Learn more about the `ApiUser` and its permissions in the
[Icinga 2 docs](https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/icinga2-api#icinga2-api-permissions).
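Before starting the beat, it can help to verify the user and its permissions directly against the API; a quick check (assuming the defaults above and a self-signed certificate, hence `-k`):

```shell
curl -k -s -u icinga:icinga 'https://localhost:5665/v1/status' | head
```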
##### `password`
Defaults to `icinga`.

##### `skip_ssl_verify`
Skip verification of SSL certificates. Defaults to `false`.

#### Eventstream
Settings in this section apply to the eventstream mode. To disable the
eventstream completely, comment out the section.

##### `types`
You can select which particular Icinga 2 events you want to receive and store.
The following types are available; you must set at least one:

* `CheckResult`
* `StateChange`
* `Notification`
* `AcknowledgementSet`
* `AcknowledgementCleared`
* `CommentAdded`
* `CommentRemoved`
* `DowntimeAdded`
* `DowntimeRemoved`
* `DowntimeStarted`
* `DowntimeTriggered`

To set multiple types, do the following:

```yaml
types:
  - CheckResult
  - StateChange
  - Notification
  - AcknowledgementSet
  - AcknowledgementCleared
```

##### `filter`
In addition to selecting the types of events, you can filter them by
attributes using the prefix `event.`. By default no filter is set.

###### Examples

Only check results with the exit code 2:
```yaml
filter: "event.check_result.exit_status==2"
```

Only check results of services that match `mysql*`:
```yaml
filter: 'match("mysql*", event.service)'
```
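Filters can also combine several attributes with the usual Icinga 2 DSL operators; a hypothetical example (not part of the shipped defaults) that keeps only critical results of `mysql*` services:

```yaml
filter: 'event.check_result.exit_status==2 && match("mysql*", event.service)'
```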
##### `retry_interval`
On a connection loss, Icingabeat will try to reconnect to the API periodically.
This setting defines the interval for connection retries. Defaults to `10s`.

#### Statuspoller
Settings in this section apply to the statuspoller mode.

##### `interval`
Interval at which the status API is called. Set to `0` to disable polling.
Defaults to `60s`.
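Once the statuspoller is running, a quick way to confirm that status documents arrive is to count them in Elasticsearch; a sketch assuming the default `icingabeat-*` index naming:

```shell
curl -s 'localhost:9200/icingabeat-*/_count?q=type:icingabeat.status*'
```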
### Run

On Linux systems, use one of the following commands to start Icingabeat:

* `service icingabeat start` or
* `systemctl start icingabeat` or
* `/etc/init.d/icingabeat start`

## Dashboards
We have dashboards prepared that you can use when getting started with
Icingabeat. They are meant to give you some inspiration before you start
exploring the data by yourself. Download the dashboards from the
[latest release](https://github.com/Icinga/icingabeat/releases/latest) page.

**Note:** The dashboards require Kibana >= 5.2.0.

The tool to import dashboards with is already included in the Icingabeat
package:

```
unzip icingabeat-dashboards-1.1.0.zip -d /tmp
/usr/share/icingabeat/scripts/import_dashboards -dir /tmp/icingabeat-dashboards-1.1.0 -es http://127.0.0.1:9200
```
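On newer builds (libbeat >= 6, see the changelog above) the bundled dashboards can instead be loaded with the standard Beats setup command; a sketch:

```shell
icingabeat setup --dashboards
```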
## Fields
Icingabeat exports a bunch of fields. Have a look at the
[fields.asciidoc](docs/fields.asciidoc) for details.

## Development

@@ -25,7 +168,7 @@ for more information

#### Requirements

* [Golang](https://golang.org/dl/) 1.16
* [Golang](https://golang.org/dl/) 1.7

#### Clone

@@ -47,7 +190,7 @@ To build the binary for Icingabeat run the command below. This will generate a
binary in the same directory with the name icingabeat.

```shell
mage build
make
```

#### Run
@@ -57,6 +200,49 @@ To run Icingabeat with debugging output enabled, run:

./icingabeat -c icingabeat.yml -e -d "*"
```

#### Test

To test Icingabeat, run the following command:

```shell
make testsuite
```

Alternatively:
```shell
make unit-tests
make system-tests
make integration-tests
make coverage-report
```

The test coverage is reported in the folder `./build/coverage/`.

#### Update

Each beat has a template for the mapping in Elasticsearch and a documentation
for the fields which is automatically generated based on `etc/fields.yml`.
To generate etc/icingabeat.template.json and etc/icingabeat.asciidoc:

```shell
make update
```

#### Cleanup

To clean Icingabeat source code, run the following commands:

```shell
make fmt
make simplify
```

To clean up the build directory and generated artifacts, run:

```shell
make clean
```

### Packaging

The beats framework provides tools to cross-compile and package your beat for
@@ -65,9 +251,35 @@ vendoring as described above. To build packages of your beat, run the following
command:

```shell
export PLATFORMS="linux/amd64 linux/386"
mage package
make package
```

This will fetch and create all images required for the build process. The whole
process can take several minutes to finish.

To disable snapshot packages or build specific packages, set the following
environment variables:

```shell
export SNAPSHOT=false
export TARGETS="\"linux/amd64 linux/386\""
export PACKAGES=icingabeat/deb
make package
```

#### Dashboards
To be able to export dashboards with all their dependencies (visualizations and
searches), you have to name the dashboard with an `icingabeat-` prefix.

Export dashboards:
```shell
export ES_URL=http://127.0.0.1:9200
make export-dashboards
```

After exporting, dashboards can be packaged:

```shell
export SNAPSHOT=false
make package-dashboards
```
12  RELEASE.md
@@ -14,7 +14,14 @@ git commit -am "Update AUTHORS"

## 2. Changelog
Update [CHANGELOG.md] with all relevant information.

## 3. Build
## 3. Version
Version numbers are incremented regarding the [SemVer 1.0.0] specification.
Update the version number in the following files (a sketch follows the list):

* `version.yml`
* `vendor/github.com/elastic/beats/dev-tools/packer/version.yml`
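A hypothetical helper for the bump, assuming both files carry a plain `version:` key (verify the format before running):

``` bash
sed -i 's/^version: .*/version: "1.1.1"/' \
  version.yml \
  vendor/github.com/elastic/beats/dev-tools/packer/version.yml
```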
## 4. Build
Build packages:

``` bash
@@ -29,7 +36,7 @@ export SNAPSHOT=false
make package-dashboards
```

## 4. Git Tag
## 5. Git Tag
Commit all changes to the `master` branch

``` bash
@@ -49,6 +56,7 @@ Push tags
git push --tags
```

[SemVer 1.0.0]: http://semver.org/spec/v1.0.0.html
[CHANGELOG.md]: CHANGELOG.md
[AUTHORS]: AUTHORS
[.mailmap]: .mailmap
@@ -16,58 +16,48 @@ icingabeat:
  # Password of the user
  password: "icinga"

  # Configure SSL verification. If `false` is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # `true`.
  ssl.verify: true
  # Skip SSL verification
  skip_ssl_verify: false

  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  ########################### Icingabeat Eventstream ##########################
  #
  # Icingabeat supports capturing of an eventstream and periodical polling of the
  # Icinga status data.
  eventstream:
    #
    # Decide which events to receive from the event stream.
    # The following event stream types are available:
    #
    # * CheckResult
    # * StateChange
    # * Notification
    # * AcknowledgementSet
    # * AcknowledgementCleared
    # * CommentAdded
    # * CommentRemoved
    # * DowntimeAdded
    # * DowntimeRemoved
    # * DowntimeStarted
    # * DowntimeTriggered
    #
    # To disable eventstream, leave the types empty or comment out the option
    types:
      - CheckResult
      - StateChange

  # Decide which events to receive from the event stream.
  # The following event stream types are available:
  #
  # * CheckResult
  # * StateChange
  # * Notification
  # * AcknowledgementSet
  # * AcknowledgementCleared
  # * CommentAdded
  # * CommentRemoved
  # * DowntimeAdded
  # * DowntimeRemoved
  # * DowntimeStarted
  # * DowntimeTriggered
  #
  # To disable eventstream, leave the types empty or comment out the option
  eventstream.types:
    - CheckResult
    - StateChange

  # Event streams can be filtered by attributes using the prefix 'event.'
  #
  # Example for the CheckResult type with the exit_code set to 2:
  # filter: "event.check_result.exit_status==2"
  #
  # Example for the CheckResult type with the service matching the string
  # pattern "mysql*":
  # filter: 'match("mysql*", event.service)'
  #
  # To disable filtering set an empty string or comment out the filter option
  eventstream.filter: ""
    # Event streams can be filtered by attributes using the prefix 'event.'
    #
    # Example for the CheckResult type with the exit_code set to 2:
    # filter: "event.check_result.exit_status==2"
    #
    # Example for the CheckResult type with the service matching the string
    # pattern "mysql*":
    # filter: 'match("mysql*", event.service)'
    #
    # To disable filtering set an empty string or comment out the filter option
    filter: ""

  # Defines how fast to reconnect to the API on connection loss
  eventstream.retry_interval: 10s
    retry_interval: 10s

  ########################### Icingabeat Statuspoller #########################
  #
  # Icingabeat can collect status information about Icinga 2 periodically. Set
  # an interval at which the status API should be called. Set to 0 to disable
  # polling.
  statuspoller.interval: 60s
  statuspoller:
    # Interval at which the status API is called. Set to 0 to disable polling.
    interval: 60s
@@ -1,73 +0,0 @@
################### Icingabeat Configuration Example #########################

############################# Icingabeat ######################################

icingabeat:

  # Defines the Icinga API endpoint
  host: "localhost"

  # Defines the port of the API endpoint
  port: 5665

  # A user with sufficient permissions
  user: "icinga"

  # Password of the user
  password: "icinga"

  # Configure SSL verification. If `false` is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # `true`.
  ssl.verify: true

  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  ########################### Icingabeat Eventstream ##########################
  #
  # Icingabeat supports capturing of an eventstream and periodical polling of the
  # Icinga status data.

  # Decide which events to receive from the event stream.
  # The following event stream types are available:
  #
  # * CheckResult
  # * StateChange
  # * Notification
  # * AcknowledgementSet
  # * AcknowledgementCleared
  # * CommentAdded
  # * CommentRemoved
  # * DowntimeAdded
  # * DowntimeRemoved
  # * DowntimeStarted
  # * DowntimeTriggered
  #
  # To disable eventstream, leave the types empty or comment out the option
  eventstream.types:
    - CheckResult
    - StateChange

  # Event streams can be filtered by attributes using the prefix 'event.'
  #
  # Example for the CheckResult type with the exit_code set to 2:
  # filter: "event.check_result.exit_status==2"
  #
  # Example for the CheckResult type with the service matching the string
  # pattern "mysql*":
  # filter: 'match("mysql*", event.service)'
  #
  # To disable filtering set an empty string or comment out the filter option
  eventstream.filter: ""

  # Defines how fast to reconnect to the API on connection loss
  eventstream.retry_interval: 10s

  ########################### Icingabeat Statuspoller #########################
  #
  # Icingabeat can collect status information about Icinga 2 periodically. Set
  # an interval at which the status API should be called. Set to 0 to disable
  # polling.
  statuspoller.interval: 60s
1348  _meta/fields.yml
File diff suppressed because it is too large.
@@ -0,0 +1,13 @@
{
  "hits": 0,
  "timeRestore": false,
  "description": "",
  "title": "icingabeat-checkresults",
  "uiStateJSON": "{}",
  "panelsJSON": "[{\"size_x\":12,\"size_y\":3,\"panelIndex\":1,\"type\":\"visualization\",\"id\":\"9631be10-0977-11e7-a4dd-e96fa284b426\",\"col\":1,\"row\":1},{\"size_x\":3,\"size_y\":6,\"panelIndex\":2,\"type\":\"visualization\",\"id\":\"d50bb810-0978-11e7-a4dd-e96fa284b426\",\"col\":1,\"row\":4},{\"size_x\":4,\"size_y\":6,\"panelIndex\":3,\"type\":\"visualization\",\"id\":\"df437df0-0977-11e7-a4dd-e96fa284b426\",\"col\":4,\"row\":4},{\"size_x\":5,\"size_y\":6,\"panelIndex\":4,\"type\":\"visualization\",\"id\":\"cf643aa0-0977-11e7-a4dd-e96fa284b426\",\"col\":8,\"row\":4}]",
  "optionsJSON": "{\"darkTheme\":false}",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}}}]}"
  }
}
@@ -0,0 +1,13 @@
{
  "hits": 0,
  "timeRestore": false,
  "description": "",
  "title": "icingabeat-notifications",
  "uiStateJSON": "{}",
  "panelsJSON": "[{\"size_x\":12,\"size_y\":2,\"panelIndex\":1,\"type\":\"visualization\",\"id\":\"face9fb0-0b1a-11e7-ad60-c7e10cd34b29\",\"col\":1,\"row\":1},{\"size_x\":3,\"size_y\":2,\"panelIndex\":2,\"type\":\"visualization\",\"id\":\"e3813340-0b1a-11e7-ad60-c7e10cd34b29\",\"col\":1,\"row\":3},{\"size_x\":3,\"size_y\":2,\"panelIndex\":3,\"type\":\"visualization\",\"id\":\"cf7c1400-0b1a-11e7-ad60-c7e10cd34b29\",\"col\":4,\"row\":3},{\"size_x\":3,\"size_y\":2,\"panelIndex\":4,\"type\":\"visualization\",\"id\":\"d8a29b80-0b1a-11e7-ad60-c7e10cd34b29\",\"col\":7,\"row\":3},{\"size_x\":3,\"size_y\":2,\"panelIndex\":5,\"type\":\"visualization\",\"id\":\"6a6a66b0-0b1b-11e7-ad60-c7e10cd34b29\",\"col\":10,\"row\":3},{\"size_x\":12,\"size_y\":5,\"panelIndex\":6,\"type\":\"search\",\"id\":\"9b1ca350-0b1a-11e7-ad60-c7e10cd34b29\",\"col\":1,\"row\":5,\"columns\":[\"host\",\"service\",\"notification_type\",\"text\",\"users\"],\"sort\":[\"@timestamp\",\"desc\"]}]",
  "optionsJSON": "{\"darkTheme\":false}",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}}}]}"
  }
}
@@ -0,0 +1,13 @@
{
  "hits": 0,
  "timeRestore": false,
  "description": "",
  "title": "icingabeat-status",
  "uiStateJSON": "{}",
  "panelsJSON": "[{\"col\":1,\"id\":\"bef57c00-098c-11e7-85fa-5daf7284b188\",\"panelIndex\":1,\"row\":1,\"size_x\":4,\"size_y\":2,\"type\":\"visualization\"},{\"col\":9,\"id\":\"1b2324e0-0993-11e7-a836-f12bdc7df120\",\"panelIndex\":5,\"row\":3,\"size_x\":4,\"size_y\":2,\"type\":\"visualization\"},{\"col\":1,\"id\":\"47061440-0994-11e7-828e-2b8b7d3da4e9\",\"panelIndex\":6,\"row\":3,\"size_x\":4,\"size_y\":2,\"type\":\"visualization\"},{\"col\":5,\"id\":\"7bf45770-0994-11e7-862d-53e4526068ac\",\"panelIndex\":7,\"row\":3,\"size_x\":4,\"size_y\":2,\"type\":\"visualization\"},{\"col\":9,\"id\":\"bdd29b50-0996-11e7-862d-53e4526068ac\",\"panelIndex\":8,\"row\":5,\"size_x\":4,\"size_y\":2,\"type\":\"visualization\"},{\"col\":5,\"id\":\"dfb21700-0a63-11e7-a96b-35f342c9d63d\",\"panelIndex\":9,\"row\":5,\"size_x\":4,\"size_y\":2,\"type\":\"visualization\"},{\"col\":1,\"id\":\"4326e7c0-0a64-11e7-a96b-35f342c9d63d\",\"panelIndex\":10,\"row\":5,\"size_x\":4,\"size_y\":2,\"type\":\"visualization\"},{\"col\":5,\"id\":\"45556270-0b12-11e7-ad60-c7e10cd34b29\",\"panelIndex\":11,\"row\":1,\"size_x\":2,\"size_y\":2,\"type\":\"visualization\"},{\"col\":7,\"id\":\"e7912c50-0b11-11e7-ad60-c7e10cd34b29\",\"panelIndex\":12,\"row\":1,\"size_x\":3,\"size_y\":2,\"type\":\"visualization\"},{\"col\":10,\"id\":\"27ed7600-0b12-11e7-ad60-c7e10cd34b29\",\"panelIndex\":13,\"row\":1,\"size_x\":3,\"size_y\":2,\"type\":\"visualization\"}]",
  "optionsJSON": "{\"darkTheme\":false}",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}}}]}"
  }
}
6  _meta/kibana/index-pattern/icingabeat.json (new file)
File diff suppressed because one or more lines are too long.
@@ -0,0 +1,16 @@
{
  "sort": [
    "@timestamp",
    "desc"
  ],
  "hits": 0,
  "description": "",
  "title": "CheckResults",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"index\":\"icingabeat-*\",\"query\":{\"query_string\":{\"query\":\"type:icingabeat.event.checkresult\",\"analyze_wildcard\":true}},\"filter\":[],\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"require_field_match\":false,\"fragment_size\":2147483647}}"
  },
  "columns": [
    "_source"
  ]
}
@@ -0,0 +1,16 @@
{
  "sort": [
    "@timestamp",
    "desc"
  ],
  "hits": 0,
  "description": "",
  "title": "Statuspoller",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"index\":\"icingabeat-*\",\"filter\":[],\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"require_field_match\":false,\"fragment_size\":2147483647},\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"type:icingabeat.status*\"}}}"
  },
  "columns": [
    "_source"
  ]
}
@@ -0,0 +1,20 @@
{
  "sort": [
    "@timestamp",
    "desc"
  ],
  "hits": 0,
  "description": "",
  "title": "Notifications",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"index\":\"icingabeat-*\",\"filter\":[],\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"require_field_match\":false,\"fragment_size\":2147483647},\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"type:icingabeat.event.notification\"}}}"
  },
  "columns": [
    "host",
    "service",
    "notification_type",
    "text",
    "users"
  ]
}
@@ -0,0 +1,10 @@
{
  "visState": "{\"title\":\"Endpoints comparisson\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:status.api.num_endpoints').label(\\\"Endpoints\\\"), .es(metric='avg:status.api.num_not_conn_endpoints').label(\\\"Endpoints not connected\\\").title(\\\"Connected Endpoints\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
  "description": "",
  "title": "Endpoints comparisson",
  "uiStateJSON": "{}",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"Nodes\",\"type\":\"pie\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"status.icingaapplication.app.node_name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
  "description": "",
  "title": "Nodes",
  "uiStateJSON": "{}",
  "version": 1,
  "savedSearchId": "5570eb90-098a-11e7-9f17-cf3a85e0d1dc",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,10 @@
{
  "visState": "{\"title\":\"States of Hosts\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:status.num_hosts_up').color(#3EC8AC).label(\\\"Up\\\").title(\\\"States of Hosts\\\"),.es(metric='avg:status.num_hosts_down').color(#E94822).label(\\\"Down\\\"),.es(metric='avg:status.num_hosts_unreachable').color(#6E60A0).label(\\\"Unreachable\\\")\",\"interval\":\"1m\"},\"aggs\":[],\"listeners\":{}}",
  "description": "",
  "title": "States of Hosts",
  "uiStateJSON": "{}",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"Icinga Version\",\"type\":\"pie\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"status.icingaapplication.app.version\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"\"}}],\"listeners\":{}}",
  "description": "",
  "title": "Icinga Version",
  "uiStateJSON": "{\"vis\":{\"legendOpen\":true}}",
  "version": 1,
  "savedSearchId": "5570eb90-098a-11e7-9f17-cf3a85e0d1dc",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,10 @@
{
  "visState": "{\"title\":\"Hostchecks by time\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:status.active_host_checks_1min').color(#070F4E).label(\\\"1 min\\\").title(\\\"Amount of Hostchecks\\\"),.es(metric='avg:status.active_host_checks_5min').color(#2772DB).label(\\\"5 min\\\"),.es(metric='avg:status.active_host_checks_15min').color(#3AB1C8).label(\\\"15 min\\\")\",\"interval\":\"auto\"},\"aggs\":[],\"listeners\":{}}",
  "description": "",
  "title": "Hostchecks by time",
  "uiStateJSON": "{}",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"Notifications by User\",\"type\":\"pie\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"segment\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"users:bob\",\"analyze_wildcard\":true}}},\"label\":\"Bob\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"users:on-call\",\"analyze_wildcard\":true}}},\"label\":\"On-Call\"}]}}],\"listeners\":{}}",
  "description": "",
  "title": "Notifications by User",
  "uiStateJSON": "{}",
  "version": 1,
  "savedSearchId": "9b1ca350-0b1a-11e7-ad60-c7e10cd34b29",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,10 @@
{
  "visState": "{\"title\":\"Servicechecks by time\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:status.active_service_checks_1min').color(#070F4E).label(\\\"1 min\\\").title(\\\"Amount of Servicechecks\\\"),.es(metric='avg:status.active_service_checks_5min').color(#2772DB).label(\\\"5 min\\\"),.es(metric='avg:status.active_service_checks_15min').color(#3AB1C8).label(\\\"15 min\\\")\",\"interval\":\"1m\"},\"aggs\":[],\"listeners\":{}}",
  "description": "",
  "title": "Servicechecks by time",
  "uiStateJSON": "{}",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"CheckResults by State\",\"type\":\"histogram\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"scale\":\"linear\",\"mode\":\"stacked\",\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":false,\"setYExtents\":false,\"orderBucketsBySum\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":{\"query_string\":{\"query\":\"check_result.state:0\",\"analyze_wildcard\":true}}},\"label\":\"0: OK\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"check_result.state:1\",\"analyze_wildcard\":true}}},\"label\":\"1: Warning\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"check_result.state:2\",\"analyze_wildcard\":true}}},\"label\":\"2: Critical\"},{\"input\":{\"query\":{\"query_string\":{\"query\":\"check_result.state:3\",\"analyze_wildcard\":true}}},\"label\":\"3: Unknown\"}]}}],\"listeners\":{}}",
  "description": "",
  "title": "CheckResults by State",
  "uiStateJSON": "{\"vis\":{\"colors\":{\"0\":\"#629E51\",\"1\":\"#E5AC0E\",\"2\":\"#BF1B00\",\"Ok\":\"#508642\",\"Critical\":\"#BF1B00\",\"Warning\":\"#EAB839\",\"Unknown\":\"#962D82\",\"0: OK\":\"#629E51\",\"1: Warning\":\"#E5AC0E\",\"2: Critical\":\"#BF1B00\",\"3: Unknown\":\"#962D82\"}}}",
  "version": 1,
  "savedSearchId": "5091de50-0975-11e7-a4dd-e96fa284b426",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,10 @@
{
  "visState": "{\"title\":\"MySQL Queries\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:perfdata.idomysqlconnection_ido-mysql_queries_1min.value').color(#616EEF).label(\\\"1 min\\\").title(\\\"MySQL Queries\\\"), .es(metric='avg:perfdata.idomysqlconnection_ido-mysql_queries_5mins.value').color(#09A8FA).label(\\\"5 min\\\"), .es(metric='avg:perfdata.idomysqlconnection_ido-mysql_queries_15mins.value').color(#41C5D3).label(\\\"15 min\\\")\",\"interval\":\"1m\"},\"aggs\":[],\"listeners\":{}}",
  "description": "",
  "title": "MySQL Queries",
  "uiStateJSON": "{}",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
  }
}
@@ -0,0 +1,10 @@
{
  "visState": "{\"title\":\"Icinga Logo\",\"type\":\"markdown\",\"params\":{\"markdown\":\"\"},\"aggs\":[],\"listeners\":{}}",
  "description": "",
  "title": "Icinga Logo",
  "uiStateJSON": "{}",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"Services Tag Cloud\",\"type\":\"tagcloud\",\"params\":{\"scale\":\"linear\",\"orientation\":\"single\",\"minFontSize\":18,\"maxFontSize\":72},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"service\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Services\"}}],\"listeners\":{}}",
  "description": "",
  "title": "Services Tag Cloud",
  "uiStateJSON": "{}",
  "version": 1,
  "savedSearchId": "5091de50-0975-11e7-a4dd-e96fa284b426",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"Notification Services\",\"type\":\"pie\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"service\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
  "description": "",
  "title": "Notification Services",
  "uiStateJSON": "{}",
  "version": 1,
  "savedSearchId": "9b1ca350-0b1a-11e7-ad60-c7e10cd34b29",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"CheckResult Count\",\"type\":\"metric\",\"params\":{\"handleNoResults\":true,\"fontSize\":60},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"CheckResults received\"}}],\"listeners\":{}}",
  "description": "",
  "title": "CheckResult Count",
  "uiStateJSON": "{}",
  "version": 1,
  "savedSearchId": "5091de50-0975-11e7-a4dd-e96fa284b426",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"Notification Hosts\",\"type\":\"pie\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"host\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
  "description": "",
  "title": "Notification Hosts",
  "uiStateJSON": "{}",
  "version": 1,
  "savedSearchId": "9b1ca350-0b1a-11e7-ad60-c7e10cd34b29",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"Hosts Tag Cloud\",\"type\":\"tagcloud\",\"params\":{\"scale\":\"linear\",\"orientation\":\"single\",\"minFontSize\":18,\"maxFontSize\":72},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"host\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Hosts\"}}],\"listeners\":{}}",
  "description": "",
  "title": "Hosts Tag Cloud",
  "uiStateJSON": "{}",
  "version": 1,
  "savedSearchId": "5091de50-0975-11e7-a4dd-e96fa284b426",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,10 @@
{
  "visState": "{\"title\":\"States of Services\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:status.num_services_ok').color(#3EC8AC).label(\\\"Ok\\\").title(\\\"States of Services\\\"),.es(metric='avg:status.num_services_warning').color(#F2910A).label(\\\"Warning\\\"),.es(metric='avg:status.num_services_critical').color(#E94822).label(\\\"Critical\\\")\",\"interval\":\"1m\"},\"aggs\":[],\"listeners\":{}}",
  "description": "",
  "title": "States of Services",
  "uiStateJSON": "{}",
  "version": 1,
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"Notification Types\",\"type\":\"pie\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"notification_type\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
  "description": "",
  "title": "Notification Types",
  "uiStateJSON": "{}",
  "version": 1,
  "savedSearchId": "9b1ca350-0b1a-11e7-ad60-c7e10cd34b29",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"MySQL Schema Version\",\"type\":\"pie\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"status.idomysqlconnection.ido-mysql.version\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
  "description": "",
  "title": "MySQL Schema Version",
  "uiStateJSON": "{}",
  "version": 1,
  "savedSearchId": "5570eb90-098a-11e7-9f17-cf3a85e0d1dc",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -0,0 +1,11 @@
{
  "visState": "{\"title\":\"Notification Types\",\"type\":\"histogram\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"scale\":\"linear\",\"mode\":\"stacked\",\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":false,\"setYExtents\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"notification_type\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
  "description": "",
  "title": "Notification Types",
  "uiStateJSON": "{}",
  "version": 1,
  "savedSearchId": "9b1ca350-0b1a-11e7-ad60-c7e10cd34b29",
  "kibanaSavedObjectMeta": {
    "searchSourceJSON": "{\"filter\":[]}"
  }
}
@@ -12,9 +12,8 @@ import (

	"github.com/icinga/icingabeat/config"

	"github.com/elastic/beats/v7/libbeat/beat"
	"github.com/elastic/beats/v7/libbeat/common"
	"github.com/elastic/beats/v7/libbeat/logp"
	"github.com/elastic/beats/libbeat/common"
	"github.com/elastic/beats/libbeat/logp"
)

// Eventstream type
@@ -38,52 +37,53 @@ func NewEventstream(bt *Icingabeat, cfg config.Config) *Eventstream {
}

// BuildEventstreamEvent ...
func BuildEventstreamEvent(e []byte) beat.Event {
func BuildEventstreamEvent(e []byte) common.MapStr {

	var event beat.Event
	var event common.MapStr
	var icingaEvent map[string]interface{}

	if err := json.Unmarshal(e, &icingaEvent); err != nil {
		logp.Warn("Error decoding event: %v", err)
	}

	event.Timestamp = time.Now()
	event.Fields = common.MapStr{}
	event = common.MapStr{
		"@timestamp": common.Time(time.Now()),
	}

	for key, value := range icingaEvent {
		event.Fields.Put(target_key+key, value)
		event.Put(key, value)
	}

	logp.Debug("icingabeat", "Type: %v", icingaEvent["type"])
	switch icingaEvent["type"] {
	case "CheckResult", "StateChange", "Notification":
		checkResult := icingaEvent["check_result"].(map[string]interface{})
		event.Fields.Put(target_key+"check_result.execution_start", FloatToTimestamp(checkResult["execution_start"].(float64)))
		event.Fields.Put(target_key+"check_result.execution_end", FloatToTimestamp(checkResult["execution_end"].(float64)))
		event.Fields.Put(target_key+"check_result.schedule_start", FloatToTimestamp(checkResult["schedule_start"].(float64)))
		event.Fields.Put(target_key+"check_result.schedule_end", FloatToTimestamp(checkResult["schedule_end"].(float64)))
		event.Fields.Delete(target_key + "check_result.performance_data")
		event.Put("check_result.execution_start", FloatToTimestamp(checkResult["execution_start"].(float64)))
		event.Put("check_result.execution_end", FloatToTimestamp(checkResult["execution_end"].(float64)))
		event.Put("check_result.schedule_start", FloatToTimestamp(checkResult["schedule_start"].(float64)))
		event.Put("check_result.schedule_end", FloatToTimestamp(checkResult["schedule_end"].(float64)))
		event.Delete("check_result.performance_data")

	case "AcknowledgementSet":
		event.Delete("comment")
		event.Fields.Put(target_key+"comment.text", icingaEvent["comment"])
		event.Fields.Put(target_key+"expiry", FloatToTimestamp(icingaEvent["expiry"].(float64)))
		event.Put("comment.text", icingaEvent["comment"])
		event.Put("expiry", FloatToTimestamp(icingaEvent["expiry"].(float64)))

	case "CommentAdded", "CommentRemoved":
		comment := icingaEvent["comment"].(map[string]interface{})
		event.Fields.Put(target_key+"comment.entry_time", FloatToTimestamp(comment["entry_time"].(float64)))
		event.Fields.Put(target_key+"comment.expire_time", FloatToTimestamp(comment["expire_time"].(float64)))
		event.Put("comment.entry_time", FloatToTimestamp(comment["entry_time"].(float64)))
		event.Put("comment.expire_time", FloatToTimestamp(comment["expire_time"].(float64)))

	case "DowntimeAdded", "DowntimeRemoved", "DowntimeStarted", "DowntimeTriggered":
		downtime := icingaEvent["downtime"].(map[string]interface{})
		event.Fields.Put(target_key+"downtime.end_time", FloatToTimestamp(downtime["end_time"].(float64)))
		event.Fields.Put(target_key+"downtime.entry_time", FloatToTimestamp(downtime["entry_time"].(float64)))
		event.Fields.Put(target_key+"downtime.start_time", FloatToTimestamp(downtime["start_time"].(float64)))
		event.Fields.Put(target_key+"downtime.trigger_time", FloatToTimestamp(downtime["trigger_time"].(float64)))
		event.Put("downtime.end_time", FloatToTimestamp(downtime["end_time"].(float64)))
		event.Put("downtime.entry_time", FloatToTimestamp(downtime["entry_time"].(float64)))
		event.Put("downtime.start_time", FloatToTimestamp(downtime["start_time"].(float64)))
		event.Put("downtime.trigger_time", FloatToTimestamp(downtime["trigger_time"].(float64)))
	}

	event.Fields.Put("type", "icingabeat.event."+strings.ToLower(icingaEvent["type"].(string)))
	event.Fields.Put(target_key+"timestamp", FloatToTimestamp(icingaEvent["timestamp"].(float64)))
	event.Put("type", "icingabeat.event."+strings.ToLower(icingaEvent["type"].(string)))
	event.Put("timestamp", FloatToTimestamp(icingaEvent["timestamp"].(float64)))

	return event
}
@@ -147,7 +147,7 @@ func (es *Eventstream) Run() error {
		logp.Err("Error reading line %#v", err)
	}

	es.icingabeat.client.Publish(BuildEventstreamEvent(line))
	es.icingabeat.client.PublishEvent(BuildEventstreamEvent(line))
	logp.Debug("icingabeat.eventstream", "Event sent")
}

@@ -162,7 +162,6 @@ func (es *Eventstream) Run() error {

	select {
	case <-es.done:
		defer response.Body.Close()
		return nil
	case <-ticker.C:
	}
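To make the migration concrete: under libbeat 5.x the beat publishes `common.MapStr` documents via `publisher.Client`, while v7 wraps fields in a `beat.Event`. A minimal sketch of the v7 pattern, assuming the `github.com/elastic/beats/v7` module; the body of `FloatToTimestamp` is an assumption here, since the diff only shows its call sites:

```go
package main

import (
    "time"

    "github.com/elastic/beats/v7/libbeat/beat"
    "github.com/elastic/beats/v7/libbeat/common"
)

// FloatToTimestamp mirrors the helper used above: the Icinga 2 API reports
// times as UNIX epochs with fractional seconds (signature assumed).
func FloatToTimestamp(ts float64) common.Time {
    sec := int64(ts)
    nsec := int64((ts - float64(sec)) * 1e9)
    return common.Time(time.Unix(sec, nsec))
}

// buildEvent shows the v7 shape this diff migrates to: a beat.Event with a
// Timestamp and all payload keys namespaced under "icinga." (target_key).
func buildEvent(icingaEvent map[string]interface{}) beat.Event {
    event := beat.Event{
        Timestamp: time.Now(),
        Fields:    common.MapStr{},
    }
    for key, value := range icingaEvent {
        event.Fields.Put("icinga."+key, value)
    }
    return event
}

func main() {
    e := buildEvent(map[string]interface{}{"type": "CheckResult"})
    _ = e
    _ = FloatToTimestamp(1651315200.25)
}
```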
@@ -2,44 +2,16 @@ package beater

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"io/ioutil"
	"net/http"
	"net/url"
	"time"

	"github.com/elastic/beats/v7/libbeat/logp"
	"github.com/elastic/beats/libbeat/logp"
)

func requestURL(bt *Icingabeat, method string, URL *url.URL) (*http.Response, error) {

	var skipSslVerify bool
	certPool := x509.NewCertPool()

	if bt.config.SSL.Verify {
		skipSslVerify = false

		for _, ca := range bt.config.SSL.CertificateAuthorities {
			cert, err := ioutil.ReadFile(ca)
			if err != nil {
				logp.Warn("Could not load certificate: %v", err)
			}
			certPool.AppendCertsFromPEM(cert)
		}
	} else {
		skipSslVerify = true
	}

	tlsConfig := &tls.Config{
		InsecureSkipVerify: skipSslVerify,
		RootCAs:            certPool,
	}

	transport := &http.Transport{
		TLSClientConfig: tlsConfig,
		MaxIdleConns:    10,
		IdleConnTimeout: 30 * time.Second,
		TLSClientConfig: &tls.Config{InsecureSkipVerify: bt.config.SkipSSLVerify},
	}

	client := &http.Client{
@@ -65,10 +37,6 @@ func requestURL(bt *Icingabeat, method string, URL *url.URL) (*http.Response, er
	switch response.StatusCode {
	case 401:
		err = errors.New("Authentication failed for user " + bt.config.User)
		defer response.Body.Close()
	case 404:
		err = errors.New("404 Not Found. Missing permissions may be a reason for this.")
		defer response.Body.Close()
	}

	return response, err
@@ -3,9 +3,10 @@ package beater

import (
	"fmt"

	"github.com/elastic/beats/v7/libbeat/beat"
	"github.com/elastic/beats/v7/libbeat/common"
	"github.com/elastic/beats/v7/libbeat/logp"
	"github.com/elastic/beats/libbeat/beat"
	"github.com/elastic/beats/libbeat/common"
	"github.com/elastic/beats/libbeat/logp"
	"github.com/elastic/beats/libbeat/publisher"

	"github.com/icinga/icingabeat/config"
)
@@ -14,11 +15,9 @@ import (
type Icingabeat struct {
	done   chan struct{}
	config config.Config
	client beat.Client
	client publisher.Client
}

var target_key = "icinga."

// New beater
func New(b *beat.Beat, cfg *common.Config) (beat.Beater, error) {
	config := config.DefaultConfig
@@ -36,12 +35,7 @@ func New(b *beat.Beat, cfg *common.Config) (beat.Beater, error) {
// Run Icingabeat
func (bt *Icingabeat) Run(b *beat.Beat) error {
	logp.Info("icingabeat is running! Hit CTRL-C to stop it.")

	var err error
	bt.client, err = b.Publisher.Connect()
	if err != nil {
		return err
	}
	bt.client = b.Publisher.Connect()

	if len(bt.config.Eventstream.Types) > 0 {
		var eventstream *Eventstream
@@ -10,9 +10,8 @@ import (

	"github.com/icinga/icingabeat/config"

	"github.com/elastic/beats/v7/libbeat/beat"
	"github.com/elastic/beats/v7/libbeat/common"
	"github.com/elastic/beats/v7/libbeat/logp"
	"github.com/elastic/beats/libbeat/common"
	"github.com/elastic/beats/libbeat/logp"
)

// Statuspoller type
@@ -34,8 +33,8 @@ func NewStatuspoller(bt *Icingabeat, cfg config.Config) *Statuspoller {
}

// BuildStatusEvents ...
func BuildStatusEvents(body []byte) []beat.Event {
	var statusEvents []beat.Event
func BuildStatusEvents(body []byte) []common.MapStr {
	var statusEvents []common.MapStr
	var icingaStatus map[string]interface{}

	if err := json.Unmarshal(body, &icingaStatus); err != nil {
@@ -45,9 +44,7 @@ func BuildStatusEvents(body []byte) []beat.Event {
	for _, result := range icingaStatus {
		for _, status := range result.([]interface{}) {

			var event beat.Event
			event.Fields = common.MapStr{}
			event.Timestamp = time.Now()
			event := common.MapStr{}
			for key, value := range status.(map[string]interface{}) {

				switch key {
@@ -56,20 +53,12 @@ func BuildStatusEvents(body []byte) []beat.Event {
				switch statusvalue.(type) {
				case map[string]interface{}:
					if len(statusvalue.(map[string]interface{})) > 0 {
						for key, value := range value.(map[string]interface{}) {
							if key == "api" {
								// "zones" can include a massive amount of data, depending
								// on the number of connected agents and satellites
								// since enough data is included in other keys, we're
								// removing "zones" explicitly
								delete(value.(map[string]interface{}), "zones")
							}
						}
						event.Fields.Put(target_key+key, value)

						event.Put(key, value)
					}

				default:
					event.Fields.Put(target_key+key, value)
					event.Put(key, value)
				}

			}
@@ -83,21 +72,23 @@ func BuildStatusEvents(body []byte) []beat.Event {
				case interface{}:
					key = "perfdata." + perfdata.(map[string]interface{})["label"].(string)
					value = perfdata
					event.Fields.Put(target_key+key, value)
					event.Put(key, value)

				}
			}

			case "name":
				key = "type"
				value = "icingabeat.status." + strings.ToLower(value.(string))
				event.Fields.Put("type", value)
				event.Put(key, value)

			default:
				event.Fields.Put(target_key+key, value)
				event.Put(key, value)
			}
		}

		if statusAvailable, _ := event.Fields.HasKey(target_key + "status"); statusAvailable == true {
		if statusAvailable, _ := event.HasKey("status"); statusAvailable == true {
			event.Put("@timestamp", common.Time(time.Now()))
			statusEvents = append(statusEvents, event)
		}
	}
@@ -129,7 +120,7 @@ func (sp *Statuspoller) Run() error {
		}

		processedStatusEvents := BuildStatusEvents(body)
		sp.icingabeat.client.PublishAll(processedStatusEvents)
		sp.icingabeat.client.PublishEvents(processedStatusEvents)
		logp.Debug("icingabeat.statuspoller", "Events sent: %v", len(processedStatusEvents))

	} else {
@@ -138,11 +129,9 @@ func (sp *Statuspoller) Run() error {

		select {
		case <-sp.done:
			defer response.Body.Close()
			return nil
		case <-ticker.C:
		}

	}
}
14
cmd/root.go
14
cmd/root.go
@ -1,14 +0,0 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"github.com/icinga/icingabeat/beater"
|
||||
|
||||
cmd "github.com/elastic/beats/v7/libbeat/cmd"
|
||||
"github.com/elastic/beats/v7/libbeat/cmd/instance"
|
||||
)
|
||||
|
||||
// Name of this beat
|
||||
var Name = "icingabeat"
|
||||
|
||||
// RootCmd to handle beats cli
|
||||
var RootCmd = cmd.GenRootCmdWithSettings(beater.New, instance.Settings{Name: Name})
|
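`RootCmd` is what the beat's entry point executes. The `main.go` itself is not part of this diff; a plausible minimal sketch, assuming the usual beats scaffolding:

``` go
package main

import (
	"os"

	"github.com/icinga/icingabeat/cmd"
)

func main() {
	// Hand control to the libbeat-generated root command (run, setup, version, ...).
	if err := cmd.RootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```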
@ -3,25 +3,17 @@
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"time"
|
||||
)
|
||||
import "time"
|
||||
|
||||
// Config options
|
||||
type Config struct {
|
||||
Host string `config:"host"`
|
||||
Port int `config:"port"`
|
||||
User string `config:"user"`
|
||||
Password string `config:"password"`
|
||||
SSL SSL `config:"ssl"`
|
||||
Eventstream EventstreamConfig `config:"eventstream"`
|
||||
Statuspoller StatuspollerConfig `config:"statuspoller"`
|
||||
}
|
||||
|
||||
// SSL options
|
||||
type SSL struct {
|
||||
Verify bool `config:"verify"`
|
||||
CertificateAuthorities []string `config:"certificate_authorities"`
|
||||
Host string `config:"host"`
|
||||
Port int `config:"port"`
|
||||
User string `config:"user"`
|
||||
Password string `config:"password"`
|
||||
SkipSSLVerify bool `config:"skip_ssl_verify"`
|
||||
Eventstream EventstreamConfig `config:"eventstream"`
|
||||
Statuspoller StatuspollerConfig `config:"statuspoller"`
|
||||
}
|
||||
|
||||
// EventstreamConfig options
|
||||
|
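The `config` struct tags above map directly onto keys below `icingabeat:` in the YAML configuration. The visible change replaces the old `skip_ssl_verify` boolean with a nested `ssl` section; a sketch of the two shapes, assuming go-ucfg's usual tag-to-key mapping:

```yaml
# old (v1.x)
icingabeat:
  skip_ssl_verify: false

# new
icingabeat:
  ssl:
    verify: true
    certificate_authorities: ["/etc/pki/root/ca.pem"]
```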
@ -1,9 +0,0 @@
|
||||
dashboards:
|
||||
- id: 34e97340-e4ce-11e7-b4d1-8383451ae5a4
|
||||
file: Icingabeat-CheckResults.json
|
||||
|
||||
- id: ed031e90-e4ce-11e7-b4d1-8383451ae5a4
|
||||
file: Icingabeat-Notifications.json
|
||||
|
||||
- id: a13f1a80-e4cf-11e7-b4d1-8383451ae5a4
|
||||
file: Icingabeat-Status.json
|
@ -1 +0,0 @@
|
||||
{"uuid":"0409fabd-956a-4000-9090-22c9c0b438af","first_start":"2022-05-31T13:14:26.86643+02:00"}
|
@ -1,38 +0,0 @@
|
||||
# Icingabeat
|
||||
|
||||
Icingabeat is an [Elastic Beat](https://www.elastic.co/products/beats) that
|
||||
fetches data from the Icinga 2 API and sends it either directly to Elasticsearch
|
||||
or Logstash.
|
||||
|
||||
> The Beats are lightweight data shippers, written in Go, that you install on
|
||||
> your servers to capture all sorts of operational data (think of logs,
|
||||
> metrics, or network packet data). The Beats send the operational data to
|
||||
> Elasticsearch, either directly or via Logstash, so it can be visualized with
|
||||
> Kibana.
|
||||
|
||||
 | 
|
||||
-------------------------------------------------|-------------------------------------
|
||||
|
||||
## Eventstream
|
||||
|
||||
Icingabeat listens to an eventstream published by the Icinga 2 API. This stream
|
||||
includes detailed information about events, such as check results, notifications,
|
||||
downtimes, acknowledgements and many other event types. There is no polling
|
||||
involved in this mode. The configuration section describes how to limit the
|
||||
amount of data you receive by setting types and filters.
|
||||
|
||||
Example use cases:
|
||||
* Correlate monitoring data with logging information
|
||||
* Retrace notifications sent by Icinga 2
|
||||
* Find bottlenecks in execution time and latency of service checks
|
||||
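Under the hood this uses the Icinga 2 `/v1/events` API endpoint. For a feel of the raw stream, it can also be subscribed to manually; a hedged example (queue name and credentials are placeholders):

``` shell
curl -k -s -u icinga:icinga -H 'Accept: application/json' \
  -X POST 'https://localhost:5665/v1/events?queue=icingabeat-demo&types=CheckResult'
```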
|
||||
## Statuspoller
|
||||
|
||||
The Icinga 2 API exports a lot of information about the state of the Icinga 2
|
||||
daemon. Icingabeat can poll this information periodically.
|
||||
|
||||
Example use cases:
|
||||
* Visualize metrics of the Icinga 2 daemon
|
||||
* Get insights into how each Icinga 2 feature performs
|
||||
* Information about zones and endpoints
|
||||
* Compare Icinga servers with each other
|
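The data comes from the Icinga 2 `/v1/status` endpoint; the same information can be fetched manually for a quick look (credentials are placeholders):

``` shell
curl -k -s -u icinga:icinga -H 'Accept: application/json' \
  'https://localhost:5665/v1/status'
```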
@ -1,96 +0,0 @@
|
||||
# Installation
|
||||
|
||||
## Packages
|
||||
Packages are available on [packages.icinga.com](https://packages.icinga.com).
|
||||
|
||||
Depending on your distribution and version you need to run one of the following
|
||||
commands:
|
||||
|
||||
#### Debian
|
||||
|
||||
``` shell
|
||||
wget -O - https://packages.icinga.com/icinga.key | apt-key add -
|
||||
echo 'deb http://packages.icinga.com/debian icinga-stretch main' > /etc/apt/sources.list.d/icinga.list
|
||||
```
|
||||
|
||||
``` shell
|
||||
apt-get update
|
||||
apt-get install icingabeat
|
||||
```
|
||||
|
||||
#### Ubuntu
|
||||
|
||||
``` shell
|
||||
wget -O - https://packages.icinga.com/icinga.key | apt-key add -
|
||||
echo 'deb http://packages.icinga.com/ubuntu icinga-xenial main' > /etc/apt/sources.list.d/icinga.list
|
||||
```
|
||||
|
||||
``` shell
|
||||
apt-get update
|
||||
apt-get install icingabeat
|
||||
```
|
||||
|
||||
#### CentOS
|
||||
|
||||
``` shell
|
||||
yum install epel-release
|
||||
rpm --import https://packages.icinga.com/icinga.key
|
||||
yum install https://packages.icinga.com/epel/icinga-rpm-release-7-latest.noarch.rpm
|
||||
```
|
||||
|
||||
``` shell
|
||||
yum install icingabeat
|
||||
```
|
||||
|
||||
#### RHEL
|
||||
|
||||
``` shell
|
||||
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
|
||||
rpm --import https://packages.icinga.com/icinga.key
|
||||
yum install https://packages.icinga.com/epel/icinga-rpm-release-7-latest.noarch.rpm
|
||||
```
|
||||
|
||||
``` shell
|
||||
yum install icingabeat
|
||||
```
|
||||
|
||||
#### SLES
|
||||
|
||||
``` shell
|
||||
rpm --import https://packages.icinga.com/icinga.key
|
||||
zypper ar https://packages.icinga.com/SUSE/ICINGA-release.repo
|
||||
zypper ref
|
||||
```
|
||||
|
||||
``` shell
|
||||
zypper install icingabeat
|
||||
```
|
||||
|
||||
### Run
|
||||
Make sure you have configured Icingabeat properly before starting it. Use one
|
||||
of the following commands to start Icingabeat:
|
||||
|
||||
* `service icingabeat start` or
|
||||
* `systemctl start icingabeat` or
|
||||
* `/etc/init.d/icingabeat start`
|
||||
|
||||
## Dashboards
|
||||
We have dashboards prepared that you can use when getting started with
|
||||
Icingabeat. They are meant to give you some inspiration before you start
|
||||
exploring the data by yourself.
|
||||
|
||||
Starting with icingabeat v7.17.4, you have to download and import the dashboards manually.
|
||||
|
||||
Download and unpack `dashboards.zip` from the [latest release](https://github.com/Icinga/icingabeat/releases/latest) page.
|
||||
|
||||
Use Kibana's Import functionality to upload the `*.ndjson` files, which will
|
||||
import a bunch of saved objects, including dashboards and individual visualizations.
|
||||
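If you prefer the command line over the Kibana UI, the same import can be done via Kibana's saved objects API; a sketch for Kibana 7.x (the file name is a placeholder):

``` shell
curl -X POST 'http://localhost:5601/api/saved_objects/_import' \
  -H 'kbn-xsrf: true' \
  --form file=@icingabeat-dashboards.ndjson
```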
|
||||
## Manual Installation
|
||||
|
||||
Download and install a package or tarball from the
|
||||
[latest release](https://github.com/Icinga/icingabeat/releases/latest) page.
|
||||
|
||||
## Development
|
||||
Please follow [README.md](https://github.com/icinga/icingabeat/README.md) for
|
||||
instructions on how to build icingabeat.
|
@ -1,109 +0,0 @@
|
||||
# Configuration
|
||||
Configuration of Icingabeat is split into 3 sections: Connection, Eventstream and
|
||||
Statuspoller. On Linux, configuration files are located at `/etc/icingabeat`.
|
||||
|
||||
## Connection
|
||||
Settings in this section apply to both modes.
|
||||
|
||||
### `host`
|
||||
Hostname of the Icinga 2 API. This can be either an IP address or a domain name.
|
||||
Defaults to `localhost`
|
||||
|
||||
### `port`
|
||||
Defaults to `5665`
|
||||
|
||||
### `user`
|
||||
Username to be used for the API connection. You need to create this user in your Icinga 2 configuration. Make sure that it has sufficient permissions to read the
|
||||
data you want to collect.
|
||||
|
||||
Here is an example of an API user in your Icinga 2 configuration:
|
||||
|
||||
``` c++
|
||||
object ApiUser "icinga" {
|
||||
password = "icinga"
|
||||
permissions = ["events/*", "status/query"]
|
||||
}
|
||||
```
|
||||
|
||||
Learn more about the `ApiUser` and its permissions in the
|
||||
[Icinga 2 docs](https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/icinga2-api#icinga2-api-permissions).
|
||||
|
||||
### `password`
|
||||
Defaults to `icinga`
|
||||
|
||||
|
||||
### `ssl.verify`
|
||||
Configure SSL verification. If `false` is configured, all server hosts and
|
||||
certificates will be accepted. In this mode, SSL based connections are
|
||||
susceptible to man-in-the-middle attacks. Use only for testing. Default is
|
||||
`true`.
|
||||
|
||||
### `ssl.certificate_authorities`
|
||||
List of root certificates for HTTPS server verification.
|
||||
|
||||
Example:
|
||||
```
|
||||
ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
|
||||
```
|
||||
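Putting the connection settings together, a minimal `icingabeat` section in `icingabeat.yml` could look like this (values mirror the defaults and examples above):

```yaml
icingabeat:
  host: "localhost"
  port: 5665
  user: "icinga"
  password: "icinga"
  ssl.verify: true
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
```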
|
||||
## Eventstream
|
||||
Settings in this section apply to the eventstream mode. To disable the
|
||||
eventstream completely, comment out the section.
|
||||
|
||||
### `eventstream.types`
|
||||
You can select which particular Icinga 2 events you want to receive and store.
|
||||
The following types are available; you must set at least one:
|
||||
|
||||
* `CheckResult`
|
||||
* `StateChange`
|
||||
* `Notification`
|
||||
* `AcknowledgementSet`
|
||||
* `AcknowledgementCleared`
|
||||
* `CommentAdded`
|
||||
* `CommentRemoved`
|
||||
* `DowntimeAdded`
|
||||
* `DowntimeRemoved`
|
||||
* `DowntimeStarted`
|
||||
* `DowntimeTriggered`
|
||||
|
||||
To set multiple types, do the following:
|
||||
|
||||
```yaml
|
||||
eventstream.types:
|
||||
- CheckResult
|
||||
- StateChange
|
||||
- Notification
|
||||
- AcknowledgementSet
|
||||
- AcknowledgementCleared
|
||||
```
|
||||
|
||||
### `eventstream.filter`
|
||||
In addition to selecting the types of events, you can filter them by
|
||||
attributes using the prefix `event.`. By default, no filter is set.
|
||||
|
||||
###### Examples
|
||||
|
||||
Only check results with the exit code 2:
|
||||
```yaml
|
||||
eventstream.filter: "event.check_result.exit_status==2"
|
||||
```
|
||||
|
||||
Only check results of services that match `mysql*`:
|
||||
```yaml
|
||||
eventstream.filter: 'match("mysql*", event.service)'
|
||||
```
|
||||
|
||||
### `eventstream.retry_interval`
|
||||
On connection loss, Icingabeat will try to reconnect to the API periodically.
|
||||
This setting defines the interval for connection retries. Defaults to `10s`
|
||||
|
||||
## Statuspoller
|
||||
Settings in this section apply to the statuspoller mode.
|
||||
|
||||
### `statuspoller.interval`
|
||||
Interval at which the status API is called. Set to `0` to disable polling.
|
||||
Defaults to `60s`
|
||||
|
||||
## Fields
|
||||
Icingabeat exports a bunch of fields. Have a look at the
|
||||
[fields.asciidoc](https://github.com/Icinga/icingabeat/blob/master/docs/fields.asciidoc) for details.
|
15794
docs/fields.asciidoc
15794
docs/fields.asciidoc
File diff suppressed because it is too large
Load Diff
10937
fields.yml
10937
fields.yml
File diff suppressed because it is too large
Load Diff
155
go.mod
155
go.mod
@ -1,155 +0,0 @@
|
||||
module github.com/icinga/icingabeat
|
||||
|
||||
go 1.17
|
||||
|
||||
require (
|
||||
github.com/tsg/go-daemon v0.0.0-20200207173439-e704b93fd89b
|
||||
github.com/blakesmith/ar v0.0.0-20150311145944-8bd4349a67f2
|
||||
github.com/Microsoft/go-winio v0.5.1 // indirect
|
||||
github.com/Shopify/sarama v0.0.0-00010101000000-000000000000 // indirect
|
||||
github.com/StackExchange/wmi v0.0.0-20170221213301-9f32b5905fd6 // indirect
|
||||
github.com/akavel/rsrc v0.8.0 // indirect
|
||||
github.com/cespare/xxhash/v2 v2.1.1 // indirect
|
||||
github.com/dlclark/regexp2 v1.1.7-0.20171009020623-7632a260cbaf // indirect
|
||||
github.com/docker/docker v1.4.2-0.20190924003213-a8608b5b67c7 // indirect
|
||||
github.com/docker/go-connections v0.4.0 // indirect
|
||||
github.com/docker/go-units v0.4.0 // indirect
|
||||
github.com/dop251/goja v0.0.0-20200831102558-9af81ddcf0e1 // indirect
|
||||
github.com/dop251/goja_nodejs v0.0.0-20171011081505-adff31b136e6 // indirect
|
||||
github.com/dustin/go-humanize v1.0.0 // indirect
|
||||
github.com/eapache/go-resiliency v1.2.0 // indirect
|
||||
github.com/elastic/ecs v1.12.0 // indirect
|
||||
github.com/elastic/elastic-agent-client/v7 v7.0.0-20210727140539-f0905d9377f6 // indirect
|
||||
github.com/elastic/go-concert v0.2.0 // indirect
|
||||
github.com/elastic/go-lumber v0.1.0 // indirect
|
||||
github.com/elastic/go-seccomp-bpf v1.2.0 // indirect
|
||||
github.com/elastic/go-structform v0.0.9 // indirect
|
||||
github.com/elastic/go-sysinfo v1.7.1 // indirect
|
||||
github.com/elastic/go-txfile v0.0.7 // indirect
|
||||
github.com/elastic/go-ucfg v0.8.3 // indirect
|
||||
github.com/elastic/go-windows v1.0.1 // indirect
|
||||
github.com/elastic/gosigar v0.14.2 // indirect
|
||||
github.com/fatih/color v1.9.0 // indirect
|
||||
github.com/go-ole/go-ole v1.2.5-0.20190920104607-14974a1cf647 // indirect
|
||||
github.com/go-sourcemap/sourcemap v2.1.2+incompatible // indirect
|
||||
github.com/gofrs/flock v0.7.2-0.20190320160742-5135e617513b // indirect
|
||||
github.com/gofrs/uuid v3.3.0+incompatible // indirect
|
||||
github.com/gogo/protobuf v1.3.2 // indirect
|
||||
github.com/golang/protobuf v1.5.2 // indirect
|
||||
github.com/golang/snappy v0.0.4 // indirect
|
||||
github.com/gomodule/redigo v1.8.3 // indirect
|
||||
github.com/google/go-cmp v0.5.6 // indirect
|
||||
github.com/h2non/filetype v1.1.1 // indirect
|
||||
github.com/hashicorp/go-multierror v1.1.0 // indirect
|
||||
github.com/hashicorp/golang-lru v0.5.4 // indirect
|
||||
github.com/joeshaw/multierror v0.0.0-20140124173710-69b34d4ec901 // indirect
|
||||
github.com/jonboulle/clockwork v0.2.2 // indirect
|
||||
github.com/josephspurrier/goversioninfo v0.0.0-20190209210621-63e6d1acd3dd // indirect
|
||||
github.com/magefile/mage v1.11.0
|
||||
github.com/mattn/go-colorable v0.1.6 // indirect
|
||||
github.com/miekg/dns v1.1.25 // indirect
|
||||
github.com/mitchellh/hashstructure v0.0.0-20170116052023-ab25296c0f51 // indirect
|
||||
github.com/opencontainers/go-digest v1.0.0 // indirect
|
||||
github.com/opencontainers/image-spec v1.0.2-0.20190823105129-775207bd45b6 // indirect
|
||||
github.com/pkg/errors v0.9.1 // indirect
|
||||
github.com/prometheus/procfs v0.6.0 // indirect
|
||||
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect
|
||||
github.com/shirou/gopsutil v3.20.12+incompatible // indirect
|
||||
github.com/spf13/cobra v1.0.0 // indirect
|
||||
github.com/spf13/pflag v1.0.5 // indirect
|
||||
github.com/urso/sderr v0.0.0-20210525210834-52b04e8f5c71 // indirect
|
||||
github.com/xdg/scram v1.0.3 // indirect
|
||||
go.elastic.co/apm v1.11.0 // indirect
|
||||
go.elastic.co/apm/module/apmelasticsearch v1.7.2 // indirect
|
||||
go.elastic.co/apm/module/apmhttp v1.7.2 // indirect
|
||||
go.elastic.co/ecszap v0.3.0 // indirect
|
||||
go.uber.org/atomic v1.5.0 // indirect
|
||||
go.uber.org/multierr v1.3.0 // indirect
|
||||
go.uber.org/zap v1.14.0 // indirect
|
||||
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e // indirect
|
||||
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 // indirect
|
||||
golang.org/x/net v0.0.0-20211020060615-d418f374d309 // indirect
|
||||
golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1 // indirect
|
||||
golang.org/x/sys v0.0.0-20211102192858-4dd72447c267 // indirect
|
||||
golang.org/x/text v0.3.7 // indirect
|
||||
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
|
||||
golang.org/x/tools v0.1.7 // indirect
|
||||
google.golang.org/genproto v0.0.0-20211021150943-2b146023228c // indirect
|
||||
google.golang.org/grpc v1.41.0 // indirect
|
||||
google.golang.org/protobuf v1.27.1 // indirect
|
||||
gopkg.in/inf.v0 v0.9.1 // indirect
|
||||
gopkg.in/jcmturner/aescts.v1 v1.0.1 // indirect
|
||||
gopkg.in/jcmturner/dnsutils.v1 v1.0.1 // indirect
|
||||
gopkg.in/jcmturner/goidentity.v3 v3.0.0 // indirect
|
||||
gopkg.in/jcmturner/gokrb5.v7 v7.5.0 // indirect
|
||||
gopkg.in/jcmturner/rpc.v1 v1.1.0 // indirect
|
||||
gopkg.in/yaml.v2 v2.4.0 // indirect
|
||||
howett.net/plist v0.0.0-20181124034731-591f970eefbb // indirect
|
||||
k8s.io/api v0.21.1 // indirect
|
||||
k8s.io/apimachinery v0.21.1 // indirect
|
||||
k8s.io/client-go v0.21.1 // indirect
|
||||
)
|
||||
|
||||
require github.com/elastic/beats/v7 v7.17.4
|
||||
|
||||
require (
|
||||
github.com/BurntSushi/toml v0.3.1 // indirect
|
||||
github.com/armon/go-radix v1.0.0 // indirect
|
||||
github.com/containerd/containerd v1.5.7 // indirect
|
||||
github.com/davecgh/go-spew v1.1.1 // indirect
|
||||
github.com/docker/distribution v2.8.0+incompatible // indirect
|
||||
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 // indirect
|
||||
github.com/eapache/queue v1.1.0 // indirect
|
||||
github.com/go-logr/logr v0.4.0 // indirect
|
||||
github.com/google/gofuzz v1.1.0 // indirect
|
||||
github.com/googleapis/gnostic v0.4.1 // indirect
|
||||
github.com/hashicorp/errwrap v1.0.0 // indirect
|
||||
github.com/hashicorp/go-uuid v1.0.2 // indirect
|
||||
github.com/imdario/mergo v0.3.12 // indirect
|
||||
github.com/inconshreveable/mousetrap v1.0.0 // indirect
|
||||
github.com/jcmturner/aescts/v2 v2.0.0 // indirect
|
||||
github.com/jcmturner/dnsutils/v2 v2.0.0 // indirect
|
||||
github.com/jcmturner/gofork v1.0.0 // indirect
|
||||
github.com/jcmturner/gokrb5/v8 v8.4.2 // indirect
|
||||
github.com/jcmturner/rpc/v2 v2.0.3 // indirect
|
||||
github.com/json-iterator/go v1.1.11 // indirect
|
||||
github.com/klauspost/compress v1.13.6 // indirect
|
||||
github.com/mattn/go-isatty v0.0.12 // indirect
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
|
||||
github.com/modern-go/reflect2 v1.0.1 // indirect
|
||||
github.com/pierrec/lz4 v2.6.0+incompatible // indirect
|
||||
github.com/santhosh-tekuri/jsonschema v1.2.4 // indirect
|
||||
github.com/sirupsen/logrus v1.8.1 // indirect
|
||||
github.com/urso/diag v0.0.0-20200210123136-21b3cc8eb797 // indirect
|
||||
github.com/urso/go-bin v0.0.0-20180220135811-781c575c9f0e // indirect
|
||||
github.com/urso/magetools v0.0.0-20190919040553-290c89e0c230 // indirect
|
||||
github.com/xdg/stringprep v1.0.3 // indirect
|
||||
go.elastic.co/fastjson v1.1.0 // indirect
|
||||
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee // indirect
|
||||
golang.org/x/mod v0.4.2 // indirect
|
||||
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d // indirect
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
|
||||
google.golang.org/appengine v1.6.7 // indirect
|
||||
honnef.co/go/tools v0.0.1-2020.1.4 // indirect
|
||||
k8s.io/klog/v2 v2.8.0 // indirect
|
||||
k8s.io/utils v0.0.0-20201110183641-67b214c5f920 // indirect
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.1.0 // indirect
|
||||
sigs.k8s.io/yaml v1.2.0 // indirect
|
||||
)
|
||||
|
||||
replace (
|
||||
github.com/Microsoft/go-winio => github.com/bi-zone/go-winio v0.4.15
|
||||
github.com/Shopify/sarama => github.com/elastic/sarama v1.19.1-0.20210823122811-11c3ef800752
|
||||
github.com/apoydence/eachers => github.com/poy/eachers v0.0.0-20181020210610-23942921fe77 //indirect, see https://github.com/elastic/beats/pull/29780 for details.
|
||||
github.com/cucumber/godog => github.com/cucumber/godog v0.8.1
|
||||
github.com/docker/docker => github.com/docker/engine v0.0.0-20191113042239-ea84732a7725
|
||||
github.com/docker/go-plugins-helpers => github.com/elastic/go-plugins-helpers v0.0.0-20200207104224-bdf17607b79f
|
||||
github.com/dop251/goja => github.com/andrewkroh/goja v0.0.0-20190128172624-dd2ac4456e20
|
||||
github.com/dop251/goja_nodejs => github.com/dop251/goja_nodejs v0.0.0-20171011081505-adff31b136e6
|
||||
github.com/fsnotify/fsevents => github.com/elastic/fsevents v0.0.0-20181029231046-e1d381a4d270
|
||||
github.com/fsnotify/fsnotify => github.com/adriansr/fsnotify v1.4.8-0.20211018144411-a81f2b630e7c
|
||||
github.com/golang/glog => github.com/elastic/glog v1.0.1-0.20210831205241-7d8b5c89dfc4
|
||||
github.com/google/gopacket => github.com/adriansr/gopacket v1.1.18-0.20200327165309-dd62abfa8a41
|
||||
github.com/insomniacslk/dhcp => github.com/elastic/dhcp v0.0.0-20200227161230-57ec251c7eb3 // indirect
|
||||
github.com/tonistiigi/fifo => github.com/containerd/fifo v0.0.0-20190816180239-bda0ff6ed73c
|
||||
)
|
@ -1,9 +0,0 @@
|
||||
|
||||
processors:
|
||||
- add_cloud_metadata: ~
|
||||
- add_docker_metadata: ~
|
||||
|
||||
output.elasticsearch:
|
||||
hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
|
||||
username: '${ELASTICSEARCH_USERNAME:}'
|
||||
password: '${ELASTICSEARCH_PASSWORD:}'
|
695
icingabeat.full.yml
Normal file
695
icingabeat.full.yml
Normal file
@ -0,0 +1,695 @@
|
||||
################### Icingabeat Configuration Example #########################
|
||||
|
||||
############################# Icingabeat ######################################
|
||||
|
||||
icingabeat:
|
||||
|
||||
# Defines the Icinga API endpoint
|
||||
host: "localhost"
|
||||
|
||||
# Defines the port of the API endpoint
|
||||
port: 5665
|
||||
|
||||
# A user with sufficient permissions
|
||||
user: "icinga"
|
||||
|
||||
# Password of the user
|
||||
password: "icinga"
|
||||
|
||||
# Skip SSL verification
|
||||
skip_ssl_verify: false
|
||||
|
||||
# Icingabeat supports capturing of an eventstream and periodic polling of the
|
||||
# Icinga status data.
|
||||
eventstream:
|
||||
#
|
||||
# Decide which events to receive from the event stream.
|
||||
# The following event stream types are available:
|
||||
#
|
||||
# * CheckResult
|
||||
# * StateChange
|
||||
# * Notification
|
||||
# * AcknowledgementSet
|
||||
# * AcknowledgementCleared
|
||||
# * CommentAdded
|
||||
# * CommentRemoved
|
||||
# * DowntimeAdded
|
||||
# * DowntimeRemoved
|
||||
# * DowntimeStarted
|
||||
# * DowntimeTriggered
|
||||
#
|
||||
# To disable eventstream, leave the types empty or comment out the option
|
||||
types:
|
||||
- CheckResult
|
||||
- StateChange
|
||||
|
||||
# Event streams can be filtered by attributes using the prefix 'event.'
|
||||
#
|
||||
# Example for the CheckResult type with the exit_code set to 2:
|
||||
# filter: "event.check_result.exit_status==2"
|
||||
#
|
||||
# Example for the CheckResult type with the service matching the string
|
||||
# pattern "mysql*":
|
||||
# filter: 'match("mysql*", event.service)'
|
||||
#
|
||||
# To disable filtering set an empty string or comment out the filter option
|
||||
filter: ""
|
||||
|
||||
# Defines how fast to reconnect to the API on connection loss
|
||||
retry_interval: 10s
|
||||
|
||||
statuspoller:
|
||||
# Interval at which the status API is called. Set to 0 to disable polling.
|
||||
interval: 60s
|
||||
|
||||
#================================ General ======================================
|
||||
|
||||
# The name of the shipper that publishes the network data. It can be used to group
|
||||
# all the transactions sent by a single shipper in the web interface.
|
||||
# If this option is not defined, the hostname is used.
|
||||
#name:
|
||||
|
||||
# The tags of the shipper are included in their own field with each
|
||||
# transaction published. Tags make it easy to group servers by different
|
||||
# logical properties.
|
||||
#tags: ["service-X", "web-tier"]
|
||||
|
||||
# Optional fields that you can specify to add additional information to the
|
||||
# output. Fields can be scalar values, arrays, dictionaries, or any nested
|
||||
# combination of these.
|
||||
#fields:
|
||||
# env: staging
|
||||
|
||||
# If this option is set to true, the custom fields are stored as top-level
|
||||
# fields in the output document instead of being grouped under a fields
|
||||
# sub-dictionary. Default is false.
|
||||
#fields_under_root: false
|
||||
|
||||
# Internal queue size for single events in processing pipeline
|
||||
#queue_size: 1000
|
||||
|
||||
# The internal queue size for bulk events in the processing pipeline.
|
||||
# Do not modify this value.
|
||||
#bulk_queue_size: 0
|
||||
|
||||
# Sets the maximum number of CPUs that can be executing simultaneously. The
|
||||
# default is the number of logical CPUs available in the system.
|
||||
#max_procs:
|
||||
|
||||
#================================ Processors ===================================
|
||||
|
||||
# Processors are used to reduce the number of fields in the exported event or to
|
||||
# enhance the event with external metadata. This section defines a list of
|
||||
# processors that are applied one by one and the first one receives the initial
|
||||
# event:
|
||||
#
|
||||
# event -> filter1 -> event1 -> filter2 ->event2 ...
|
||||
#
|
||||
# The supported processors are drop_fields, drop_event, include_fields, and
|
||||
# add_cloud_metadata.
|
||||
#
|
||||
# For example, you can use the following processors to keep the fields that
|
||||
# contain CPU load percentages, but remove the fields that contain CPU ticks
|
||||
# values:
|
||||
#
|
||||
#processors:
|
||||
#- include_fields:
|
||||
# fields: ["cpu"]
|
||||
#- drop_fields:
|
||||
# fields: ["cpu.user", "cpu.system"]
|
||||
#
|
||||
# The following example drops the events that have the HTTP response code 200:
|
||||
#
|
||||
#processors:
|
||||
#- drop_event:
|
||||
# when:
|
||||
# equals:
|
||||
# http.code: 200
|
||||
#
|
||||
# The following example enriches each event with metadata from the cloud
|
||||
# provider about the host machine. It works on EC2, GCE, and DigitalOcean.
|
||||
#
|
||||
#processors:
|
||||
#- add_cloud_metadata:
|
||||
#
|
||||
|
||||
#================================ Outputs ======================================
|
||||
|
||||
# Configure what outputs to use when sending the data collected by the beat.
|
||||
# Multiple outputs may be used.
|
||||
|
||||
#-------------------------- Elasticsearch output -------------------------------
|
||||
output.elasticsearch:
|
||||
# Boolean flag to enable or disable the output module.
|
||||
#enabled: true
|
||||
|
||||
# Array of hosts to connect to.
|
||||
# Scheme and port can be left out and will be set to the default (http and 9200)
|
||||
# In case you specify an additional path, the scheme is required: http://localhost:9200/path
|
||||
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
|
||||
hosts: ["localhost:9200"]
|
||||
|
||||
# Set gzip compression level.
|
||||
#compression_level: 0
|
||||
|
||||
# Optional protocol and basic auth credentials.
|
||||
#protocol: "https"
|
||||
#username: "elastic"
|
||||
#password: "changeme"
|
||||
|
||||
# Dictionary of HTTP parameters to pass within the url with index operations.
|
||||
#parameters:
|
||||
#param1: value1
|
||||
#param2: value2
|
||||
|
||||
# Number of workers per Elasticsearch host.
|
||||
#worker: 1
|
||||
|
||||
# Optional index name. The default is "icingabeat" plus date
|
||||
# and generates [icingabeat-]YYYY.MM.DD keys.
|
||||
#index: "icingabeat-%{+yyyy.MM.dd}"
|
||||
|
||||
# Optional ingest node pipeline. By default no pipeline will be used.
|
||||
#pipeline: ""
|
||||
|
||||
# Optional HTTP Path
|
||||
#path: "/elasticsearch"
|
||||
|
||||
# Custom HTTP headers to add to each request
|
||||
#headers:
|
||||
# X-My-Header: Contents of the header
|
||||
|
||||
# Proxy server url
|
||||
#proxy_url: http://proxy:3128
|
||||
|
||||
# The number of times a particular Elasticsearch index operation is attempted. If
|
||||
# the indexing operation doesn't succeed after this many retries, the events are
|
||||
# dropped. The default is 3.
|
||||
#max_retries: 3
|
||||
|
||||
# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
|
||||
# The default is 50.
|
||||
#bulk_max_size: 50
|
||||
|
||||
# Configure the HTTP request timeout before failing a request to Elasticsearch.
|
||||
#timeout: 90
|
||||
|
||||
# The number of seconds to wait for new events between two bulk API index requests.
|
||||
# If `bulk_max_size` is reached before this interval expires, additional bulk index
|
||||
# requests are made.
|
||||
#flush_interval: 1s
|
||||
|
||||
# A template is used to set the mapping in Elasticsearch
|
||||
# By default template loading is enabled and the template is loaded.
|
||||
# These settings can be adjusted to load your own template or overwrite existing ones.
|
||||
|
||||
# Set to false to disable template loading.
|
||||
#template.enabled: true
|
||||
|
||||
# Template name. By default the template name is icingabeat.
|
||||
#template.name: "icingabeat"
|
||||
|
||||
# Path to template file
|
||||
#template.path: "${path.config}/icingabeat.template.json"
|
||||
|
||||
# Overwrite existing template
|
||||
#template.overwrite: false
|
||||
|
||||
# If set to true, icingabeat checks the Elasticsearch version at connect time, and if it
|
||||
# is 2.x, it loads the file specified by the template.versions.2x.path setting. The
|
||||
# default is true.
|
||||
#template.versions.2x.enabled: true
|
||||
|
||||
# Path to the Elasticsearch 2.x version of the template file.
|
||||
#template.versions.2x.path: "${path.config}/icingabeat.template-es2x.json"
|
||||
|
||||
# Use SSL settings for HTTPS. Default is true.
|
||||
#ssl.enabled: true
|
||||
|
||||
# Configure SSL verification mode. If `none` is configured, all server hosts
|
||||
# and certificates will be accepted. In this mode, SSL based connections are
|
||||
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
|
||||
# `full`.
|
||||
#ssl.verification_mode: full
|
||||
|
||||
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
|
||||
# 1.2 are enabled.
|
||||
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
|
||||
|
||||
# SSL configuration. By default is off.
|
||||
# List of root certificates for HTTPS server verifications
|
||||
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
|
||||
|
||||
# Certificate for SSL client authentication
|
||||
#ssl.certificate: "/etc/pki/client/cert.pem"
|
||||
|
||||
# Client Certificate Key
|
||||
#ssl.key: "/etc/pki/client/cert.key"
|
||||
|
||||
# Optional passphrase for decrypting the Certificate Key.
|
||||
#ssl.key_passphrase: ''
|
||||
|
||||
# Configure cipher suites to be used for SSL connections
|
||||
#ssl.cipher_suites: []
|
||||
|
||||
# Configure curve types for ECDHE based cipher suites
|
||||
#ssl.curve_types: []
|
||||
|
||||
|
||||
#----------------------------- Logstash output ---------------------------------
|
||||
#output.logstash:
|
||||
# Boolean flag to enable or disable the output module.
|
||||
#enabled: true
|
||||
|
||||
# The Logstash hosts
|
||||
#hosts: ["localhost:5044"]
|
||||
|
||||
# Number of workers per Logstash host.
|
||||
#worker: 1
|
||||
|
||||
# Set gzip compression level.
|
||||
#compression_level: 3
|
||||
|
||||
# Optional load balance the events between the Logstash hosts
|
||||
#loadbalance: true
|
||||
|
||||
# Number of batches to be sent asynchronously to Logstash while processing
|
||||
# new batches.
|
||||
#pipelining: 0
|
||||
|
||||
# Optional index name. The default index name is set to name of the beat
|
||||
# in all lowercase.
|
||||
#index: 'icingabeat'
|
||||
|
||||
# SOCKS5 proxy server URL
|
||||
#proxy_url: socks5://user:password@socks5-server:2233
|
||||
|
||||
# Resolve names locally when using a proxy server. Defaults to false.
|
||||
#proxy_use_local_resolver: false
|
||||
|
||||
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
|
||||
#ssl.enabled: true
|
||||
|
||||
# Configure SSL verification mode. If `none` is configured, all server hosts
|
||||
# and certificates will be accepted. In this mode, SSL based connections are
|
||||
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
|
||||
# `full`.
|
||||
#ssl.verification_mode: full
|
||||
|
||||
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
|
||||
# 1.2 are enabled.
|
||||
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
|
||||
|
||||
# Optional SSL configuration options. SSL is off by default.
|
||||
# List of root certificates for HTTPS server verifications
|
||||
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
|
||||
|
||||
# Certificate for SSL client authentication
|
||||
#ssl.certificate: "/etc/pki/client/cert.pem"
|
||||
|
||||
# Client Certificate Key
|
||||
#ssl.key: "/etc/pki/client/cert.key"
|
||||
|
||||
# Optional passphrase for decrypting the Certificate Key.
|
||||
#ssl.key_passphrase: ''
|
||||
|
||||
# Configure cipher suites to be used for SSL connections
|
||||
#ssl.cipher_suites: []
|
||||
|
||||
# Configure curve types for ECDHE based cipher suites
|
||||
#ssl.curve_types: []
|
||||
|
||||
#------------------------------- Kafka output ----------------------------------
|
||||
#output.kafka:
|
||||
# Boolean flag to enable or disable the output module.
|
||||
#enabled: true
|
||||
|
||||
# The list of Kafka broker addresses from where to fetch the cluster metadata.
|
||||
# The cluster metadata contain the actual Kafka brokers events are published
|
||||
# to.
|
||||
#hosts: ["localhost:9092"]
|
||||
|
||||
# The Kafka topic used for produced events. The setting can be a format string
|
||||
# using any event field. To set the topic from document type use `%{[type]}`.
|
||||
#topic: beats
|
||||
|
||||
# The Kafka event key setting. Use format string to create unique event key.
|
||||
# By default no event key will be generated.
|
||||
#key: ''
|
||||
|
||||
# The Kafka event partitioning strategy. Default hashing strategy is `hash`
|
||||
# using the `output.kafka.key` setting or randomly distributes events if
|
||||
# `output.kafka.key` is not configured.
|
||||
#partition.hash:
|
||||
# If enabled, events will only be published to partitions with reachable
|
||||
# leaders. Default is false.
|
||||
#reachable_only: false
|
||||
|
||||
# Configure alternative event field names used to compute the hash value.
|
||||
# If empty `output.kafka.key` setting will be used.
|
||||
# Default value is empty list.
|
||||
#hash: []
|
||||
|
||||
# Authentication details. Password is required if username is set.
|
||||
#username: ''
|
||||
#password: ''
|
||||
|
||||
# Kafka version icingabeat is assumed to run against. Defaults to the oldest
|
||||
# supported stable version (currently version 0.8.2.0)
|
||||
#version: 0.8.2
|
||||
|
||||
# Metadata update configuration. The metadata contains leader information
|
||||
# that decides which broker to use when publishing.
|
||||
#metadata:
|
||||
# Max metadata request retry attempts when the cluster is in the middle of a leader
|
||||
# election. Defaults to 3 retries.
|
||||
#retry.max: 3
|
||||
|
||||
# Waiting time between retries during leader elections. Default is 250ms.
|
||||
#retry.backoff: 250ms
|
||||
|
||||
# Refresh metadata interval. Defaults to every 10 minutes.
|
||||
#refresh_frequency: 10m
|
||||
|
||||
# The number of concurrent load-balanced Kafka output workers.
|
||||
#worker: 1
|
||||
|
||||
# The number of times to retry publishing an event after a publishing failure.
|
||||
# After the specified number of retries, the events are typically dropped.
|
||||
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
|
||||
# all events are published. Set max_retries to a value less than 0 to retry
|
||||
# until all events are published. The default is 3.
|
||||
#max_retries: 3
|
||||
|
||||
# The maximum number of events to bulk in a single Kafka request. The default
|
||||
# is 2048.
|
||||
#bulk_max_size: 2048
|
||||
|
||||
# The number of seconds to wait for responses from the Kafka brokers before
|
||||
# timing out. The default is 30s.
|
||||
#timeout: 30s
|
||||
|
||||
# The maximum duration a broker will wait for number of required ACKs. The
|
||||
# default is 10s.
|
||||
#broker_timeout: 10s
|
||||
|
||||
# The number of messages buffered for each Kafka broker. The default is 256.
|
||||
#channel_buffer_size: 256
|
||||
|
||||
# The keep-alive period for an active network connection. If 0s, keep-alives
|
||||
# are disabled. The default is 0 seconds.
|
||||
#keep_alive: 0
|
||||
|
||||
# Sets the output compression codec. Must be one of none, snappy and gzip. The
|
||||
# default is gzip.
|
||||
#compression: gzip
|
||||
|
||||
# The maximum permitted size of JSON-encoded messages. Bigger messages will be
|
||||
# dropped. The default value is 1000000 (bytes). This value should be equal to
|
||||
# or less than the broker's message.max.bytes.
|
||||
#max_message_bytes: 1000000
|
||||
|
||||
# The ACK reliability level required from broker. 0=no response, 1=wait for
|
||||
# local commit, -1=wait for all replicas to commit. The default is 1. Note:
|
||||
# If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
|
||||
# on error.
|
||||
#required_acks: 1
|
||||
|
||||
# The number of seconds to wait for new events between two producer API calls.
|
||||
#flush_interval: 1s
|
||||
|
||||
# The configurable ClientID used for logging, debugging, and auditing
|
||||
# purposes. The default is "beats".
|
||||
#client_id: beats
|
||||
|
||||
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
|
||||
#ssl.enabled: true
|
||||
|
||||
# Optional SSL configuration options. SSL is off by default.
|
||||
# List of root certificates for HTTPS server verifications
|
||||
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
|
||||
|
||||
# Configure SSL verification mode. If `none` is configured, all server hosts
|
||||
# and certificates will be accepted. In this mode, SSL based connections are
|
||||
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
|
||||
# `full`.
|
||||
#ssl.verification_mode: full
|
||||
|
||||
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
|
||||
# 1.2 are enabled.
|
||||
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
|
||||
|
||||
# Certificate for SSL client authentication
|
||||
#ssl.certificate: "/etc/pki/client/cert.pem"
|
||||
|
||||
# Client Certificate Key
|
||||
#ssl.key: "/etc/pki/client/cert.key"
|
||||
|
||||
# Optional passphrase for decrypting the Certificate Key.
|
||||
#ssl.key_passphrase: ''
|
||||
|
||||
# Configure cipher suites to be used for SSL connections
|
||||
#ssl.cipher_suites: []
|
||||
|
||||
# Configure curve types for ECDHE based cipher suites
|
||||
#ssl.curve_types: []
|
||||
|
||||
#------------------------------- Redis output ----------------------------------
|
||||
#output.redis:
|
||||
# Boolean flag to enable or disable the output module.
|
||||
#enabled: true
|
||||
|
||||
# The list of Redis servers to connect to. If load balancing is enabled, the
|
||||
# events are distributed to the servers in the list. If one server becomes
|
||||
# unreachable, the events are distributed to the reachable servers only.
|
||||
#hosts: ["localhost:6379"]
|
||||
|
||||
# The Redis port to use if hosts does not contain a port number. The default
|
||||
# is 6379.
|
||||
#port: 6379
|
||||
|
||||
# The name of the Redis list or channel the events are published to. The
|
||||
# default is icingabeat.
|
||||
#key: icingabeat
|
||||
|
||||
# The password to authenticate with. The default is no authentication.
|
||||
#password:
|
||||
|
||||
# The Redis database number where the events are published. The default is 0.
|
||||
#db: 0
|
||||
|
||||
# The Redis data type to use for publishing events. If the data type is list,
|
||||
# the Redis RPUSH command is used. If the data type is channel, the Redis
|
||||
# PUBLISH command is used. The default value is list.
|
||||
#datatype: list
|
||||
|
||||
# The number of workers to use for each host configured to publish events to
|
||||
# Redis. Use this setting along with the loadbalance option. For example, if
|
||||
# you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
|
||||
# host).
|
||||
#worker: 1
|
||||
|
||||
# If set to true and multiple hosts or workers are configured, the output
|
||||
# plugin load balances published events onto all Redis hosts. If set to false,
|
||||
# the output plugin sends all events to only one host (determined at random)
|
||||
# and will switch to another host if the currently selected one becomes
|
||||
# unreachable. The default value is true.
|
||||
#loadbalance: true
|
||||
|
||||
# The Redis connection timeout in seconds. The default is 5 seconds.
|
||||
#timeout: 5s
|
||||
|
||||
# The number of times to retry publishing an event after a publishing failure.
|
||||
# After the specified number of retries, the events are typically dropped.
|
||||
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
|
||||
# all events are published. Set max_retries to a value less than 0 to retry
|
||||
# until all events are published. The default is 3.
|
||||
#max_retries: 3
|
||||
|
||||
# The maximum number of events to bulk in a single Redis request or pipeline.
|
||||
# The default is 2048.
|
||||
#bulk_max_size: 2048
|
||||
|
||||
# The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
|
||||
# value must be a URL with a scheme of socks5://.
|
||||
#proxy_url:
|
||||
|
||||
# This option determines whether Redis hostnames are resolved locally when
|
||||
# using a proxy. The default value is false, which means that name resolution
|
||||
# occurs on the proxy server.
|
||||
#proxy_use_local_resolver: false
|
||||
|
||||
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
|
||||
#ssl.enabled: true
|
||||
|
||||
# Configure SSL verification mode. If `none` is configured, all server hosts
|
||||
# and certificates will be accepted. In this mode, SSL based connections are
|
||||
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
|
||||
# `full`.
|
||||
#ssl.verification_mode: full
|
||||
|
||||
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
|
||||
# 1.2 are enabled.
|
||||
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
|
||||
|
||||
# Optional SSL configuration options. SSL is off by default.
|
||||
# List of root certificates for HTTPS server verifications
|
||||
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
|
||||
|
||||
# Certificate for SSL client authentication
|
||||
#ssl.certificate: "/etc/pki/client/cert.pem"
|
||||
|
||||
# Client Certificate Key
|
||||
#ssl.key: "/etc/pki/client/cert.key"
|
||||
|
||||
# Optional passphrase for decrypting the Certificate Key.
|
||||
#ssl.key_passphrase: ''
|
||||
|
||||
# Configure cipher suites to be used for SSL connections
|
||||
#ssl.cipher_suites: []
|
||||
|
||||
# Configure curve types for ECDHE based cipher suites
|
||||
#ssl.curve_types: []
|
||||
|
||||
|
||||
#------------------------------- File output -----------------------------------
|
||||
#output.file:
|
||||
# Boolean flag to enable or disable the output module.
|
||||
#enabled: true
|
||||
|
||||
# Path to the directory where to save the generated files. The option is
|
||||
# mandatory.
|
||||
#path: "/tmp/icingabeat"
|
||||
|
||||
# Name of the generated files. The default is `icingabeat` and it generates
|
||||
# files: `icingabeat`, `icingabeat.1`, `icingabeat.2`, etc.
|
||||
#filename: icingabeat
|
||||
|
||||
# Maximum size in kilobytes of each file. When this size is reached, and on
|
||||
# every icingabeat restart, the files are rotated. The default value is 10240
|
||||
# kB.
|
||||
#rotate_every_kb: 10000
|
||||
|
||||
# Maximum number of files under path. When this number of files is reached,
|
||||
# the oldest file is deleted and the rest are shifted from last to first. The
|
||||
# default is 7 files.
|
||||
#number_of_files: 7
|
||||
|
||||
|
||||
#----------------------------- Console output ---------------------------------
|
||||
#output.console:
|
||||
# Boolean flag to enable or disable the output module.
|
||||
#enabled: true
|
||||
|
||||
# Pretty print json event
|
||||
#pretty: false
|
||||
|
||||
#================================= Paths ======================================
|
||||
|
||||
# The home path for the icingabeat installation. This is the default base path
|
||||
# for all other path settings and for miscellaneous files that come with the
|
||||
# distribution (for example, the sample dashboards).
|
||||
# If not set by a CLI flag or in the configuration file, the default for the
|
||||
# home path is the location of the binary.
|
||||
#path.home:
|
||||
|
||||
# The configuration path for the icingabeat installation. This is the default
|
||||
# base path for configuration files, including the main YAML configuration file
|
||||
# and the Elasticsearch template file. If not set by a CLI flag or in the
|
||||
# configuration file, the default for the configuration path is the home path.
|
||||
#path.config: ${path.home}
|
||||
|
||||
# The data path for the icingabeat installation. This is the default base path
|
||||
# for all the files in which icingabeat needs to store its data. If not set by a
|
||||
# CLI flag or in the configuration file, the default for the data path is a data
|
||||
# subdirectory inside the home path.
|
||||
#path.data: ${path.home}/data
|
||||
|
||||
# The logs path for an icingabeat installation. This is the default location for
|
||||
# the Beat's log files. If not set by a CLI flag or in the configuration file,
|
||||
# the default for the logs path is a logs subdirectory inside the home path.
|
||||
#path.logs: ${path.home}/logs
|
||||
|
||||
#============================== Dashboards =====================================
|
||||
# These settings control loading the sample dashboards to the Kibana index. Loading
|
||||
# the dashboards is disabled by default and can be enabled either by setting the
|
||||
# options here, or by using the `-setup` CLI flag.
|
||||
#dashboards.enabled: false
|
||||
|
||||
# The URL from where to download the dashboards archive. By default this URL
|
||||
# has a value which is computed based on the Beat name and version. For released
|
||||
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
|
||||
# website.
|
||||
#dashboards.url:
|
||||
|
||||
# The directory from where to read the dashboards. It is used instead of the URL
|
||||
# when it has a value.
|
||||
#dashboards.directory:
|
||||
|
||||
# The file archive (zip file) from where to read the dashboards. It is used instead
|
||||
# of the URL when it has a value.
|
||||
#dashboards.file:
|
||||
|
||||
# If this option is enabled, the snapshot URL is used instead of the default URL.
|
||||
#dashboards.snapshot: false
|
||||
|
||||
# The URL from where to download the snapshot version of the dashboards. By default
|
||||
# this has a value which is computed based on the Beat name and version.
|
||||
#dashboards.snapshot_url
|
||||
|
||||
# In case the archive contains the dashboards from multiple Beats, this lets you
|
||||
# select which one to load. You can load all the dashboards in the archive by
|
||||
# setting this to the empty string.
|
||||
#dashboards.beat: icingabeat
|
||||
|
||||
# The name of the Kibana index to use for setting the configuration. Default is ".kibana"
|
||||
#dashboards.kibana_index: .kibana
|
||||
|
||||
# The Elasticsearch index name. This overwrites the index name defined in the
|
||||
# dashboards and index pattern. Example: testbeat-*
|
||||
#dashboards.index:
|
||||
|
||||
#================================ Logging ======================================
|
||||
# There are three options for the log output: syslog, file, stderr.
|
||||
# On Windows systems, logs are sent to the file output by default;
|
||||
# on all other systems they are sent to syslog by default.
|
||||
|
||||
# Sets log level. The default log level is info.
|
||||
# Available log levels are: critical, error, warning, info, debug
|
||||
#logging.level: info
|
||||
|
||||
# Enable debug output for selected components. To enable all selectors use ["*"]
|
||||
# Other available selectors are "beat", "publish", "service"
|
||||
# Multiple selectors can be chained.
|
||||
#logging.selectors: [ ]
|
||||
|
||||
# Send all logging output to syslog. The default is false.
|
||||
#logging.to_syslog: true
|
||||
|
||||
# If enabled, icingabeat periodically logs its internal metrics that have changed
|
||||
# in the last period. For each metric that changed, the delta from the value at
|
||||
# the beginning of the period is logged. Also, the total values for
|
||||
# all non-zero internal metrics are logged on shutdown. The default is true.
|
||||
#logging.metrics.enabled: true
|
||||
|
||||
# The period after which to log the internal metrics. The default is 30s.
|
||||
#logging.metrics.period: 30s
|
||||
|
||||
# Logging to rotating files. Set logging.to_files to false to disable logging to
|
||||
# files.
|
||||
logging.to_files: true
|
||||
logging.files:
|
||||
# Configure the path where the logs are written. The default is the logs directory
|
||||
# under the home path (the binary location).
|
||||
#path: /var/log/icingabeat
|
||||
|
||||
# The name of the files where the logs are written to.
|
||||
#name: icingabeat
|
||||
|
||||
# Configure log file size limit. If limit is reached, log file will be
|
||||
# automatically rotated
|
||||
#rotateeverybytes: 10485760 # = 10MB
|
||||
|
||||
# Number of rotated log files to keep. Oldest files will be deleted first.
|
||||
#keepfiles: 7
|
||||
|
File diff suppressed because it is too large
Load Diff
702
icingabeat.template-es2x.json
Normal file
702
icingabeat.template-es2x.json
Normal file
@ -0,0 +1,702 @@
|
||||
{
|
||||
"mappings": {
|
||||
"_default_": {
|
||||
"_all": {
|
||||
"norms": {
|
||||
"enabled": false
|
||||
}
|
||||
},
|
||||
"_meta": {
|
||||
"version": "1.1.0"
|
||||
},
|
||||
"date_detection": false,
|
||||
"dynamic_templates": [
|
||||
{
|
||||
"strings_as_keyword": {
|
||||
"mapping": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"match_mapping_type": "string"
|
||||
}
|
||||
}
|
||||
],
|
||||
"properties": {
|
||||
"@timestamp": {
|
||||
"type": "date"
|
||||
},
|
||||
"acknowledgement_type": {
|
||||
"type": "long"
|
||||
},
|
||||
"author": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"beat": {
|
||||
"properties": {
|
||||
"hostname": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"name": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"version": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
},
|
||||
"check_result": {
|
||||
"properties": {
|
||||
"active": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"check_source": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"command": {
|
||||
"index": "analyzed",
|
||||
"norms": {
|
||||
"enabled": false
|
||||
},
|
||||
"type": "string"
|
||||
},
|
||||
"execution_end": {
|
||||
"type": "date"
|
||||
},
|
||||
"execution_start": {
|
||||
"type": "date"
|
||||
},
|
||||
"exit_status": {
|
||||
"type": "long"
|
||||
},
|
||||
"output": {
|
||||
"index": "analyzed",
|
||||
"norms": {
|
||||
"enabled": false
|
||||
},
|
||||
"type": "string"
|
||||
},
|
||||
"performance_data": {
|
||||
"index": "analyzed",
|
||||
"norms": {
|
||||
"enabled": false
|
||||
},
|
||||
"type": "string"
|
||||
},
|
||||
"schedule_end": {
|
||||
"type": "date"
|
||||
},
|
||||
"schedule_start": {
|
||||
"type": "date"
|
||||
},
|
||||
"state": {
|
||||
"type": "long"
|
||||
},
|
||||
"type": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"vars_after": {
|
||||
"properties": {
|
||||
"attempt": {
|
||||
"type": "long"
|
||||
},
|
||||
"reachable": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"state": {
|
||||
"type": "long"
|
||||
},
|
||||
"state_type": {
|
||||
"type": "long"
|
||||
}
|
||||
}
|
||||
},
|
||||
"vars_before": {
|
||||
"properties": {
|
||||
"attempt": {
|
||||
"type": "long"
|
||||
},
|
||||
"reachable": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"state": {
|
||||
"type": "long"
|
||||
},
|
||||
"state_type": {
|
||||
"type": "long"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"comment": {
|
||||
"properties": {
|
||||
"__name": {
|
||||
"index": "analyzed",
|
||||
"norms": {
|
||||
"enabled": false
|
||||
},
|
||||
"type": "string"
|
||||
},
|
||||
"author": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"entry_time": {
|
||||
"type": "date"
|
||||
},
|
||||
"entry_type": {
|
||||
"type": "long"
|
||||
},
|
||||
"expire_time": {
|
||||
"type": "date"
|
||||
},
|
||||
"host_name": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"legacy_id": {
|
||||
"type": "long"
|
||||
},
|
||||
"name": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"package": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"service_name": {
|
||||
"ignore_above": 1024,
|
||||
"index": "not_analyzed",
|
||||
"type": "string"
|
||||
},
|
||||
"templates": {
|
||||
"index": "analyzed",
|
||||
"norms": {
|
||||
"enabled": false
|
              },
              "type": "string"
            },
            "text": {
              "index": "analyzed",
              "norms": {
                "enabled": false
              },
              "type": "string"
            },
            "type": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            },
            "version": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            },
            "zone": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            }
          }
        },
        "downtime": {
          "properties": {
            "__name": {
              "index": "analyzed",
              "norms": {
                "enabled": false
              },
              "type": "string"
            },
            "author": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            },
            "comment": {
              "index": "analyzed",
              "norms": {
                "enabled": false
              },
              "type": "string"
            },
            "config_owner": {
              "index": "analyzed",
              "norms": {
                "enabled": false
              },
              "type": "string"
            },
            "duration": {
              "type": "long"
            },
            "end_time": {
              "type": "date"
            },
            "entry_time": {
              "type": "date"
            },
            "fixed": {
              "type": "boolean"
            },
            "host_name": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            },
            "legacy_id": {
              "type": "long"
            },
            "name": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            },
            "package": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            },
            "scheduled_by": {
              "index": "analyzed",
              "norms": {
                "enabled": false
              },
              "type": "string"
            },
            "service_name": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            },
            "start_time": {
              "type": "date"
            },
            "templates": {
              "index": "analyzed",
              "norms": {
                "enabled": false
              },
              "type": "string"
            },
            "trigger_time": {
              "type": "date"
            },
            "triggered_by": {
              "index": "analyzed",
              "norms": {
                "enabled": false
              },
              "type": "string"
            },
            "triggers": {
              "index": "analyzed",
              "norms": {
                "enabled": false
              },
              "type": "string"
            },
            "type": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            },
            "version": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            },
            "was_cancelled": {
              "type": "boolean"
            },
            "zone": {
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "string"
            }
          }
        },
        "expiry": {
          "type": "date"
        },
        "fields": {
          "properties": {}
        },
        "host": {
          "ignore_above": 1024,
          "index": "not_analyzed",
          "type": "string"
        },
        "meta": {
          "properties": {
            "cloud": {
              "properties": {
                "availability_zone": {
                  "ignore_above": 1024,
                  "index": "not_analyzed",
                  "type": "string"
                },
                "instance_id": {
                  "ignore_above": 1024,
                  "index": "not_analyzed",
                  "type": "string"
                },
                "machine_type": {
                  "ignore_above": 1024,
                  "index": "not_analyzed",
                  "type": "string"
                },
                "project_id": {
                  "ignore_above": 1024,
                  "index": "not_analyzed",
                  "type": "string"
                },
                "provider": {
                  "ignore_above": 1024,
                  "index": "not_analyzed",
                  "type": "string"
                },
                "region": {
                  "ignore_above": 1024,
                  "index": "not_analyzed",
                  "type": "string"
                }
              }
            }
          }
        },
        "notification_type": {
          "ignore_above": 1024,
          "index": "not_analyzed",
          "type": "string"
        },
        "notify": {
          "ignore_above": 1024,
          "index": "not_analyzed",
          "type": "string"
        },
        "service": {
          "ignore_above": 1024,
          "index": "not_analyzed",
          "type": "string"
        },
        "state": {
          "type": "long"
        },
        "state_type": {
          "type": "long"
        },
        "status": {
          "properties": {
            "active_host_checks": {
              "type": "long"
            },
            "active_host_checks_15min": {
              "type": "long"
            },
            "active_host_checks_1min": {
              "type": "long"
            },
            "active_host_checks_5min": {
              "type": "long"
            },
            "active_service_checks": {
              "type": "long"
            },
            "active_service_checks_15min": {
              "type": "long"
            },
            "active_service_checks_1min": {
              "type": "long"
            },
            "active_service_checks_5min": {
              "type": "long"
            },
            "api": {
              "properties": {
                "identity": {
                  "ignore_above": 1024,
                  "index": "not_analyzed",
                  "type": "string"
                },
                "num_conn_endpoints": {
                  "type": "long"
                },
                "num_endpoints": {
                  "type": "long"
                },
                "num_not_conn_endpoints": {
                  "type": "long"
                },
                "zones": {
                  "properties": {
                    "demo": {
                      "properties": {
                        "client_log_lag": {
                          "type": "long"
                        },
                        "connected": {
                          "type": "boolean"
                        },
                        "endpoints": {
                          "index": "analyzed",
                          "norms": {
                            "enabled": false
                          },
                          "type": "string"
                        },
                        "parent_zone": {
                          "ignore_above": 1024,
                          "index": "not_analyzed",
                          "type": "string"
                        }
                      }
                    }
                  }
                }
              }
            },
            "avg_execution_time": {
              "type": "long"
            },
            "avg_latency": {
              "type": "long"
            },
            "checkercomponent": {
              "properties": {
                "checker": {
                  "properties": {
                    "idle": {
                      "type": "long"
                    },
                    "pending": {
                      "type": "long"
                    }
                  }
                }
              }
            },
            "filelogger": {
              "properties": {
                "main-log": {
                  "type": "long"
                }
              }
            },
            "icingaapplication": {
              "properties": {
                "app": {
                  "properties": {
                    "enable_event_handlers": {
                      "type": "boolean"
                    },
                    "enable_flapping": {
                      "type": "boolean"
                    },
                    "enable_host_checks": {
                      "type": "boolean"
                    },
                    "enable_notifications": {
                      "type": "boolean"
                    },
                    "enable_perfdata": {
                      "type": "boolean"
                    },
                    "enable_service_checks": {
                      "type": "boolean"
                    },
                    "node_name": {
                      "ignore_above": 1024,
                      "index": "not_analyzed",
                      "type": "string"
                    },
                    "pid": {
                      "type": "long"
                    },
                    "program_start": {
                      "type": "long"
                    },
                    "version": {
                      "ignore_above": 1024,
                      "index": "not_analyzed",
                      "type": "string"
                    }
                  }
                }
              }
            },
            "idomysqlconnection": {
              "properties": {
                "ido-mysql": {
                  "properties": {
                    "connected": {
                      "type": "boolean"
                    },
                    "instance_name": {
                      "ignore_above": 1024,
                      "index": "not_analyzed",
                      "type": "string"
                    },
                    "query_queue_items": {
                      "type": "long"
                    },
                    "version": {
                      "ignore_above": 1024,
                      "index": "not_analyzed",
                      "type": "string"
                    }
                  }
                }
              }
            },
            "max_execution_time": {
              "type": "long"
            },
            "max_latency": {
              "type": "long"
            },
            "min_execution_time": {
              "type": "long"
            },
            "min_latency": {
              "type": "long"
            },
            "notificationcomponent": {
              "properties": {
                "notification": {
                  "type": "long"
                }
              }
            },
            "num_hosts_acknowledged": {
              "type": "long"
            },
            "num_hosts_down": {
              "type": "long"
            },
            "num_hosts_flapping": {
              "type": "long"
            },
            "num_hosts_in_downtime": {
              "type": "long"
            },
            "num_hosts_pending": {
              "type": "long"
            },
            "num_hosts_unreachable": {
              "type": "long"
            },
            "num_hosts_up": {
              "type": "long"
            },
            "num_services_acknowledged": {
              "type": "long"
            },
            "num_services_critical": {
              "type": "long"
            },
            "num_services_flapping": {
              "type": "long"
            },
            "num_services_in_downtime": {
              "type": "long"
            },
            "num_services_ok": {
              "type": "long"
            },
            "num_services_pending": {
              "type": "long"
            },
            "num_services_unknown": {
              "type": "long"
            },
            "num_services_unreachable": {
              "type": "long"
            },
            "num_services_warning": {
              "type": "long"
            },
            "passive_host_checks": {
              "type": "long"
            },
            "passive_host_checks_15min": {
              "type": "long"
            },
            "passive_host_checks_1min": {
              "type": "long"
            },
            "passive_host_checks_5min": {
              "type": "long"
            },
            "passive_service_checks": {
              "type": "long"
            },
            "passive_service_checks_15min": {
              "type": "long"
            },
            "passive_service_checks_1min": {
              "type": "long"
            },
            "passive_service_checks_5min": {
              "type": "long"
            },
            "uptime": {
              "type": "long"
            }
          }
        },
        "tags": {
          "ignore_above": 1024,
          "index": "not_analyzed",
          "type": "string"
        },
        "text": {
          "index": "analyzed",
          "norms": {
            "enabled": false
          },
          "type": "string"
        },
        "timestamp": {
          "type": "date"
        },
        "type": {
          "ignore_above": 1024,
          "index": "not_analyzed",
          "type": "string"
        },
        "users": {
          "index": "analyzed",
          "norms": {
            "enabled": false
          },
          "type": "string"
        }
      }
    }
  },
  "order": 0,
  "settings": {
    "index.refresh_interval": "5s"
  },
  "template": "icingabeat-*"
}
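The file above is the Elasticsearch 2.x variant of the index template: every string field is mapped as type "string" with "index": "not_analyzed" (exact values) or "index": "analyzed" plus disabled norms (free text). The icingabeat.template.json that follows expresses the same fields with the "keyword" and "text" types that replaced analyzed/not-analyzed strings in Elasticsearch 5.x, which is why two template files exist and one has to be picked per server version. A minimal, illustrative Go sketch of that choice — not repository code; the URL and the selection logic are assumptions, only the two file names come from this diff:

// Illustration only — not part of this diff. Picks the matching template
// file from the version number Elasticsearch reports on GET /.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

func templateFile(esURL string) (string, error) {
	// GET / on any Elasticsearch node returns {"version":{"number":"..."}, ...}.
	resp, err := http.Get(esURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var info struct {
		Version struct {
			Number string `json:"number"`
		} `json:"version"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		return "", err
	}
	if strings.HasPrefix(info.Version.Number, "2.") {
		return "icingabeat.template-es2x.json", nil
	}
	return "icingabeat.template.json", nil
}

func main() {
	name, err := templateFile("http://localhost:9200")
	if err != nil {
		panic(err)
	}
	fmt.Println("template to load:", name)
}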
612 icingabeat.template.json Normal file
@@ -0,0 +1,612 @@
{
  "mappings": {
    "_default_": {
      "_all": {
        "norms": false
      },
      "_meta": {
        "version": "1.1.0"
      },
      "date_detection": false,
      "dynamic_templates": [
        {
          "strings_as_keyword": {
            "mapping": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "match_mapping_type": "string"
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "acknowledgement_type": {
          "type": "long"
        },
        "author": {
          "ignore_above": 1024,
          "type": "keyword"
        },
        "beat": {
          "properties": {
            "hostname": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "name": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "version": {
              "ignore_above": 1024,
              "type": "keyword"
            }
          }
        },
        "check_result": {
          "properties": {
            "active": {
              "type": "boolean"
            },
            "check_source": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "command": {
              "norms": false,
              "type": "text"
            },
            "execution_end": {
              "type": "date"
            },
            "execution_start": {
              "type": "date"
            },
            "exit_status": {
              "type": "long"
            },
            "output": {
              "norms": false,
              "type": "text"
            },
            "performance_data": {
              "norms": false,
              "type": "text"
            },
            "schedule_end": {
              "type": "date"
            },
            "schedule_start": {
              "type": "date"
            },
            "state": {
              "type": "long"
            },
            "type": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "vars_after": {
              "properties": {
                "attempt": {
                  "type": "long"
                },
                "reachable": {
                  "type": "boolean"
                },
                "state": {
                  "type": "long"
                },
                "state_type": {
                  "type": "long"
                }
              }
            },
            "vars_before": {
              "properties": {
                "attempt": {
                  "type": "long"
                },
                "reachable": {
                  "type": "boolean"
                },
                "state": {
                  "type": "long"
                },
                "state_type": {
                  "type": "long"
                }
              }
            }
          }
        },
        "comment": {
          "properties": {
            "__name": {
              "norms": false,
              "type": "text"
            },
            "author": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "entry_time": {
              "type": "date"
            },
            "entry_type": {
              "type": "long"
            },
            "expire_time": {
              "type": "date"
            },
            "host_name": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "legacy_id": {
              "type": "long"
            },
            "name": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "package": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "service_name": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "templates": {
              "norms": false,
              "type": "text"
            },
            "text": {
              "norms": false,
              "type": "text"
            },
            "type": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "version": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "zone": {
              "ignore_above": 1024,
              "type": "keyword"
            }
          }
        },
        "downtime": {
          "properties": {
            "__name": {
              "norms": false,
              "type": "text"
            },
            "author": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "comment": {
              "norms": false,
              "type": "text"
            },
            "config_owner": {
              "norms": false,
              "type": "text"
            },
            "duration": {
              "type": "long"
            },
            "end_time": {
              "type": "date"
            },
            "entry_time": {
              "type": "date"
            },
            "fixed": {
              "type": "boolean"
            },
            "host_name": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "legacy_id": {
              "type": "long"
            },
            "name": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "package": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "scheduled_by": {
              "norms": false,
              "type": "text"
            },
            "service_name": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "start_time": {
              "type": "date"
            },
            "templates": {
              "norms": false,
              "type": "text"
            },
            "trigger_time": {
              "type": "date"
            },
            "triggered_by": {
              "norms": false,
              "type": "text"
            },
            "triggers": {
              "norms": false,
              "type": "text"
            },
            "type": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "version": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "was_cancelled": {
              "type": "boolean"
            },
            "zone": {
              "ignore_above": 1024,
              "type": "keyword"
            }
          }
        },
        "expiry": {
          "type": "date"
        },
        "fields": {
          "properties": {}
        },
        "host": {
          "ignore_above": 1024,
          "type": "keyword"
        },
        "meta": {
          "properties": {
            "cloud": {
              "properties": {
                "availability_zone": {
                  "ignore_above": 1024,
                  "type": "keyword"
                },
                "instance_id": {
                  "ignore_above": 1024,
                  "type": "keyword"
                },
                "machine_type": {
                  "ignore_above": 1024,
                  "type": "keyword"
                },
                "project_id": {
                  "ignore_above": 1024,
                  "type": "keyword"
                },
                "provider": {
                  "ignore_above": 1024,
                  "type": "keyword"
                },
                "region": {
                  "ignore_above": 1024,
                  "type": "keyword"
                }
              }
            }
          }
        },
        "notification_type": {
          "ignore_above": 1024,
          "type": "keyword"
        },
        "notify": {
          "ignore_above": 1024,
          "type": "keyword"
        },
        "service": {
          "ignore_above": 1024,
          "type": "keyword"
        },
        "state": {
          "type": "long"
        },
        "state_type": {
          "type": "long"
        },
        "status": {
          "properties": {
            "active_host_checks": {
              "type": "long"
            },
            "active_host_checks_15min": {
              "type": "long"
            },
            "active_host_checks_1min": {
              "type": "long"
            },
            "active_host_checks_5min": {
              "type": "long"
            },
            "active_service_checks": {
              "type": "long"
            },
            "active_service_checks_15min": {
              "type": "long"
            },
            "active_service_checks_1min": {
              "type": "long"
            },
            "active_service_checks_5min": {
              "type": "long"
            },
            "api": {
              "properties": {
                "identity": {
                  "ignore_above": 1024,
                  "type": "keyword"
                },
                "num_conn_endpoints": {
                  "type": "long"
                },
                "num_endpoints": {
                  "type": "long"
                },
                "num_not_conn_endpoints": {
                  "type": "long"
                },
                "zones": {
                  "properties": {
                    "demo": {
                      "properties": {
                        "client_log_lag": {
                          "type": "long"
                        },
                        "connected": {
                          "type": "boolean"
                        },
                        "endpoints": {
                          "norms": false,
                          "type": "text"
                        },
                        "parent_zone": {
                          "ignore_above": 1024,
                          "type": "keyword"
                        }
                      }
                    }
                  }
                }
              }
            },
            "avg_execution_time": {
              "type": "long"
            },
            "avg_latency": {
              "type": "long"
            },
            "checkercomponent": {
              "properties": {
                "checker": {
                  "properties": {
                    "idle": {
                      "type": "long"
                    },
                    "pending": {
                      "type": "long"
                    }
                  }
                }
              }
            },
            "filelogger": {
              "properties": {
                "main-log": {
                  "type": "long"
                }
              }
            },
            "icingaapplication": {
              "properties": {
                "app": {
                  "properties": {
                    "enable_event_handlers": {
                      "type": "boolean"
                    },
                    "enable_flapping": {
                      "type": "boolean"
                    },
                    "enable_host_checks": {
                      "type": "boolean"
                    },
                    "enable_notifications": {
                      "type": "boolean"
                    },
                    "enable_perfdata": {
                      "type": "boolean"
                    },
                    "enable_service_checks": {
                      "type": "boolean"
                    },
                    "node_name": {
                      "ignore_above": 1024,
                      "type": "keyword"
                    },
                    "pid": {
                      "type": "long"
                    },
                    "program_start": {
                      "type": "long"
                    },
                    "version": {
                      "ignore_above": 1024,
                      "type": "keyword"
                    }
                  }
                }
              }
            },
            "idomysqlconnection": {
              "properties": {
                "ido-mysql": {
                  "properties": {
                    "connected": {
                      "type": "boolean"
                    },
                    "instance_name": {
                      "ignore_above": 1024,
                      "type": "keyword"
                    },
                    "query_queue_items": {
                      "type": "long"
                    },
                    "version": {
                      "ignore_above": 1024,
                      "type": "keyword"
                    }
                  }
                }
              }
            },
            "max_execution_time": {
              "type": "long"
            },
            "max_latency": {
              "type": "long"
            },
            "min_execution_time": {
              "type": "long"
            },
            "min_latency": {
              "type": "long"
            },
            "notificationcomponent": {
              "properties": {
                "notification": {
                  "type": "long"
                }
              }
            },
            "num_hosts_acknowledged": {
              "type": "long"
            },
            "num_hosts_down": {
              "type": "long"
            },
            "num_hosts_flapping": {
              "type": "long"
            },
            "num_hosts_in_downtime": {
              "type": "long"
            },
            "num_hosts_pending": {
              "type": "long"
            },
            "num_hosts_unreachable": {
              "type": "long"
            },
            "num_hosts_up": {
              "type": "long"
            },
            "num_services_acknowledged": {
              "type": "long"
            },
            "num_services_critical": {
              "type": "long"
            },
            "num_services_flapping": {
              "type": "long"
            },
            "num_services_in_downtime": {
              "type": "long"
            },
            "num_services_ok": {
              "type": "long"
            },
            "num_services_pending": {
              "type": "long"
            },
            "num_services_unknown": {
              "type": "long"
            },
            "num_services_unreachable": {
              "type": "long"
            },
            "num_services_warning": {
              "type": "long"
            },
            "passive_host_checks": {
              "type": "long"
            },
            "passive_host_checks_15min": {
              "type": "long"
            },
            "passive_host_checks_1min": {
              "type": "long"
            },
            "passive_host_checks_5min": {
              "type": "long"
            },
            "passive_service_checks": {
              "type": "long"
            },
            "passive_service_checks_15min": {
              "type": "long"
            },
            "passive_service_checks_1min": {
              "type": "long"
            },
            "passive_service_checks_5min": {
              "type": "long"
            },
            "uptime": {
              "type": "long"
            }
          }
        },
        "tags": {
          "ignore_above": 1024,
          "type": "keyword"
        },
        "text": {
          "norms": false,
          "type": "text"
        },
        "timestamp": {
          "type": "date"
        },
        "type": {
          "ignore_above": 1024,
          "type": "keyword"
        },
        "users": {
          "norms": false,
          "type": "text"
        }
      }
    }
  },
  "order": 0,
  "settings": {
    "index.mapping.total_fields.limit": 10000,
    "index.refresh_interval": "5s"
  },
  "template": "icingabeat-*"
}
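Both files target the legacy _template endpoint and apply to indices matching the "template": "icingabeat-*" pattern above. A hedged sketch of loading the chosen file by hand — not repository code; it assumes a default Elasticsearch on localhost:9200:

// Illustration only — not part of this diff. Uploads a template file to
// the legacy _template endpoint.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	body, err := os.ReadFile("icingabeat.template.json")
	if err != nil {
		panic(err)
	}

	req, err := http.NewRequest(http.MethodPut,
		"http://localhost:9200/_template/icingabeat", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // expect 200 OK on success
}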
210 icingabeat.yml
@@ -16,63 +16,53 @@ icingabeat:
  # Password of the user
  password: "icinga"

  # Configure SSL verification. If `false` is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # `true`.
  ssl.verify: true
  # Skip SSL verification
  skip_ssl_verify: false

  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  ########################### Icingabeat Eventstream ##########################
  #
  # Icingabeat supports capturing of an event stream and periodical polling of
  # the Icinga status data.
  eventstream:
    #
    # Decide which events to receive from the event stream.
    # The following event stream types are available:
    #
    # * CheckResult
    # * StateChange
    # * Notification
    # * AcknowledgementSet
    # * AcknowledgementCleared
    # * CommentAdded
    # * CommentRemoved
    # * DowntimeAdded
    # * DowntimeRemoved
    # * DowntimeStarted
    # * DowntimeTriggered
    #
    # To disable the eventstream, leave the types empty or comment out the option
    types:
      - CheckResult
      - StateChange

  # Decide which events to receive from the event stream.
  # The following event stream types are available:
  #
  # * CheckResult
  # * StateChange
  # * Notification
  # * AcknowledgementSet
  # * AcknowledgementCleared
  # * CommentAdded
  # * CommentRemoved
  # * DowntimeAdded
  # * DowntimeRemoved
  # * DowntimeStarted
  # * DowntimeTriggered
  #
  # To disable the eventstream, leave the types empty or comment out the option
  eventstream.types:
    - CheckResult
    - StateChange

  # Event streams can be filtered by attributes using the prefix 'event.'
  #
  # Example for the CheckResult type with the exit_code set to 2:
  # filter: "event.check_result.exit_status==2"
  #
  # Example for the CheckResult type with the service matching the string
  # pattern "mysql*":
  # filter: 'match("mysql*", event.service)'
  #
  # To disable filtering set an empty string or comment out the filter option
  eventstream.filter: ""
    # Event streams can be filtered by attributes using the prefix 'event.'
    #
    # Example for the CheckResult type with the exit_code set to 2:
    # filter: "event.check_result.exit_status==2"
    #
    # Example for the CheckResult type with the service matching the string
    # pattern "mysql*":
    # filter: 'match("mysql*", event.service)'
    #
    # To disable filtering set an empty string or comment out the filter option
    filter: ""

  # Defines how fast to reconnect to the API on connection loss
  eventstream.retry_interval: 10s
    retry_interval: 10s

  ########################### Icingabeat Statuspoller #########################
  #
  # Icingabeat can collect status information about Icinga 2 periodically. Set
  # an interval at which the status API should be called. Set to 0 to disable
  # polling.
  statuspoller.interval: 60s
  statuspoller:
    # Interval at which the status API is called. Set to 0 to disable polling.
    interval: 60s

# ================================== General ===================================
#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
@@ -87,66 +77,22 @@ icingabeat:
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
#================================ Outputs =====================================

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Icingabeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  # Optional protocol and basic auth credentials.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
@@ -161,73 +107,13 @@ output.elasticsearch:
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~


# ================================== Logging ===================================
#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
# "publish", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Icingabeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Icingabeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the icingabeat.
#instrumentation:
  # Set to true to enable instrumentation of icingabeat.
  #enabled: false

  # Environment in which icingabeat is running (eg: staging, production, etc.)
  #environment: ""

  # APM Server hosts to report instrumentation results to.
  #hosts:
  #  - http://localhost:8200

  # API Key for the APM Server(s).
  # If api_key is set then secret_token will be ignored.
  #api_key:

  # Secret token for the APM Server(s).
  #secret_token:


# ================================= Migration ==================================

# This allows enabling 6.7 migration aliases
#migration.6_to_7.enabled: true

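For context on the eventstream settings above: the types, filter and retry_interval options map onto the Icinga 2 events API, which is documented as POST /v1/events with a queue name, one or more types parameters and an optional filter expression. A rough standalone sketch of that call — not icingabeat's actual implementation; host, credentials and queue name are examples:

// Illustration only — not part of this diff. Subscribes to the Icinga 2
// event stream the way the config options above describe.
package main

import (
	"bufio"
	"crypto/tls"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	params := url.Values{}
	params.Set("queue", "icingabeat")
	params.Add("types", "CheckResult")
	params.Add("types", "StateChange")
	params.Set("filter", `match("mysql*", event.service)`)

	req, err := http.NewRequest(http.MethodPost,
		"https://localhost:5665/v1/events?"+params.Encode(), nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("icinga", "icinga")
	req.Header.Set("Accept", "application/json")

	// The rough equivalent of skip_ssl_verify / ssl.verify, for testing only.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The endpoint streams one JSON event per line until the client disconnects.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
}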
File diff suppressed because one or more lines are too long
@@ -1,24 +0,0 @@
// Licensed to Elasticsearch B.V. under one or more contributor
// license agreements. See the NOTICE file distributed with
// this work for additional information regarding copyright
// ownership. Elasticsearch B.V. licenses this file to you under
// the Apache License, Version 2.0 (the "License"); you may
// not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

// Code generated by beats/dev-tools/cmd/module_include_list/module_include_list.go - DO NOT EDIT.

package include

import (
	// Import packages that need to register themselves.
)
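The generated include list above works purely through side effects: each listed package registers itself from its init() function, so a blank import is enough to activate it (main.go below imports it as _ "github.com/icinga/icingabeat/include"). A minimal illustration of the pattern, with hypothetical names:

// Illustration only — not repository code. A package registers a
// constructor from init(); importing it blank runs that init() and
// nothing else.
package main

import "fmt"

// registry stands in for the global tables that modules write to.
var registry = map[string]func() string{}

func init() {
	// In a real Beat this init() would live in the blank-imported package.
	registry["example"] = func() string { return "example module active" }
}

func main() {
	for name, newFn := range registry {
		fmt.Println(name+":", newFn())
	}
}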
118 magefile.go
@@ -1,118 +0,0 @@
// +build mage

package main

import (
	"fmt"
	"time"

	"github.com/magefile/mage/mg"

	devtools "github.com/elastic/beats/v7/dev-tools/mage"
	"github.com/elastic/beats/v7/dev-tools/mage/target/build"
	"github.com/elastic/beats/v7/dev-tools/mage/target/common"
	"github.com/elastic/beats/v7/dev-tools/mage/target/pkg"
	"github.com/elastic/beats/v7/dev-tools/mage/target/unittest"
)

func init() {
	devtools.SetBuildVariableSources(devtools.DefaultBeatBuildVariableSources)

	devtools.BeatDescription = "Icingabeat fetches data from the Icinga 2 API and forwards it to Elasticsearch or Logstash."
	devtools.BeatVendor = "Icinga GmbH"
	devtools.BeatURL = "https://icinga.com/docs/icingabeat"
	devtools.BeatProjectType = devtools.CommunityProject
	devtools.CrossBuildMountModcache = true
}

// Package packages the Beat for distribution.
// Use SNAPSHOT=true to build snapshots.
// Use PLATFORMS to control the target platforms.
func Package() {
	start := time.Now()
	defer func() { fmt.Println("package ran for", time.Since(start)) }()

	devtools.UseCommunityBeatPackaging()
	devtools.PackageKibanaDashboardsFromBuildDir()

	mg.Deps(Update)
	mg.Deps(build.CrossBuild, build.CrossBuildGoDaemon)
	mg.SerialDeps(devtools.Package, pkg.PackageTest)
}

// Update updates the generated files (aka make update).
func Update() {
	mg.SerialDeps(Fields, Dashboards, Config, includeList, fieldDocs)
}

// Fields generates a fields.yml for the Beat.
func Fields() error {
	return devtools.GenerateFieldsYAML()
}

// Config generates both the short/reference/docker configs.
func Config() error {
	p := devtools.DefaultConfigFileParams()
	p.Templates = append(p.Templates, "_meta/config/*.tmpl")
	return devtools.Config(devtools.AllConfigTypes, p, ".")
}

func includeList() error {
	options := devtools.DefaultIncludeListOptions()
	options.ImportDirs = []string{"protos/*"}
	options.ModuleDirs = nil
	return devtools.GenerateIncludeListGo(options)
}

// Clean cleans all generated files and build artifacts.
func Clean() error {
	return devtools.Clean()
}

// Check formats code, updates generated content, check for common errors, and
// checks for any modified files.
func Check() {
	common.Check()
}

// Fmt formats source code (.go and .py) and adds license headers.
func Fmt() {
	common.Fmt()
}

// Test runs all available tests
func Test() {
	mg.Deps(unittest.GoUnitTest)
}

// Build builds the Beat binary.
func Build() error {
	return build.Build()
}

// CrossBuild cross-builds the beat for all target platforms.
func CrossBuild() error {
	return build.CrossBuild()
}

// BuildGoDaemon builds the go-daemon binary (use crossBuildGoDaemon).
func BuildGoDaemon() error {
	return build.BuildGoDaemon()
}

// GolangCrossBuild build the Beat binary inside of the golang-builder.
// Do not use directly, use crossBuild instead.
func GolangCrossBuild() error {
	return build.GolangCrossBuild()
}

// Fields generates fields.yml and fields.go files for the Beat.

func fieldDocs() error {
	return devtools.Docs.FieldDocs("fields.yml")
}

// Dashboards collects all the dashboards and generates index patterns.
func Dashboards() error {
	return devtools.KibanaDashboards("protos")
}
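For reference, these targets are driven by mage rather than make. Going by the magefile's own comments (SNAPSHOT=true for snapshots, PLATFORMS for target platforms), a typical invocation would look like this, assuming mage is installed:

    $ mage update
    $ mage build
    $ SNAPSHOT=true PLATFORMS=linux/amd64 mage package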
7 main.go
@@ -3,13 +3,14 @@ package main
import (
	"os"

	"github.com/icinga/icingabeat/cmd"
	"github.com/elastic/beats/libbeat/beat"

	_ "github.com/icinga/icingabeat/include"
	"github.com/icinga/icingabeat/beater"
)

func main() {
	if err := cmd.RootCmd.Execute(); err != nil {
	err := beat.Run("icingabeat", "", beater.New)
	if err != nil {
		os.Exit(1)
	}
}
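The hunk above interleaves both versions of main(). Disentangled — reconstructed from nothing but the hunk itself — the two entry points read:

// master — libbeat 7.x style, delegating to the generated root command:
package main

import (
	"os"

	"github.com/icinga/icingabeat/cmd"

	_ "github.com/icinga/icingabeat/include"
)

func main() {
	if err := cmd.RootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}

// v1.1.1 — legacy entry point via libbeat's beat.Run helper:
package main

import (
	"os"

	"github.com/elastic/beats/libbeat/beat"

	"github.com/icinga/icingabeat/beater"
)

func main() {
	err := beat.Run("icingabeat", "", beater.New)
	if err != nil {
		os.Exit(1)
	}
}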
Binary file not shown.
Before: Size: 334 KiB
BIN screenshots/icingabeat-checkresults-dashboard.png Normal file
Binary file not shown.
After: Size: 149 KiB
Binary file not shown.
Before: Size: 273 KiB
@@ -2,7 +2,6 @@ import sys
sys.path.append('../../vendor/github.com/elastic/beats/libbeat/tests/system')
from beat.beat import TestCase


class BaseTest(TestCase):

    @classmethod

@@ -10,10 +10,10 @@ class Test(BaseTest):
        Basic test with exiting Icingabeat normally
        """
        self.render_config_template(
            path=os.path.abspath(self.working_dir) + "/log/*"
            path=os.path.abspath(self.working_dir) + "/log/*"
        )

        icingabeat_proc = self.start_beat()
        self.wait_until(lambda: self.log_contains("icingabeat is running"))
        self.wait_until( lambda: self.log_contains("icingabeat is running"))
        exit_code = icingabeat_proc.kill_and_wait()
        assert exit_code == 0
107 vendor/github.com/elastic/beats/.appveyor.yml generated vendored Normal file
@@ -0,0 +1,107 @@
# Version format
version: "{build}"

# Operating system (build VM template)
os: Windows Server 2012 R2

# Environment variables
environment:
  GOROOT: c:\go1.7.4
  GOPATH: c:\gopath
  PYWIN_DL: https://beats-files.s3.amazonaws.com/deps/pywin32-220.win32-py2.7.exe
  matrix:
    - PROJ: github.com\elastic\beats\metricbeat
      BEAT: metricbeat
    - PROJ: github.com\elastic\beats\filebeat
      BEAT: filebeat
    - PROJ: github.com\elastic\beats\winlogbeat
      BEAT: winlogbeat

# Custom clone folder (variables are not expanded here).
clone_folder: c:\gopath\src\github.com\elastic\beats

# Cache mingw install until appveyor.yml is modified.
cache:
  - C:\ProgramData\chocolatey\bin -> .appveyor.yml
  - C:\ProgramData\chocolatey\lib -> .appveyor.yml
  - C:\go1.7.4 -> .appveyor.yml
  - C:\tools\mingw64 -> .appveyor.yml
  - C:\pywin_inst.exe -> .appveyor.yml

# Scripts that run after cloning repository
install:
  - ps: c:\gopath\src\github.com\elastic\beats\libbeat\scripts\install-go.ps1 -version 1.7.4
  - set PATH=%GOROOT%\bin;%PATH%
  # AppVeyor installed mingw is 32-bit only.
  - ps: >-
      if(!(Test-Path "C:\tools\mingw64\bin\gcc.exe")) {
        cinst mingw > mingw-install.txt
        Push-AppveyorArtifact mingw-install.txt
      }
  - set PATH=C:\tools\mingw64\bin;%GOROOT%\bin;%PATH%
  - set PATH=%GOPATH%\bin;%PATH%
  - go install github.com/elastic/beats/vendor/github.com/pierrre/gotestcover
  - go version
  - go env
  # Download the PyWin32 installer if it is not cached.
  - ps: >-
      if(!(Test-Path "C:\pywin_inst.exe")) {
        (new-object net.webclient).DownloadFile("$env:PYWIN_DL", 'C:/pywin_inst.exe')
      }
  - set PYTHONPATH=C:\Python27
  - set PATH=%PYTHONPATH%;%PYTHONPATH%\Scripts;%PATH%
  - python --version
  - pip install jinja2 nose nose-timer PyYAML redis elasticsearch
  - easy_install C:/pywin_inst.exe

# To run your custom scripts instead of automatic MSBuild
build_script:
  # Compile
  - appveyor AddCompilationMessage "Starting Compile"
  - ps: cd $env:BEAT
  - go build
  - appveyor AddCompilationMessage "Compile Success" -FileName "%BEAT%.exe"

# To run your custom scripts instead of automatic tests
test_script:
  # Unit tests
  - ps: Add-AppveyorTest "Unit Tests" -Outcome Running
  - mkdir build\coverage
  - gotestcover -race -coverprofile=build/coverage/integration.cov github.com/elastic/beats/%BEAT%/...
  - ps: Update-AppveyorTest "Unit Tests" -Outcome Passed
  # System tests
  - ps: Add-AppveyorTest "System tests" -Outcome Running
  - go test -race -c -cover -covermode=atomic -coverpkg ./...
  - ps: |
      if ($env:BEAT -eq "metricbeat") {
        cp .\_meta\fields.common.yml .\_meta\fields.generated.yml
        python .\scripts\fields_collector.py | out-file -append -encoding UTF8 -filepath .\_meta\fields.generated.yml
      }
  - ps: cd tests/system
  - nosetests --with-timer
  - ps: Update-AppveyorTest "System tests" -Outcome Passed

after_test:
  - ps: cd $env:GOPATH\src\$env:PROJ
  - python ..\dev-tools\aggregate_coverage.py -o build\coverage\system.cov .\build\system-tests\run
  - python ..\dev-tools\aggregate_coverage.py -o build\coverage\full.cov .\build\coverage
  - go tool cover -html=build\coverage\full.cov -o build\coverage\full.html
  - ps: Push-AppveyorArtifact build\coverage\full.cov
  - ps: Push-AppveyorArtifact build\coverage\full.html
  # Upload coverage report.
  - "SET PATH=C:\\Python34;C:\\Python34\\Scripts;%PATH%"
  - pip install codecov
  - ps: cd $env:GOPATH\src\github.com\elastic\beats
  - codecov -X gcov -f "%BEAT%\build\coverage\full.cov"

# Executes for both successful and failed builds
on_finish:
  - ps: cd $env:GOPATH\src\$env:PROJ
  - 7z a -r system-tests-output.zip build\system-tests\run
  - ps: Push-AppveyorArtifact system-tests-output.zip

# To disable deployment
deploy: off

# Notifications should only be setup using the AppVeyor UI so that
# forks can be created without inheriting the settings.
27 vendor/github.com/elastic/beats/.editorconfig generated vendored Normal file
@@ -0,0 +1,27 @@
# See: http://editorconfig.org
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[*.json]
indent_size = 4
indent_style = space

[*.py]
indent_style = space
indent_size = 4

[*.yml]
indent_style = space
indent_size = 2

[Makefile]
indent_style = tab

[Vagrantfile]
indent_size = 2
indent_style = space
6 vendor/github.com/elastic/beats/.gitattributes generated vendored Normal file
@@ -0,0 +1,6 @@
CHANGELOG.md merge=union
CHANGELOG.asciidoc merge=union

# Keep these file types as CRLF (Windows).
*.bat text eol=crlf
*.cmd text eol=crlf
11 vendor/github.com/elastic/beats/.github/ISSUE_TEMPLATE.md generated vendored Normal file
@@ -0,0 +1,11 @@
Please post all questions and issues on https://discuss.elastic.co/c/beats
before opening a Github Issue. Your questions will reach a wider audience there,
and if we confirm that there is a bug, then you can open a new issue.

For security vulnerabilities please only send reports to security@elastic.co.
See https://www.elastic.co/community/security for more information.

For confirmed bugs, please report:
- Version:
- Operating System:
- Steps to Reproduce:
29 vendor/github.com/elastic/beats/.gitignore generated vendored Normal file
@@ -0,0 +1,29 @@
# Directories
/.vagrant
/.idea
/build
/*/data
/*/logs
/*/_meta/kibana/index-pattern

# Files
.DS_Store
/glide.lock
/beats.iml
*.dev.yml
*.generated.yml
coverage.out

# Editor swap files
*.swp
*.swo
*.swn

# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
*.exe
*.test
*.prof
*.pyc
116 vendor/github.com/elastic/beats/.travis.yml generated vendored Normal file
@@ -0,0 +1,116 @@
sudo: required
dist: trusty
services:
  - docker

language: go

# Make sure project can also be built on travis for clones of the repo
go_import_path: github.com/elastic/beats

env:
  global:
    # Cross-compile for amd64 only to speed up testing.
    - GOX_FLAGS="-arch amd64"
    - DOCKER_COMPOSE_VERSION: 1.9.0
    - &go_version 1.7.4

matrix:
  include:
    # General checks
    - os: linux
      env: TARGETS="check"
      go: *go_version

    # Filebeat
    - os: linux
      env: TARGETS="-C filebeat testsuite"
      go: *go_version
    - os: osx
      env: TARGETS="TEST_ENVIRONMENT=0 -C filebeat testsuite"
      go: *go_version

    # Heartbeat
    - os: linux
      env: TARGETS="-C heartbeat testsuite"
      go: *go_version
    - os: osx
      env: TARGETS="TEST_ENVIRONMENT=0 -C heartbeat testsuite"
      go: *go_version

    # Libbeat
    - os: linux
      env: TARGETS="-C libbeat testsuite"
      go: *go_version
    - os: linux
      env: TARGETS="-C libbeat crosscompile"
      go: *go_version

    # Metricbeat
    - os: linux
      env: TARGETS="-C metricbeat testsuite"
      go: *go_version
    - os: osx
      env: TARGETS="TEST_ENVIRONMENT=0 -C metricbeat testsuite"
      go: *go_version
    - os: linux
      env: TARGETS="-C metricbeat crosscompile"
      go: *go_version

    # Packetbeat
    - os: linux
      env: TARGETS="-C packetbeat testsuite"
      go: *go_version

    # Winlogbeat
    - os: linux
      env: TARGETS="-C winlogbeat crosscompile"
      go: *go_version

    # Dashboards
    - os: linux
      env: TARGETS="-C libbeat/dashboards"
      go: *go_version

    # Generators
    - os: linux
      env: TARGETS="-C generator/metricbeat test"
      go: *go_version
    - os: linux
      env: TARGETS="-C generator/beat test"
      go: *go_version

addons:
  apt:
    packages:
      - python-virtualenv
      - libpcap-dev
      - geoip-database

before_install:
  - umask 022
  - chmod -R go-w $GOPATH/src/github.com/elastic/beats
  # Docker-compose installation
  - sudo rm /usr/local/bin/docker-compose || true
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin

# Skips installations step
install: true

script:
  - make $TARGETS

notifications:
  slack:
    rooms:
      secure: "e25J5puEA31dOooTI4T+K+zrTs8XeWIGq2cgmiPt9u/g7eqWeQj1UJnVsr8GOu1RPDyuJZJHXqfrvuOYJTdHzXbwjD0JTbwwVVZMkkZW2SWZHG46HCXPiucjWXEr3hXJKBJDDpIx6VxrN7r17dejv1biQ8QuEFZfiB1H8kbH/ho="

after_success:
  # Copy full.cov to coverage.txt because codecov.io requires this file
  - test -f filebeat/build/coverage/full.cov && bash <(curl -s https://codecov.io/bash) -f filebeat/build/coverage/full.cov
  - test -f heartbeat/build/coverage/full.cov && bash <(curl -s https://codecov.io/bash) -f heartbeat/build/coverage/full.cov
  - test -f libbeat/build/coverage/full.cov && bash <(curl -s https://codecov.io/bash) -f libbeat/build/coverage/full.cov
  - test -f metricbeat/build/coverage/full.cov && bash <(curl -s https://codecov.io/bash) -f metricbeat/build/coverage/full.cov
  - test -f packetbeat/build/coverage/full.cov && bash <(curl -s https://codecov.io/bash) -f packetbeat/build/coverage/full.cov
1502 vendor/github.com/elastic/beats/CHANGELOG.asciidoc generated vendored Normal file
File diff suppressed because it is too large
118 vendor/github.com/elastic/beats/CONTRIBUTING.md generated vendored Normal file
@@ -0,0 +1,118 @@
Please post all questions and issues first on
[https://discuss.elastic.co/c/beats](https://discuss.elastic.co/c/beats)
before opening a Github Issue.

# Contributing to Beats

The Beats are open source and we love to receive contributions from our
community — you!

There are many ways to contribute, from writing tutorials or blog posts,
improving the documentation, submitting bug reports and feature requests or
writing code for implementing a whole new protocol.

If you have a bugfix or new feature that you would like to contribute, please
start by opening a topic on the [forums](https://discuss.elastic.co/c/beats).
It may be that somebody is already working on it, or that there are particular
issues that you should know about before implementing the change.

We enjoy working with contributors to get their code accepted. There are many
approaches to fixing a problem and it is important to find the best approach
before writing too much code.

The process for contributing to any of the Elastic repositories is similar.

## Contribution Steps

1. Please make sure you have signed our [Contributor License
Agreement](https://www.elastic.co/contributor-agreement/). We are not
asking you to assign copyright to us, but to give us the right to distribute
your code without restriction. We ask this of all contributors in order to
assure our users of the origin and continuing existence of the code. You
only need to sign the CLA once.
2. Send a pull request! Push your changes to your fork of the repository and
[submit a pull
request](https://help.github.com/articles/using-pull-requests). In the pull
request, describe what your changes do and mention any bugs/issues related
to the pull request.


## Adding a new Beat

If you want to create a new Beat, please read our [developer
guide](https://www.elastic.co/guide/en/beats/libbeat/current/new-beat.html).
You don't need to submit the code to this repository. Most new Beats start in
their own repository and just make use of the libbeat packages. After you have
a working Beat that you'd like to share with others, open a PR to add it to our
list of [community
Beats](https://github.com/elastic/beats/blob/master/libbeat/docs/communitybeats.asciidoc).

## Setting up your dev environment

The Beats are Go programs, so install the latest version of
[golang](http://golang.org/) if you don't have it already. The current Go version
used for development is Golang 1.7.4.

The location where you clone is important. Please clone under the source
directory of your `GOPATH`. If you don't have `GOPATH` already set, you can
simply set it to your home directory (`export GOPATH=$HOME`).

    $ mkdir -p ${GOPATH}/src/github.com/elastic
    $ cd ${GOPATH}/src/github.com/elastic
    $ git clone https://github.com/elastic/beats.git

Note: If you have multiple go paths use `${GOPATH%%:*}` instead of `${GOPATH}`.

Then you can compile a particular Beat by using the Makefile. For example, for
Packetbeat:

    $ cd beats/packetbeat
    $ make

Some of the Beats might have extra development requirements, in which case you'll find a
CONTRIBUTING.md file in the Beat directory.

## Update scripts

The Beats use a variety of scripts based on Python to generate configuration files
and documentation. The command used for this is:

    $ make update

This command has the following dependencies:

* Python >=2.7.9
* [virtualenv](https://virtualenv.pypa.io/en/latest/) for Python

Virtualenv can be installed with the command `easy_install virtualenv` or `pip install virtualenv`.
More details can be found [here](https://virtualenv.pypa.io/en/latest/installation.html).


## Testing

You can run the whole testsuite with the following command:

    $ make testsuite

Running the testsuite has the following requirements:

* Python >=2.7.9
* Docker >=1.10.0
* Docker-compose >= 1.8.0


## Documentation

The documentation for each Beat is located under {beatname}/docs and is based on asciidoc. After changing the docs,
you should verify that the docs are still building to avoid breaking the automated docs build. To build the docs run
`make docs`. If you want to preview the docs for a specific Beat, run `make docs-preview`
inside the folder for the Beat. This will automatically open your browser with the docs for preview.


## Dependencies

To manage the `vendor/` folder we use
[glide](https://github.com/Masterminds/glide), which uses
[glide.yaml](glide.yaml) as a manifest file for the dependencies. Please see
the glide documentation on how to add or update vendored dependencies.

15 vendor/github.com/elastic/beats/Dockerfile generated vendored Normal file
@@ -0,0 +1,15 @@
FROM golang:1.7.4
MAINTAINER Nicolas Ruflin <ruflin@elastic.co>

RUN set -x && \
    apt-get update && \
    apt-get install -y netcat && \
    apt-get clean

COPY libbeat/scripts/docker-entrypoint.sh /entrypoint.sh

RUN mkdir -p /etc/pki/tls/certs
COPY testing/environments/docker/logstash/pki/tls/certs/logstash.crt /etc/pki/tls/certs/logstash.crt

# Create a copy of the repository inside the container.
COPY . /go/src/github.com/elastic/beats/
13 vendor/github.com/elastic/beats/LICENSE generated vendored Normal file
@@ -0,0 +1,13 @@
Copyright (c) 2012–2016 Elasticsearch <http://www.elastic.co>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
120 vendor/github.com/elastic/beats/Makefile generated vendored Normal file
@@ -0,0 +1,120 @@

BUILD_DIR=build
COVERAGE_DIR=${BUILD_DIR}/coverage
BEATS=packetbeat filebeat winlogbeat metricbeat heartbeat
PROJECTS=libbeat ${BEATS}
PROJECTS_ENV=libbeat filebeat metricbeat
SNAPSHOT?=yes

# Runs complete testsuites (unit, system, integration) for all beats with coverage and race detection.
# Also it builds the docs and the generators
.PHONY: testsuite
testsuite:
	$(foreach var,$(PROJECTS),$(MAKE) -C $(var) testsuite || exit 1;)
	#$(MAKE) -C generator test

stop-environments:
	$(foreach var,$(PROJECTS_ENV),$(MAKE) -C $(var) stop-environment || exit 0;)

# Runs unit and system tests without coverage and race detection.
.PHONY: test
test:
	$(foreach var,$(PROJECTS),$(MAKE) -C $(var) test || exit 1;)

# Runs unit tests without coverage and race detection.
.PHONY: unit
unit:
	$(foreach var,$(PROJECTS),$(MAKE) -C $(var) unit || exit 1;)

.PHONY: coverage-report
coverage-report:
	mkdir -p ${COVERAGE_DIR}
	# Writes atomic mode on top of file
	echo 'mode: atomic' > ./${COVERAGE_DIR}/full.cov
	# Collects all coverage files and skips top line with mode
	-tail -q -n +2 ./filebeat/${COVERAGE_DIR}/*.cov >> ./${COVERAGE_DIR}/full.cov
	-tail -q -n +2 ./packetbeat/${COVERAGE_DIR}/*.cov >> ./${COVERAGE_DIR}/full.cov
	-tail -q -n +2 ./winlogbeat/${COVERAGE_DIR}/*.cov >> ./${COVERAGE_DIR}/full.cov
	-tail -q -n +2 ./libbeat/${COVERAGE_DIR}/*.cov >> ./${COVERAGE_DIR}/full.cov
	go tool cover -html=./${COVERAGE_DIR}/full.cov -o ${COVERAGE_DIR}/full.html

.PHONY: update
update:
	$(foreach var,$(PROJECTS),$(MAKE) -C $(var) update || exit 1;)

.PHONY: clean
clean:
	rm -rf build
	$(foreach var,$(PROJECTS),$(MAKE) -C $(var) clean || exit 1;)
	$(MAKE) -C generator clean

# Cleans up the vendor directory from unnecessary files
# This should always be run after updating the dependencies
.PHONY: clean-vendor
clean-vendor:
	sh script/clean_vendor.sh

.PHONY: check
check:
	$(foreach var,$(PROJECTS),$(MAKE) -C $(var) check || exit 1;)
	# Validate that all updates were committed
	$(MAKE) update
	git update-index --refresh
	git diff-index --exit-code HEAD --

.PHONY: fmt
fmt:
	$(foreach var,$(PROJECTS),$(MAKE) -C $(var) fmt || exit 1;)

.PHONY: simplify
simplify:
	$(foreach var,$(PROJECTS),$(MAKE) -C $(var) simplify || exit 1;)

# Collects all dashboards and generates dashboard folder for https://github.com/elastic/beats-dashboards/tree/master/dashboards
.PHONY: beats-dashboards
beats-dashboards:
	mkdir -p build/dashboards
	$(foreach var,$(BEATS),cp -r $(var)/_meta/kibana/ build/dashboards/$(var) || exit 1;)

# Builds the documents for each beat
.PHONY: docs
docs:
	sh libbeat/scripts/build_docs.sh ${PROJECTS}

.PHONY: package
package: update beats-dashboards
	$(foreach var,$(BEATS),SNAPSHOT=$(SNAPSHOT) $(MAKE) -C $(var) package || exit 1;)

	# build the dashboards package
	echo "Start building the dashboards package"
	mkdir -p build/upload/
	BUILD_DIR=${shell pwd}/build SNAPSHOT=$(SNAPSHOT) $(MAKE) -C dev-tools/packer package-dashboards ${shell pwd}/build/upload/build_id.txt
	mv build/upload build/dashboards-upload

	# Copy build files over to top build directory
	mkdir -p build/upload/
	$(foreach var,$(BEATS),cp -r $(var)/build/upload/ build/upload/$(var) || exit 1;)
	cp -r build/dashboards-upload build/upload/dashboards
	# Run tests on the generated packages.
	go test ./dev-tools/package_test.go -files "${shell pwd}/build/upload/*/*"

# Upload nightly builds to S3
.PHONY: upload-nightlies-s3
upload-nightlies-s3: all
	aws s3 cp --recursive --acl public-read build/upload s3://beats-nightlies

# Run after building to sign packages and publish to APT and YUM repos.
.PHONY: package-upload
upload-package:
	$(MAKE) -C dev-tools/packer deb-rpm-s3
	# You must export AWS_ACCESS_KEY=<AWS access> and export AWS_SECRET_KEY=<secret>
	# before running this make target.
	dev-tools/packer/docker/deb-rpm-s3/deb-rpm-s3.sh

.PHONY: release-upload
upload-release:
	aws s3 cp --recursive --acl public-read build/upload s3://download.elasticsearch.org/beats/

.PHONY: notice
notice:
	python dev-tools/generate_notice.py .
1994
vendor/github.com/elastic/beats/NOTICE
generated
vendored
Normal file
File diff suppressed because it is too large
78
vendor/github.com/elastic/beats/README.md
generated
vendored
Normal file
@ -0,0 +1,78 @@
[Travis CI](https://travis-ci.org/elastic/beats)
[AppVeyor](https://ci.appveyor.com/project/elastic-beats/beats/branch/master)
[Go Report Card](http://goreportcard.com/report/elastic/beats)
[Codecov](https://codecov.io/github/elastic/beats?branch=master)

# Beats - The Lightweight Shippers of the Elastic Stack

The [Beats](https://www.elastic.co/products/beats) are lightweight data
shippers, written in Go, that you install on your servers to capture all sorts
of operational data (think of logs, metrics, or network packet data). The Beats
send the operational data to Elasticsearch, either directly or via Logstash, so
it can be visualized with Kibana.

By "lightweight", we mean that Beats have a small installation footprint, use
limited system resources, and have no runtime dependencies.

This repository contains
[libbeat](https://github.com/elastic/beats/tree/master/libbeat), our Go
framework for creating Beats, and all the officially supported Beats:

Beat | Description
--- | ---
[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) | Tails and ships log files
[Heartbeat](https://github.com/elastic/beats/tree/master/heartbeat) | Pings remote services for availability
[Metricbeat](https://github.com/elastic/beats/tree/master/metricbeat) | Fetches sets of metrics from the operating system and services
[Packetbeat](https://github.com/elastic/beats/tree/master/packetbeat) | Monitors the network and applications by sniffing packets
[Winlogbeat](https://github.com/elastic/beats/tree/master/winlogbeat) | Fetches and ships Windows Event logs

In addition to the above Beats, which are officially supported by
[Elastic](https://www.elastic.co), the community has created a set of other
Beats that make use of libbeat but live outside of this GitHub repository. We
maintain a list of community Beats
[here](https://www.elastic.co/guide/en/beats/libbeat/master/community-beats.html).

## Documentation and Getting Started

You can find the documentation and getting started guides for each of the Beats
on the [elastic.co site](https://www.elastic.co/guide/):

* [Beats platform](https://www.elastic.co/guide/en/beats/libbeat/current/index.html)
* [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/index.html)
* [Heartbeat](https://www.elastic.co/guide/en/beats/heartbeat/current/index.html)
* [Metricbeat](https://www.elastic.co/guide/en/beats/metricbeat/current/index.html)
* [Packetbeat](https://www.elastic.co/guide/en/beats/packetbeat/current/index.html)
* [Winlogbeat](https://www.elastic.co/guide/en/beats/winlogbeat/current/index.html)

## Getting Help

If you need help or hit an issue, please start by opening a topic on our
[discuss forums](https://discuss.elastic.co/c/beats). Please note that we
reserve GitHub tickets for confirmed bugs and enhancement requests.

## Downloads

You can download pre-compiled Beats binaries, as well as packages for the
supported platforms, from [this page](https://www.elastic.co/downloads/beats).

## Contributing

We'd love to work with you! You can help make the Beats better in many ways:
report issues, help us reproduce issues, fix bugs, add functionality, or even
create your own Beat.

Please start by reading our [CONTRIBUTING](CONTRIBUTING.md) file.

If you are creating a new Beat, you don't need to submit the code to this
repository. You can simply start working in a new repository and make use of
the libbeat packages, by following our [developer
guide](https://www.elastic.co/guide/en/beats/libbeat/master/new-beat.html).
After you have a working prototype, open a pull request to add your Beat to the
list of [community
Beats](https://github.com/elastic/beats/blob/master/libbeat/docs/communitybeats.asciidoc).

## Building Beats from the Source

See our [CONTRIBUTING](CONTRIBUTING.md) file for information about setting up your dev
environment to build Beats from the source.
115
vendor/github.com/elastic/beats/Vagrantfile
generated
vendored
Normal file
@ -0,0 +1,115 @@
### Documentation
# This is a Vagrantfile for Beats development.
#
# Boxes
# =====
#
# win2012
# -------
# This box is used as a Windows development and testing environment for Beats.
#
# Usage and Features:
#   - Two users exist: Administrator and Vagrant. Both have the password: vagrant
#   - Use 'vagrant ssh' to open a Windows command prompt.
#   - Use 'vagrant rdp' to open a Windows Remote Desktop session. Mac users must
#     install the Microsoft Remote Desktop Client from the App Store.
#   - There is a desktop shortcut labeled "Beats Shell" that opens a command prompt
#     to C:\Gopath\src\github.com\elastic\beats where the code is mounted.
#
# solaris
# -------
#   - Use gmake instead of make.
#
# freebsd and openbsd
# -------------------
#   - Use gmake instead of make.
#   - Folder syncing doesn't work well. Consider copying the files into the box or
#     cloning the project inside the box.

# Provisioning for Windows PowerShell
$winPsProvision = <<SCRIPT
echo 'Creating github.com\elastic in the GOPATH'
New-Item -itemtype directory -path "C:\\Gopath\\src\\github.com\\elastic" -force
echo "Symlinking C:\\Vagrant to C:\\Gopath\\src\\github.com\\elastic"
cmd /c mklink /d C:\\Gopath\\src\\github.com\\elastic\\beats \\\\vboxsvr\\vagrant

echo "Creating Beats Shell desktop shortcut"
$WshShell = New-Object -comObject WScript.Shell
$Shortcut = $WshShell.CreateShortcut("$Home\\Desktop\\Beats Shell.lnk")
$Shortcut.TargetPath = "cmd.exe"
$Shortcut.Arguments = "/K cd /d C:\\Gopath\\src\\github.com\\elastic\\beats"
$Shortcut.Save()

echo "Disable automatic updates"
$AUSettings = (New-Object -com "Microsoft.Update.AutoUpdate").Settings
$AUSettings.NotificationLevel = 1
$AUSettings.Save()
SCRIPT

# Provisioning for Unix/Linux
$unixProvision = <<SCRIPT
echo 'Creating github.com/elastic in the GOPATH'
mkdir -p ~/go/src/github.com/elastic
echo 'Symlinking /vagrant to ~/go/src/github.com/elastic'
cd ~/go/src/github.com/elastic
if [ -d "/vagrant" ]; then ln -s /vagrant beats; fi
SCRIPT

Vagrant.configure(2) do |config|

  # Windows Server 2012 R2
  config.vm.define "win2012", primary: true do |win2012|

    win2012.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-win2012-r2-virtualbox-2016-10-28_1224.box"
    win2012.vm.guest = :windows

    # Communicator for windows boxes
    win2012.vm.communicator = "winrm"

    # Port forward WinRM and RDP
    win2012.vm.network :forwarded_port, guest: 22, host: 2222, id: "ssh", auto_correct: true
    win2012.vm.network :forwarded_port, guest: 3389, host: 33389, id: "rdp", auto_correct: true
    win2012.vm.network :forwarded_port, guest: 5985, host: 55985, id: "winrm", auto_correct: true

    win2012.vm.provision "shell", inline: $winPsProvision
  end

  # Solaris 11.2
  config.vm.define "solaris", primary: true do |solaris|
    solaris.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-solaris-11.2-virtualbox-2016-11-02_1603.box"
    solaris.vm.network :forwarded_port, guest: 22, host: 2223, id: "ssh", auto_correct: true

    solaris.vm.provision "shell", inline: $unixProvision, privileged: false
  end

  # FreeBSD 11.0
  config.vm.define "freebsd", primary: true do |freebsd|
    freebsd.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-freebsd-11.0-virtualbox-2016-11-02_1638.box"
    freebsd.vm.network :forwarded_port, guest: 22, host: 2224, id: "ssh", auto_correct: true

    # Must use NFS to sync a folder on FreeBSD and this requires a host-only network.
    # To enable the /vagrant folder, set disabled to false and uncomment the private_network.
    config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", :nfs => true, disabled: true
    #config.vm.network "private_network", ip: "192.168.135.18"

    freebsd.vm.provision "shell", inline: $unixProvision, privileged: false
  end

  # OpenBSD 5.9-stable
  config.vm.define "openbsd", primary: true do |openbsd|
    openbsd.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-openbsd-5.9-current-virtualbox-2016-11-02_2007.box"
    openbsd.vm.network :forwarded_port, guest: 22, host: 2225, id: "ssh", auto_correct: true

    config.vm.synced_folder ".", "/vagrant", type: "rsync", disabled: true
    config.vm.provider :virtualbox do |vbox|
      vbox.check_guest_additions = false
      vbox.functional_vboxsf = false
    end

    openbsd.vm.provision "shell", inline: $unixProvision, privileged: false
  end

end

# -*- mode: ruby -*-
# vi: set ft=ruby :
1
vendor/github.com/elastic/beats/codecov.yml
generated
vendored
Normal file
@ -0,0 +1 @@
comment: false
4
vendor/github.com/elastic/beats/dev-tools/.beatconfig
generated
vendored
Normal file
@ -0,0 +1,4 @@
packetbeat-/packetbeat-
filebeat-/filebeat-
winlogbeat-/winlogbeat-
logstash-/logstash-
51
vendor/github.com/elastic/beats/dev-tools/README.md
generated
vendored
Normal file
@ -0,0 +1,51 @@
Available scripts
-----------------

The following scripts are used by the unified release process:

| File                  | Description |
|-----------------------|-------------|
| get_version           | Returns the current version. |
| set_version           | Sets the current version in all places where a change is required. Doesn't commit the changes. |
| deploy                | Builds all artifacts for the officially supported Beats. |

Other scripts:

| File                  | Description |
|-----------------------|-------------|
| aggregate_coverage.py | Used to create coverage reports that contain both unit and system test data. |
| merge_pr              | Used to make it easier to open a PR that merges one branch into another. |

Import / export the dashboards of a single Beat:

| File                  | Description |
|-----------------------|-------------|
| import_dashboards.sh  | Bash script to import the Beat dashboards from a local directory into Elasticsearch. |
| import_dashboards.ps1 | PowerShell script to import the Beat dashboards from a local directory into Elasticsearch. |
| export_dashboards.py  | Python script to export the Beat dashboards from Elasticsearch to a local directory. |

Running export_dashboards.py in an environment
----------------------------------------------

If you are running the Python script for the first time, you need to create the
environment by running the following commands in the `beats/dev-tools`
directory:

```
virtualenv env
. env/bin/activate
pip install -r requirements.txt
```

This creates the environment that contains all the Python packages required to
run the `export_dashboards.py` script. For subsequent runs, you only need to
activate the environment:

```
. env/bin/activate
```
50
vendor/github.com/elastic/beats/dev-tools/aggregate_coverage.py
generated
vendored
Normal file
@ -0,0 +1,50 @@
#!/usr/bin/env python

"""Simple script to concatenate coverage reports."""

import os
import sys
import argparse
import fnmatch


def main(arguments):

    parser = argparse.ArgumentParser(description=__doc__,
                                     formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('dir', help="Input dir to search recursively for .cov files")
    parser.add_argument('-o', '--outfile', help="Output file",
                        default=sys.stdout, type=argparse.FileType('w'))

    args = parser.parse_args(arguments)

    # Recursively find all matching .cov files.
    matches = []
    for root, dirnames, filenames in os.walk(args.dir):
        for filename in fnmatch.filter(filenames, '*.cov'):
            matches.append(os.path.join(root, filename))

    # Write to output.
    lines = {}
    args.outfile.write('mode: atomic\n')
    for m in matches:
        if os.path.abspath(args.outfile.name) != os.path.abspath(m):
            with open(m) as f:
                for line in f:
                    if not line.startswith('mode:') and "vendor" not in line:
                        (position, stmt, count) = line.split(" ")
                        stmt = int(stmt)
                        count = int(count)
                        prev_count = 0
                        if position in lines:  # was lines.has_key(position), Python 2 only
                            (_, prev_stmt, prev_count) = lines[position]
                            assert prev_stmt == stmt
                        lines[position] = (position, stmt, prev_count + count)

    for line in sorted(["%s %d %d\n" % lines[key] for key in lines.keys()]):
        args.outfile.write(line)


if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
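
Note: to make the merge semantics above concrete, here is a minimal, self-contained sketch (not part of the repository; the two profile strings are invented sample data). Go cover profiles in `mode: atomic` that cover the same source positions have their hit counts summed:

```
# Invented sample profiles; each data line is "position statements count".
profile_a = """mode: atomic
github.com/elastic/beats/libbeat/beat/beat.go:10.2,12.3 2 1
github.com/elastic/beats/libbeat/beat/beat.go:14.2,15.3 1 0
"""
profile_b = """mode: atomic
github.com/elastic/beats/libbeat/beat/beat.go:10.2,12.3 2 3
"""

# Same merge rule as aggregate_coverage.py: sum counts per position and
# assert that the statement counts agree across profiles.
merged = {}
for profile in (profile_a, profile_b):
    for line in profile.splitlines():
        if line.startswith('mode:'):
            continue
        position, stmt, count = line.split(" ")
        _, prev_stmt, prev_count = merged.get(position, (position, stmt, 0))
        assert prev_stmt == stmt
        merged[position] = (position, stmt, prev_count + int(count))

print('mode: atomic')
for position in sorted(merged):
    print("%s %s %d" % merged[position])  # the 10.2,12.3 block ends with count 1 + 3 = 4
```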
99
vendor/github.com/elastic/beats/dev-tools/cherrypick_pr
generated
vendored
Executable file
@ -0,0 +1,99 @@
#!/usr/bin/env python
import sys
import argparse
from subprocess import check_call, call, check_output

"""
Example usage:

./dev-tools/cherrypick_pr 5.0 2565 6490604aa0cf7fa61932a90700e6ca988fc8a527

In case of backporting errors, fix them, then run:

git cherry-pick --continue
./dev-tools/cherrypick_pr 5.0 2565 6490604aa0cf7fa61932a90700e6ca988fc8a527 --continue

This script does the following:

* cleans up both from_branch and to_branch (warning: drops local changes)
* creates a temporary branch named something like "backport_2565_5.0"
* calls the git cherry-pick command in this branch
* after fixing the merge errors (if needed), pushes the branch to your
  remote

You then just need to go to GitHub and open the PR.

Note that you need to take the commit hashes from `git log` on the
from_branch; copying the IDs from GitHub doesn't work in case we squashed the
PR.
"""


def main():
    parser = argparse.ArgumentParser(
        description="Creates a PR for merging two branches")
    parser.add_argument("to_branch",
                        help="To branch (e.g 5.0)")
    parser.add_argument("pr_number",
                        help="The PR number being merged (e.g. 2345)")
    parser.add_argument("commit_hashes", metavar="hash", nargs="+",
                        help="The commit hashes to cherry pick." +
                             " You can specify multiple.")
    parser.add_argument("--yes", action="store_true",
                        help="Assume yes. Warning: discards local changes.")
    parser.add_argument("--continue", action="store_true",
                        help="Continue after fixing merging errors.")
    parser.add_argument("--from_branch", default="master",
                        help="From branch")
    args = parser.parse_args()

    print(args)

    tmp_branch = "backport_{}_{}".format(args.pr_number, args.to_branch)

    if not vars(args)["continue"]:
        if not args.yes and raw_input("This will destroy all local changes. " +
                                      "Continue? [y/n]: ") != "y":
            return 1
        check_call("git reset --hard", shell=True)
        check_call("git clean -df", shell=True)
        check_call("git fetch", shell=True)

        check_call("git checkout {}".format(args.from_branch), shell=True)
        check_call("git pull", shell=True)

        check_call("git checkout {}".format(args.to_branch), shell=True)
        check_call("git pull", shell=True)

        call("git branch -D {} > /dev/null".format(tmp_branch), shell=True)
        check_call("git checkout -b {}".format(tmp_branch), shell=True)
        if call("git cherry-pick -x {}".format(" ".join(args.commit_hashes)),
                shell=True) != 0:
            print("Looks like you have cherry-pick errors.")
            print("Fix them, then run:")
            print("    git cherry-pick --continue")
            print("    {} --continue".format(" ".join(sys.argv)))
            return 1

    if len(check_output("git status -s", shell=True).strip()) > 0:
        print("Looks like you have uncommitted changes." +
              " Please execute first: git cherry-pick --continue")
        return 1

    if len(check_output("git log HEAD...{}".format(args.to_branch),
                        shell=True).strip()) == 0:
        print("No commit to push")
        return 1

    print("Ready to push branch.")
    remote = raw_input("To which remote should I push? (your fork): ")
    call("git push {} :{} > /dev/null".format(remote, tmp_branch),
         shell=True)
    check_call("git push --set-upstream {} {}"
               .format(remote, tmp_branch), shell=True)
    print("Done. Open PR by following this URL: \n\t" +
          "https://github.com/elastic/beats/compare/{}...{}:{}?expand=1"
          .format(args.to_branch, remote, tmp_branch))


if __name__ == "__main__":
    sys.exit(main())
80
vendor/github.com/elastic/beats/dev-tools/common.bash
generated
vendored
Normal file
@ -0,0 +1,80 @@
#
# File: common.bash
#
# Common bash routines.
#

# Script directory:
_sdir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# debug "msg"
# Write a debug message to stderr.
debug()
{
  if [ "$VERBOSE" == "true" ]; then
    echo "DEBUG: $1" >&2
  fi
}

# err "msg"
# Write an error message to stderr.
err()
{
  echo "ERROR: $1" >&2
}

# get_go_version
# Read the project's Go version and return it in the GO_VERSION variable.
# On failure it will exit.
get_go_version() {
  GO_VERSION=$(awk '/^:go-version:/{print $NF}' "${_sdir}/../libbeat/docs/version.asciidoc")
  if [ -z "$GO_VERSION" ]; then
    err "Failed to detect the project's Go version"
    exit 1
  fi
}

# install_gimme
# Install gimme to HOME/bin.
install_gimme() {
  # Install gimme
  if [ ! -f "${HOME}/bin/gimme" ]; then
    mkdir -p ${HOME}/bin
    curl -sL -o ${HOME}/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/v1.1.0/gimme
    chmod +x ${HOME}/bin/gimme
  fi

  GIMME="${HOME}/bin/gimme"
  debug "Gimme version $(${GIMME} version)"
}

# setup_go_root "version"
# This configures the Go version being used. It sets GOROOT and adds
# GOROOT/bin to the PATH. It uses gimme to download the Go version if
# it does not already exist in the ~/.gimme dir.
setup_go_root() {
  local version=${1}

  install_gimme

  # Setup GOROOT and add go to the PATH.
  ${GIMME} "${version}" > /dev/null
  source "${HOME}/.gimme/envs/go${version}.env" 2> /dev/null

  debug "$(go version)"
}

# setup_go_path "gopath"
# This sets GOPATH and adds GOPATH/bin to the PATH.
setup_go_path() {
  local gopath="${1}"
  if [ -z "$gopath" ]; then return; fi

  # Setup GOPATH.
  export GOPATH="${gopath}"

  # Add GOPATH to PATH.
  export PATH="${GOPATH}/bin:${PATH}"

  debug "GOPATH=${GOPATH}"
}
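
Note: `get_go_version` is just a one-line awk extraction. A rough Python equivalent (not part of the repository; the sample asciidoc content below is invented, but the real `libbeat/docs/version.asciidoc` defines the same `:go-version:` attribute) looks like this:

```
import re

# Invented sample of libbeat/docs/version.asciidoc.
version_asciidoc = """:stack-version: 5.1.1
:doc-branch: 5.1
:go-version: 1.7.4
"""

match = re.search(r'^:go-version:\s*(\S+)\s*$', version_asciidoc, re.MULTILINE)
if match is None:
    raise SystemExit("Failed to detect the project's Go version")
print(match.group(1))  # -> 1.7.4
```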
25
vendor/github.com/elastic/beats/dev-tools/deploy
generated
vendored
Executable file
@ -0,0 +1,25 @@
#!/usr/bin/env python
import os
import argparse
from subprocess import check_call


def main():
    parser = argparse.ArgumentParser(
        description="Builds all the Beats artifacts")
    parser.add_argument("--no-snapshot", action="store_true",
                        help="Don't append -SNAPSHOT to the version.")
    args = parser.parse_args()

    dir = os.path.dirname(os.path.realpath(__file__))
    os.chdir(dir + "/../")
    print("Getting dependencies")
    check_call("make clean", shell=True)
    print("Done building Docker images.")
    if args.no_snapshot:
        check_call("make SNAPSHOT=no package", shell=True)
    else:
        check_call("make SNAPSHOT=yes package", shell=True)
    print("All done")

if __name__ == "__main__":
    main()
116
vendor/github.com/elastic/beats/dev-tools/export_dashboards.py
generated
vendored
Normal file
@ -0,0 +1,116 @@
from elasticsearch import Elasticsearch
import argparse
import os
import json
import re


def ExportDashboards(es, regex, kibana_index, output_directory):
    res = es.search(
        index=kibana_index,
        doc_type="dashboard",
        size=1000)

    try:
        reg_exp = re.compile(regex, re.IGNORECASE)
    except re.error:
        print("Wrong regex {}".format(regex))
        return

    for doc in res['hits']['hits']:

        if not reg_exp.match(doc["_source"]["title"]):
            print("Ignore dashboard", doc["_source"]["title"])
            continue

        # save dashboard
        SaveJson("dashboard", doc, output_directory)

        # save dependencies
        panels = json.loads(doc['_source']['panelsJSON'])
        for panel in panels:
            if panel["type"] == "visualization":
                ExportVisualization(
                    es,
                    panel["id"],
                    kibana_index,
                    output_directory)
            elif panel["type"] == "search":
                ExportSearch(
                    es,
                    panel["id"],
                    kibana_index,
                    output_directory)
            else:
                print("Unknown type {} in dashboard".format(panel["type"]))


def ExportVisualization(es, visualization, kibana_index, output_directory):
    doc = es.get(
        index=kibana_index,
        doc_type="visualization",
        id=visualization)

    # save visualization
    SaveJson("visualization", doc, output_directory)

    # save dependencies
    if "savedSearchId" in doc["_source"]:
        search = doc["_source"]['savedSearchId']
        ExportSearch(
            es,
            search,
            kibana_index,
            output_directory)


def ExportSearch(es, search, kibana_index, output_directory):
    doc = es.get(
        index=kibana_index,
        doc_type="search",
        id=search)

    # save search
    SaveJson("search", doc, output_directory)


def SaveJson(doc_type, doc, output_directory):

    dir = os.path.join(output_directory, doc_type)
    if not os.path.exists(dir):
        os.makedirs(dir)
    # replace unsupported characters
    filepath = os.path.join(dir, re.sub(r'[\>\<:"/\\\|\?\*]', '', doc['_id']) + '.json')
    with open(filepath, 'w') as f:
        json.dump(doc['_source'], f, indent=2)
        print("Written {}".format(filepath))


def main():
    parser = argparse.ArgumentParser(
        description="Export the Kibana dashboards together with"
                    " all used visualizations, searches and index patterns")
    parser.add_argument("--url",
                        help="Elasticsearch URL. By default: http://localhost:9200",
                        default="http://localhost:9200")
    parser.add_argument("--regex",
                        help="Regular expression to match all the dashboards to be exported. For example: metricbeat*",
                        required=True)
    parser.add_argument("--kibana",
                        help="Elasticsearch index where the Kibana settings are stored. By default: .kibana",
                        default=".kibana")
    parser.add_argument("--dir", help="Output directory. By default: output",
                        default="output")

    args = parser.parse_args()

    print("Export {} dashboards to {} directory".format(args.regex, args.dir))
    print("Elasticsearch URL: {}".format(args.url))
    print("Elasticsearch index to store Kibana's"
          " dashboards: {}".format(args.kibana))

    es = Elasticsearch(args.url)
    ExportDashboards(es, args.regex, args.kibana, args.dir)

if __name__ == "__main__":
    main()
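
Note: for readers unfamiliar with the `.kibana` index layout this script walks, here is a minimal sketch with an invented dashboard document. `panelsJSON` is a JSON-encoded string whose entries carry the `id` and `type` of each referenced visualization or search, which is exactly what `ExportDashboards` loops over:

```
import json

# Invented example of a Kibana 5.x dashboard document as stored in .kibana;
# only the fields that ExportDashboards actually touches are shown.
dashboard_doc = {
    "_id": "Metricbeat-cpu",
    "_source": {
        "title": "Metricbeat cpu",
        "panelsJSON": json.dumps([
            {"id": "Metricbeat-cpu-usage", "type": "visualization"},
            {"id": "Metricbeat-processes", "type": "search"},
        ]),
    },
}

# The same dependency walk the script performs:
for panel in json.loads(dashboard_doc["_source"]["panelsJSON"]):
    print("{}: {}".format(panel["type"], panel["id"]))
# visualization: Metricbeat-cpu-usage
# search: Metricbeat-processes
```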
97
vendor/github.com/elastic/beats/dev-tools/generate_notice.py
generated
vendored
Normal file
@ -0,0 +1,97 @@
import glob
import sys
import os
import datetime
import argparse


def read_file(filename):

    if not os.path.isfile(filename):
        print("File not found {}".format(filename))
        return ""

    with open(filename, 'r') as f:
        file_content = f.read()
        return file_content


def get_library_name(license):

    lib = ""
    path = os.path.dirname(license)
    # get the last three directories
    for i in range(0, 3):
        path, x = os.path.split(path)
        if len(lib) == 0:
            lib = x
        elif len(x) > 0:
            lib = x + "/" + lib

    return lib


def add_licenses(f, licenses):

    for license in licenses:
        for license_file in glob.glob(license):
            f.write("\n--------------------------------------------------------------------\n")
            f.write("{}\n".format(get_library_name(license_file)))
            f.write("--------------------------------------------------------------------\n")
            copyright = read_file(license_file)
            if "Apache License" not in copyright:
                f.write(copyright)
            else:
                # it's an Apache License, so include only the NOTICE file
                f.write("Apache License\n\n")
                for notice_file in glob.glob(os.path.join(os.path.dirname(license_file), "NOTICE*")):
                    f.write("-------{}-----\n".format(os.path.basename(notice_file)))
                    f.write(read_file(notice_file))


def create_notice(filename, beat, copyright, licenses):

    now = datetime.datetime.now()

    with open(filename, "w+") as f:

        # Add header
        f.write("{}\n".format(beat))
        f.write("Copyright 2014-{0} {1}\n".format(now.year, copyright))
        f.write("\n")
        f.write("This product includes software developed by The Apache Software \nFoundation (http://www.apache.org/).\n\n")

        # Add licenses for 3rd party libraries
        f.write("==========================================================================\n")
        f.write("Third party libraries used by the Beats project:\n")
        f.write("==========================================================================\n\n")
        add_licenses(f, licenses)


if __name__ == "__main__":

    parser = argparse.ArgumentParser(
        description="Generate the NOTICE file from all vendor directories available in a given directory")
    parser.add_argument("vendor",
                        help="directory to search for vendor directories")
    parser.add_argument("-b", "--beat", default="Elastic Beats",
                        help="Beat name")
    parser.add_argument("-c", "--copyright", default="Elasticsearch BV",
                        help="copyright owner")

    args = parser.parse_args()

    cwd = os.getcwd()
    notice = os.path.join(cwd, "NOTICE")
    licenses = []

    for root, dirs, files in os.walk(args.vendor):
        if 'vendor' in dirs:
            license = os.path.join(os.path.join(root, 'vendor'),
                                   '**/**/**/LICENSE*')
            licenses.append(license)

    print("Get the licenses available from {}".format(licenses))
    create_notice(notice, args.beat, args.copyright, licenses)

    print("Available at {}\n".format(notice))
37
vendor/github.com/elastic/beats/dev-tools/get_version
generated
vendored
Executable file
@ -0,0 +1,37 @@
#!/usr/bin/env python
import os
import re
import argparse

pattern = re.compile(r'(const\s|)\w*(v|V)ersion\s=\s"(?P<version>.*)"')
vendored_libbeat = os.path.normpath("vendor/github.com/elastic/beats")


def get_filepath(filename):
    script_directory = os.path.abspath(os.path.dirname(os.path.realpath(__file__)))
    index = script_directory.find(vendored_libbeat)
    if index > 0:
        # Community beat detected
        filename = os.path.join(script_directory[:index], filename)
        if os.path.exists(filename):
            return filename  # Community beat version exists
    return os.path.abspath(os.path.join(script_directory, os.pardir, "libbeat", "beat", "version.go"))


def main():
    parser = argparse.ArgumentParser(
        description="Prints the current version to stdout.")
    args = parser.parse_args()

    goversion_filepath = get_filepath("version.go")

    with open(goversion_filepath, "r") as f:
        for line in f:
            match = pattern.match(line)
            if match:
                print(match.group('version'))
                return
    print("No version found in file {}".format(goversion_filepath))


if __name__ == "__main__":
    main()
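
Note: a quick sketch of how the regex above extracts the version; the sample `version.go` line is invented, but follows the shape the pattern expects:

```
import re

pattern = re.compile(r'(const\s|)\w*(v|V)ersion\s=\s"(?P<version>.*)"')

# Invented sample of the kind of line this script looks for in version.go.
sample = 'const defaultBeatVersion = "5.1.1"'
match = pattern.match(sample)
print(match.group('version'))  # -> 5.1.1
```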
18
vendor/github.com/elastic/beats/dev-tools/glide.yaml
generated
vendored
Normal file
@ -0,0 +1,18 @@
package: github.com/elastic/beats/dev-tools
import: []
testImports:
- name: github.com/blakesmith/ar
  version: 8bd4349a67f2533b078dbc524689d15dba0f4659
- name: github.com/cavaliercoder/go-rpm
  version: 9664735b838ea0a81e4aace3197ebe0d4040f952
- name: golang.org/x/crypto
  version: 2f8be38b9a7533b8763d48273737ff6e90428a96
  subpackages:
  - cast5
  - openpgp
  - openpgp/armor
  - openpgp/elgamal
  - openpgp/errors
  - openpgp/packet
  - openpgp/s2k
145
vendor/github.com/elastic/beats/dev-tools/jenkins_ci
generated
vendored
Executable file
@ -0,0 +1,145 @@
#!/usr/bin/env bash
set -e

# Script directory:
SDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
SNAME="$(basename "$0")"

source "${SDIR}/common.bash"

usage() {
cat << EOF
  Usage: $SNAME [-d] [-h] [-v] [-r] [-w=WORKSPACE] (-g|-b|-c)

  Description: Executes a build using the project's Go version.

  Options:
    -w=WORKSPACE          Required. Specifies the path to the Jenkins workspace.
                          If not set then the WORKSPACE environment variable is
                          used. The workspace will be treated as the GOPATH.
    -b | --build          Perform a build which includes make targets: check,
                          testsuite, coverage-report, and docs.
    -c | --cleanup        Clean up after the build by removing the checkout of
                          elastic/docs and stopping any running containers
                          started by the build. This cannot be specified with
                          --build.
    -g | --go-version     Optional. Write the project's Go version to stdout
                          and then exit. Can be used to set up Go with
                          eval "\$(gimme \$(./jenkins_ci -g))".
    -i | --install-gimme  Optional. Installs gimme to HOME/bin.
    -r | --race           Optional. Enable the Go race detector for tests that
                          are run.
    -d | --debug          Optional. Runs the script with 'set -x' to log a trace
                          of all commands and their arguments being executed.
    -v | --verbose        Optional. Enable verbose logging from this script to stderr.
    -h | --help           Optional. Print this usage information.

  Examples:
    Print project Go version: ./$SNAME --go-version
    Build with race detector: ./$SNAME -b -r
    Stop test environment:    ./$SNAME -c

  Jenkins Setup:

  1) Jenkins should be set up to check out elastic/beats into
     \$WORKSPACE/src/github.com/elastic/
  2) The single build script should be added that executes
     \$WORKSPACE/src/github.com/elastic/beats/dev-tools/$SNAME -d -v -b --race
  3) A post build action should be added that executes
     \$WORKSPACE/src/github.com/elastic/beats/dev-tools/$SNAME -d -v -c
EOF
}

# Parse command line arguments.
parse_args() {
  for i in "$@"
  do
    case $i in
      -b|--build)
        BUILD=true
        shift
        ;;
      -c|--cleanup)
        CLEANUP=true
        shift
        ;;
      -d|--debug)
        set -x
        shift
        ;;
      -g|--go-version)
        get_go_version
        echo "${GO_VERSION}"
        exit 0
        ;;
      -h|--help)
        usage
        exit 1
        ;;
      -i|--install-gimme)
        install_gimme
        exit 0
        ;;
      -r|--race)
        export RACE_DETECTOR=1
        shift
        ;;
      -v|--verbose)
        VERBOSE=true
        shift
        ;;
      -w=*|--workspace=*)
        WORKSPACE="${i#*=}"
        shift
        ;;
      *)
        echo "Invalid argument: $i"
        usage
        exit 1
        ;;
    esac
  done

  if [ -z "$WORKSPACE" ]; then
    err "WORKSPACE env var must be set or --workspace must be specified"
    exit 1
  fi
}

build() {
  make check
  make testsuite
  make coverage-report
  make docs
}

cleanup() {
  # Remove the checkout of elastic/docs if it exists.
  rm -rf "${SDIR}/../build/docs"

  make stop-environments
}

main() {
  cd "${SDIR}/.."
  parse_args $*
  get_go_version
  setup_go_root ${GO_VERSION}
  setup_go_path ${WORKSPACE}

  if [ "$BUILD" == "true" ] && [ "$CLEANUP" == "true" ]; then
    err "--build and --cleanup cannot be used together"
    exit 1
  elif [ "$BUILD" == "true" ]; then
    chmod -R go-w "${GOPATH}/src/github.com/elastic/beats"
    build
  elif [ "$CLEANUP" == "true" ]; then
    cleanup
  else
    err "Use either --build or --cleanup"
    exit 1
  fi
}

umask 022
main $*
62
vendor/github.com/elastic/beats/dev-tools/merge_pr
generated
vendored
Executable file
@ -0,0 +1,62 @@
#!/usr/bin/env python
import sys
import argparse
from subprocess import check_call, call, check_output


def main():
    parser = argparse.ArgumentParser(
        description="Creates a PR for merging two branches")
    parser.add_argument("from_branch",
                        help="From branch (e.g 1.1)")
    parser.add_argument("to_branch",
                        help="To branch (e.g master)")
    parser.add_argument("--yes", action="store_true",
                        help="Assume yes. Warning: discards local changes.")
    parser.add_argument("--continue", action="store_true",
                        help="Continue after fixing merging errors.")
    args = parser.parse_args()

    tmp_branch = "automatic_merge_from_{}_to_{}_branch".format(
        args.from_branch, args.to_branch)

    if not vars(args)["continue"]:
        if not args.yes and raw_input("This will destroy all local changes. " +
                                      "Continue? [y/n]: ") != "y":
            return 1
        check_call("git reset --hard", shell=True)
        check_call("git clean -dfx", shell=True)
        check_call("git fetch", shell=True)

        check_call("git checkout {}".format(args.from_branch), shell=True)
        check_call("git pull", shell=True)

        check_call("git checkout {}".format(args.to_branch), shell=True)
        check_call("git pull", shell=True)
        call("git branch -D {} > /dev/null".format(tmp_branch), shell=True)
        check_call("git checkout -b {}".format(tmp_branch), shell=True)
        if call("git merge {}".format(args.from_branch), shell=True) != 0:
            print("Looks like you have merge errors.")
            print("Fix them, commit, then run: {} --continue"
                  .format(" ".join(sys.argv)))
            return 1

    if len(check_output("git status -s", shell=True).strip()) > 0:
        print("Looks like you have uncommitted changes")
        return 1

    if len(check_output("git log HEAD...{}".format(args.to_branch),
                        shell=True).strip()) == 0:
        print("No commit to push")
        return 1

    print("Ready to push branch.")
    remote = raw_input("To which remote should I push? (your fork): ")
    call("git push {} :{} > /dev/null".format(remote, tmp_branch),
         shell=True)
    check_call("git push --set-upstream {} {}"
               .format(remote, tmp_branch), shell=True)
    print("Done. Go to GitHub and open the PR")


if __name__ == "__main__":
    sys.exit(main())
329
vendor/github.com/elastic/beats/dev-tools/package_test.go
generated
vendored
Normal file
@ -0,0 +1,329 @@
package dev_tools

// This file contains tests that can be run on the generated packages.
// To run these tests use `go test package_test.go`.

import (
	"archive/tar"
	"archive/zip"
	"bytes"
	"compress/gzip"
	"flag"
	"io"
	"os"
	"path/filepath"
	"regexp"
	"strings"
	"testing"

	"github.com/blakesmith/ar"
	"github.com/cavaliercoder/go-rpm"
)

const (
	expectedConfigMode   = os.FileMode(0600)
	expectedManifestMode = os.FileMode(0644)
	expectedConfigUID    = 0
	expectedConfigGID    = 0
)

var (
	configFilePattern   = regexp.MustCompile(`.*beat\.yml`)
	manifestFilePattern = regexp.MustCompile(`manifest.yml`)
)

var (
	files = flag.String("files", "../build/upload/*/*", "filepath glob containing package files")
)

func TestRPM(t *testing.T) {
	rpms := getFiles(t, regexp.MustCompile(`\.rpm$`))
	for _, rpm := range rpms {
		checkRPM(t, rpm)
	}
}

func TestDeb(t *testing.T) {
	debs := getFiles(t, regexp.MustCompile(`\.deb$`))
	buf := new(bytes.Buffer)
	for _, deb := range debs {
		checkDeb(t, deb, buf)
	}
}

func TestTar(t *testing.T) {
	tars := getFiles(t, regexp.MustCompile(`\.tar\.gz$`))
	for _, tar := range tars {
		checkTar(t, tar)
	}
}

func TestZip(t *testing.T) {
	zips := getFiles(t, regexp.MustCompile(`^\w+beat-\S+.zip$`))
	for _, zip := range zips {
		checkZip(t, zip)
	}
}

// Sub-tests

func checkRPM(t *testing.T, file string) {
	p, err := readRPM(file)
	if err != nil {
		t.Error(err)
		return
	}

	checkConfigPermissions(t, p)
	checkConfigOwner(t, p)
	checkManifestPermissions(t, p)
	checkManifestOwner(t, p)
}

func checkDeb(t *testing.T, file string, buf *bytes.Buffer) {
	p, err := readDeb(file, buf)
	if err != nil {
		t.Error(err)
		return
	}

	checkConfigPermissions(t, p)
	checkConfigOwner(t, p)
	checkManifestPermissions(t, p)
	checkManifestOwner(t, p)
}

func checkTar(t *testing.T, file string) {
	p, err := readTar(file)
	if err != nil {
		t.Error(err)
		return
	}

	checkConfigPermissions(t, p)
	checkConfigOwner(t, p)
	checkManifestPermissions(t, p)
}

func checkZip(t *testing.T, file string) {
	p, err := readZip(file)
	if err != nil {
		t.Error(err)
		return
	}

	checkConfigPermissions(t, p)
	checkManifestPermissions(t, p)
}

// Verify that the main configuration file is installed with a 0600 file mode.
func checkConfigPermissions(t *testing.T, p *packageFile) {
	t.Run(p.Name+" config file permissions", func(t *testing.T) {
		for _, entry := range p.Contents {
			if configFilePattern.MatchString(entry.File) {
				mode := entry.Mode.Perm()
				if expectedConfigMode != mode {
					t.Errorf("file %v has wrong permissions: expected=%v actual=%v",
						entry.File, expectedConfigMode, mode)
				}
				return
			}
		}
		t.Errorf("no config file found matching %v", configFilePattern)
	})
}

func checkConfigOwner(t *testing.T, p *packageFile) {
	t.Run(p.Name+" config file owner", func(t *testing.T) {
		for _, entry := range p.Contents {
			if configFilePattern.MatchString(entry.File) {
				if expectedConfigUID != entry.UID {
					t.Errorf("file %v should be owned by user %v, owner=%v", entry.File, expectedConfigUID, entry.UID)
				}
				if expectedConfigGID != entry.GID {
					t.Errorf("file %v should be owned by group %v, group=%v", entry.File, expectedConfigGID, entry.GID)
				}
				return
			}
		}
		t.Errorf("no config file found matching %v", configFilePattern)
	})
}

// Verify that the modules manifest.yml files are installed with a 0644 file mode.
func checkManifestPermissions(t *testing.T, p *packageFile) {
	t.Run(p.Name+" manifest file permissions", func(t *testing.T) {
		for _, entry := range p.Contents {
			if manifestFilePattern.MatchString(entry.File) {
				mode := entry.Mode.Perm()
				if expectedManifestMode != mode {
					t.Errorf("file %v has wrong permissions: expected=%v actual=%v",
						entry.File, expectedManifestMode, mode)
				}
			}
		}
	})
}

// Verify that the manifest owner is root.
func checkManifestOwner(t *testing.T, p *packageFile) {
	t.Run(p.Name+" manifest file owner", func(t *testing.T) {
		for _, entry := range p.Contents {
			if manifestFilePattern.MatchString(entry.File) {
				if expectedConfigUID != entry.UID {
					t.Errorf("file %v should be owned by user %v, owner=%v", entry.File, expectedConfigUID, entry.UID)
				}
				if expectedConfigGID != entry.GID {
					t.Errorf("file %v should be owned by group %v, group=%v", entry.File, expectedConfigGID, entry.GID)
				}
			}
		}
	})
}

// Helpers

type packageFile struct {
	Name     string
	Contents map[string]packageEntry
}

type packageEntry struct {
	File string
	UID  int
	GID  int
	Mode os.FileMode
}

func getFiles(t *testing.T, pattern *regexp.Regexp) []string {
	matches, err := filepath.Glob(*files)
	if err != nil {
		t.Fatal(err)
	}

	files := matches[:0]
	for _, f := range matches {
		if pattern.MatchString(filepath.Base(f)) {
			files = append(files, f)
		}
	}

	return files
}

func readRPM(rpmFile string) (*packageFile, error) {
	p, err := rpm.OpenPackageFile(rpmFile)
	if err != nil {
		return nil, err
	}

	contents := p.Files()
	pf := &packageFile{Name: filepath.Base(rpmFile), Contents: map[string]packageEntry{}}

	for _, file := range contents {
		pf.Contents[file.Name()] = packageEntry{
			File: file.Name(),
			Mode: file.Mode(),
		}
	}

	return pf, nil
}

// readDeb reads the data.tar.gz file from the .deb.
func readDeb(debFile string, dataBuffer *bytes.Buffer) (*packageFile, error) {
	file, err := os.Open(debFile)
	if err != nil {
		return nil, err
	}
	defer file.Close()

	arReader := ar.NewReader(file)
	for {
		header, err := arReader.Next()
		if err != nil {
			if err == io.EOF {
				break
			}
			return nil, err
		}

		if strings.HasPrefix(header.Name, "data.tar.gz") {
			dataBuffer.Reset()
			_, err := io.Copy(dataBuffer, arReader)
			if err != nil {
				return nil, err
			}

			gz, err := gzip.NewReader(dataBuffer)
			if err != nil {
				return nil, err
			}
			defer gz.Close()

			return readTarContents(filepath.Base(debFile), gz)
		}
	}

	return nil, io.EOF
}

func readTar(tarFile string) (*packageFile, error) {
	file, err := os.Open(tarFile)
	if err != nil {
		return nil, err
	}
	defer file.Close()

	var fileReader io.ReadCloser = file
	if strings.HasSuffix(tarFile, ".gz") {
		if fileReader, err = gzip.NewReader(file); err != nil {
			return nil, err
		}
		defer fileReader.Close()
	}

	return readTarContents(filepath.Base(tarFile), fileReader)
}

func readTarContents(tarName string, data io.Reader) (*packageFile, error) {
	tarReader := tar.NewReader(data)

	p := &packageFile{Name: tarName, Contents: map[string]packageEntry{}}
	for {
		header, err := tarReader.Next()
		if err != nil {
			if err == io.EOF {
				break
			}
			return nil, err
		}

		p.Contents[header.Name] = packageEntry{
			File: header.Name,
			UID:  header.Uid,
			GID:  header.Gid,
			Mode: os.FileMode(header.Mode),
		}
	}

	return p, nil
}

func readZip(zipFile string) (*packageFile, error) {
	r, err := zip.OpenReader(zipFile)
	if err != nil {
		return nil, err
	}
	defer r.Close()

	p := &packageFile{Name: filepath.Base(zipFile), Contents: map[string]packageEntry{}}
	for _, f := range r.File {
		p.Contents[f.Name] = packageEntry{
			File: f.Name,
			Mode: f.Mode(),
		}
	}

	return p, nil
}
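
Note: the same kind of permission check that `checkTar` performs can be sketched in a few lines of Python, e.g. for spot-checking an artifact by hand. This is an illustrative sketch, not part of the repository; the archive path below is a hypothetical example of a `make package` output file:

```
import re
import tarfile

# Hypothetical artifact path; any Beat tar.gz produced by `make package` works.
ARCHIVE = "build/upload/filebeat/filebeat-5.1.1-linux-x86_64.tar.gz"
CONFIG_FILE = re.compile(r'.*beat\.yml$')

with tarfile.open(ARCHIVE, "r:gz") as archive:
    for member in archive.getmembers():
        if CONFIG_FILE.match(member.name):
            mode = member.mode & 0o777
            if mode != 0o600:
                print("{} has wrong permissions: expected=0600 actual={:o}".format(
                    member.name, mode))
```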
7
vendor/github.com/elastic/beats/dev-tools/packer/.gitignore
generated
vendored
Normal file
@ -0,0 +1,7 @@
*.swp
*.swo
/build/
/env/

# copied over from xgo-image/
docker/xgo-image-deb6/base/build.sh
127
vendor/github.com/elastic/beats/dev-tools/packer/Makefile
generated
vendored
Normal file
@ -0,0 +1,127 @@
BUILDID?=$(shell git rev-parse HEAD)
SNAPSHOT?=yes

BEATS_BUILDER_IMAGE?=tudorg/beats-builder
BEATS_BUILDER_DEB6_IMAGE?=tudorg/beats-builder-deb6
BEATS_GOPATH=$(firstword $(subst :, ,${GOPATH}))

makefile_abspath:=$(abspath $(lastword $(MAKEFILE_LIST)))
packer_absdir=$(shell dirname ${makefile_abspath})
beat_abspath=${BEATS_GOPATH}/src/${BEAT_PATH}


%/deb: ${BUILD_DIR}/god-linux-386 ${BUILD_DIR}/god-linux-amd64 fpm-image
	echo Creating DEB packages for $(@D)
	ARCH=386 BEAT=$(@D) BUILD_DIR=${BUILD_DIR} BEAT_PATH=$(beat_abspath) BUILDID=$(BUILDID) SNAPSHOT=$(SNAPSHOT) $(packer_absdir)/platforms/debian/build.sh
	ARCH=amd64 BEAT=$(@D) BUILD_DIR=${BUILD_DIR} BEAT_PATH=$(beat_abspath) BUILDID=$(BUILDID) SNAPSHOT=$(SNAPSHOT) $(packer_absdir)/platforms/debian/build.sh

%/rpm: ${BUILD_DIR}/god-linux-386 ${BUILD_DIR}/god-linux-amd64 fpm-image
	echo Creating RPM packages for $(@D)
	ARCH=386 BEAT=$(@D) BUILD_DIR=${BUILD_DIR} BEAT_PATH=$(beat_abspath) BUILDID=$(BUILDID) SNAPSHOT=$(SNAPSHOT) $(packer_absdir)/platforms/centos/build.sh
	ARCH=amd64 BEAT=$(@D) BUILD_DIR=${BUILD_DIR} BEAT_PATH=$(beat_abspath) BUILDID=$(BUILDID) SNAPSHOT=$(SNAPSHOT) $(packer_absdir)/platforms/centos/build.sh

%/darwin:
	echo Creating Darwin packages for $(@D)
	ARCH=amd64 BEAT=$(@D) BUILD_DIR=${BUILD_DIR} BEAT_PATH=$(beat_abspath) BUILDID=$(BUILDID) SNAPSHOT=$(SNAPSHOT) $(packer_absdir)/platforms/darwin/build.sh

%/win:
	echo Creating Windows packages for $(@D)
	ARCH=386 BEAT=$(@D) BUILD_DIR=${BUILD_DIR} BEAT_PATH=$(beat_abspath) BUILDID=$(BUILDID) SNAPSHOT=$(SNAPSHOT) $(packer_absdir)/platforms/windows/build.sh
	ARCH=amd64 BEAT=$(@D) BUILD_DIR=${BUILD_DIR} BEAT_PATH=$(beat_abspath) BUILDID=$(BUILDID) SNAPSHOT=$(SNAPSHOT) $(packer_absdir)/platforms/windows/build.sh

%/bin:
	echo Creating Linux packages for $(@D)
	ARCH=386 BEAT=$(@D) BUILD_DIR=${BUILD_DIR} BEAT_PATH=$(beat_abspath) BUILDID=$(BUILDID) SNAPSHOT=$(SNAPSHOT) $(packer_absdir)/platforms/binary/build.sh
	ARCH=amd64 BEAT=$(@D) BUILD_DIR=${BUILD_DIR} BEAT_PATH=$(beat_abspath) BUILDID=$(BUILDID) SNAPSHOT=$(SNAPSHOT) $(packer_absdir)/platforms/binary/build.sh

.PHONY: package-dashboards
package-dashboards:
	echo Creating the Dashboards package
	BUILDID=$(BUILDID) SNAPSHOT=$(SNAPSHOT) $(packer_absdir)/platforms/dashboards/build.sh

.PHONY: deps
deps:
	go get -u github.com/tsg/gotpl

.PHONY: xgo-image
xgo-image:
	cd $(packer_absdir)/docker/xgo-image/; ./build.sh
	# copy build.sh script into xgo-image-deb6 to avoid code duplication
	cp $(packer_absdir)/docker/xgo-image/base/build.sh $(packer_absdir)/docker/xgo-image-deb6/base/build.sh
	cd $(packer_absdir)/docker/xgo-image-deb6/; ./build.sh

.PHONY: fpm-image
fpm-image:
	docker build --rm=true -t tudorg/fpm $(packer_absdir)/docker/fpm-image

.PHONY: go-daemon-image
go-daemon-image:
	docker build --rm=true -t tudorg/go-daemon $(packer_absdir)/docker/go-daemon/

${BUILD_DIR}/god-linux-386 ${BUILD_DIR}/god-linux-amd64:
	docker run --rm -v ${BUILD_DIR}:/build tudorg/go-daemon

${BUILD_DIR}/upload:
	mkdir -p ${BUILD_DIR}/upload

${BUILD_DIR}/upload/build_id.txt:
	echo $(BUILDID) > ${BUILD_DIR}/upload/build_id.txt

# Build the image required for package-upload.
.PHONY: deb-rpm-s3
deb-rpm-s3:
	$(packer_absdir)/docker/deb-rpm-s3/build.sh

.PHONY: run-interactive-builder-deb6
run-interactive-builder-deb6:
	docker run -t -i -v $(shell pwd)/build:/build \
		-v $(shell pwd)/xgo-scripts/:/scripts \
		-v $(shell pwd)/../..:/source \
		--entrypoint=bash ${BEATS_BUILDER_DEB6_IMAGE}

.PHONY: run-interactive-builder
run-interactive-builder:
	docker run -t -i -v $(shell pwd)/build:/build \
		-v $(packer_absdir)/xgo-scripts/:/scripts \
		-v $(shell pwd)/../..:/source \
		--entrypoint=bash ${BEATS_BUILDER_IMAGE}

.PHONY: images
images: xgo-image fpm-image go-daemon-image

.PHONY: push-images
push-images:
	docker push ${BEATS_BUILDER_IMAGE}
	docker push ${BEATS_BUILDER_DEB6_IMAGE}
	docker push tudorg/fpm
	docker push tudorg/go-daemon

.PHONY: pull-images
pull-images:
	docker pull ${BEATS_BUILDER_IMAGE}
	docker pull ${BEATS_BUILDER_DEB6_IMAGE}
	docker pull tudorg/fpm
	docker pull tudorg/go-daemon


define rm-image =
	@echo "Cleaning $(1) image..."
	@if [ $(shell docker ps -n 1 -a -q --filter="image=$(1)" ) ]; then \
		docker stop $(shell docker ps -a -q --filter="image=$(1)"); \
		docker rm $(shell docker ps -a -q --filter="image=$(1)"); \
	fi; \
	\
	if [ $(shell docker images -q $(1)) ]; then \
		docker rmi $(1); \
	fi
endef


.PHONY: clean-images
clean-images:
	@$(call rm-image, ${BEATS_BUILDER_DEB6_IMAGE})
	@$(call rm-image, ${BEATS_BUILDER_IMAGE})

.PHONY: clean
clean:
	$(call rm-image,build-image)
96
vendor/github.com/elastic/beats/dev-tools/packer/README.md
generated
vendored
Normal file
@ -0,0 +1,96 @@
[![Build Status](https://travis-ci.org/elastic/beats-packer.svg)](https://travis-ci.org/elastic/beats-packer)

# Beats Packer

Tools, scripts and Docker images for cross-compiling and packaging the Elastic
[Beats](https://www.elastic.co/products/beats).

## Prepare

You need Go and Docker installed. This project uses several Dockerfiles; you
can either build the images with:

    make images

Or pull them from the Docker registry with:

    make pull-images

Prepare the rest with:

    make deps

## Cross-compile

The cross-compilation part is based on [xgo](https://github.com/karalabe/xgo),
with some [changes](https://github.com/tsg/xgo) that add the extra
extensibility we needed for the Beats (e.g. static compiling, a custom Docker
image).

You can cross-compile one Beat for all platforms with, e.g.:

    make packetbeat
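Since this vendored copy ships Icingabeat rather than Packetbeat, the corresponding invocation here would presumably be (assuming a Beat's target name matches its directory, as the `%/bin` pattern rule in the Makefile above suggests):

    make icingabeat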
## Packaging

For each OS (named a platform here) we execute a `build.sh` script which is
free to do whatever is required to build the proper packages for that
platform. This can include running Docker containers with the right tools
included, or with that OS installed for native packaging.

The deb and rpm creation is based on [fpm](https://github.com/jordansissel/fpm),
which is executed from a container.
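Concretely, the vendored Makefile earlier in this diff drives the `binary` platform purely through environment variables. The sketch below expands that call with illustrative values (the variable names are real; the values are assumptions, not taken from this repository's config):

```
ARCH=amd64 \
BEAT=icingabeat \
BUILD_DIR=build \
BEAT_PATH=/go/src/github.com/icinga/icingabeat \
BUILDID=$(git rev-parse --short HEAD) \
SNAPSHOT=yes \
  dev-tools/packer/platforms/binary/build.sh
```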
Besides the platform, there are three other dimensions: architecture,
beat, and release. Each of these is defined by YAML files in its own folder.
These dimensions only set static options; the platform is the only scripted
one.
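For example, the `archs/386.yml` file added later in this diff pins the per-packaging-format names for 32-bit x86:

    $ cat archs/386.yml
    arch: '386'
    deb_arch: i386
    rpm_arch: i686
    bin_arch: x86
    win_arch: x86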
The runner is currently (ab)using a Makefile, which is nice because it can
parallelize things automatically, but it's hacky, so we might replace it in
the future.

Building all Beats for all platforms:

    make clean && make
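Because the runner is plain make, the parallelization mentioned above is opt-in via the standard jobs flag (a usage note, assuming the per-Beat targets are independent):

    make clean && make -j4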
## Naming conventions

We use a set of package name conventions across the Elastic stack:

* The general form is `name-version-os-arch.ext`. Note that this means we
  use dashes even for Deb files.
* The archs are called `x86` and `x64`, except for deb/rpm, where we keep the
  OS-preferred names (i386/amd64, i686/x86_64).
* For version strings like `5.0.0-alpha3` we use dashes in all filenames. The
  only exception is the RPM metadata (not the filename), where we replace the
  dash with an underscore (`5.0.0_alpha3`).
* We omit the release number from the filenames. It's always `1` in the metadata.

For example, here are the artifacts created for Filebeat:

```
filebeat-5.0.0-amd64.deb
filebeat-5.0.0-darwin-x86_64.tar.gz
filebeat-5.0.0-i386.deb
filebeat-5.0.0-i686.rpm
filebeat-5.0.0-linux-x86.tar.gz
filebeat-5.0.0-linux-x86_64.tar.gz
filebeat-5.0.0-windows-x86.zip
filebeat-5.0.0-windows-x86_64.zip
filebeat-5.0.0-x86_64.rpm
```

And the SNAPSHOT versions:

```
filebeat-5.0.0-SNAPSHOT-amd64.deb
filebeat-5.0.0-SNAPSHOT-darwin-x86_64.tar.gz
filebeat-5.0.0-SNAPSHOT-i386.deb
filebeat-5.0.0-SNAPSHOT-i686.rpm
filebeat-5.0.0-SNAPSHOT-linux-x86.tar.gz
filebeat-5.0.0-SNAPSHOT-linux-x86_64.tar.gz
filebeat-5.0.0-SNAPSHOT-windows-x86.zip
filebeat-5.0.0-SNAPSHOT-windows-x86_64.zip
filebeat-5.0.0-SNAPSHOT-x86_64.rpm
```
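The one non-obvious rule above is the dash/underscore swap for RPM metadata; a minimal shell sketch of it (`BEAT` and `VERSION` are illustrative values, not taken from this repository):

```
BEAT=filebeat
VERSION=5.0.0-alpha3
# filenames keep the dash everywhere...
echo "${BEAT}-${VERSION}-x86_64.rpm"   # filebeat-5.0.0-alpha3-x86_64.rpm
# ...while the RPM metadata version replaces it with an underscore
echo "${VERSION}" | tr '-' '_'         # 5.0.0_alpha3
```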
5
vendor/github.com/elastic/beats/dev-tools/packer/archs/386.yml
generated
vendored
Normal file
@ -0,0 +1,5 @@
arch: '386'
deb_arch: i386
rpm_arch: i686
bin_arch: x86
win_arch: x86
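For contrast, a hypothetical `amd64` counterpart, inferred from the naming conventions in the README above; this file is not part of the diff, only a sketch:

```
# not in this diff: what archs/amd64.yml would plausibly look like,
# following the x64 / amd64 / x86_64 naming rules quoted above
cat > archs/amd64.yml <<'EOF'
arch: amd64
deb_arch: amd64
rpm_arch: x86_64
bin_arch: x64
win_arch: x64
EOF
```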