Merge branch '6.1'

This commit is contained in:
Blerim Sheqa 2017-12-27 10:12:39 +01:00
commit dd5751f335
5680 changed files with 611768 additions and 538668 deletions


@@ -6,8 +6,7 @@ services:
language: go
go:
- 1.7
- 1.8
- 1.9
os:
- linux


@@ -1,2 +1,3 @@
Blerim Sheqa <blerim.sheqa@icinga.com>
Dorian Lenzner <Dorian.Lenzner@telekom.de>
Michael Friedrich <michael.friedrich@icinga.com>


@@ -1,4 +1,17 @@
# Icingabeat CHANGELOG
## v6.1.1
### Features
* Update libbeat to version 6.1.1
* Add setting to add custom CAs for SSL verification
### Bugs
* Close connections properly on failed authentication
## v5.6.6
### Features
* Update libbeat to version 5.6.6
## v1.1.1

README.md

@@ -12,155 +12,12 @@ Icingabeat is an [Elastic Beat](https://www.elastic.co/products/beats) that
fetches data from the Icinga 2 API and sends it either directly to Elasticsearch
or Logstash.
![icingabeat-checkresult-dashboard](screenshots/icingabeat-checkresults-dashboard.png)
![icingabeat-checkresult-dashboard](screenshots/checkresults.png)
## Eventstream
Receive an eventstream from the Icinga 2 API. This stream includes events such
as checkresults, notifications, downtimes, acknowledgements and many other types.
See below for details. There is no polling involved when receiving an
eventstream.
Example use cases:
* Correlate monitoring data with logging information
* Monitor notifications sent by Icinga 2
## Statuspoller
The Icinga 2 API exports a lot of information about the state of the Icinga
daemon. Icingabeat can poll this information periodically.
Example use cases:
* Visualize metrics of the Icinga 2 daemon
* Get insights into how each enabled Icinga 2 feature performs
* Information about zones and endpoints
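Icingabeat handles this polling itself. Purely as an illustration, the sketch below (Python, using the default credentials documented in the Configuration section; all values are assumptions) shows what a single poll of the Icinga 2 `/v1/status` endpoint boils down to:

```python
import base64

def status_request(host="localhost", port=5665, user="icinga", password="icinga"):
    """Build the URL and Basic-Auth header for one poll of the status API.
    Defaults mirror the settings documented in the Configuration section."""
    url = "https://%s:%d/v1/status" % (host, port)
    token = base64.b64encode(("%s:%s" % (user, password)).encode("ascii")).decode("ascii")
    return url, {"Authorization": "Basic " + token, "Accept": "application/json"}

url, headers = status_request()
print(url)  # https://localhost:5665/v1/status
```

The response is a JSON document whose contents correspond to the `status.*` fields Icingabeat exports.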
### Installation
Download and install your package from the
[latest release](https://github.com/Icinga/icingabeat/releases/latest) page.
### Configuration
Configuration of Icingabeat is split into 3 sections: General, Eventstream and
Statuspoller. On Linux, configuration files are located in `/etc/icingabeat`.
#### General
Settings in this section apply to both modes.
##### `host`
Hostname of the Icinga 2 API. This can be either an IP address or a domain name.
Defaults to `localhost`
##### `port`
Defaults to `5665`
##### `user`
Username to be used for the API connection. You need to create this user in your Icinga 2 configuration. Make sure that it has sufficient permissions to read the
data you want to collect.
Here is an example of an API user in your Icinga 2 configuration:
```c++
object ApiUser "icinga" {
  password = "icinga"
  permissions = ["events/*", "status/query"]
}
```
Learn more about the `ApiUser` and its permissions in the
[Icinga 2 docs](https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/icinga2-api#icinga2-api-permissions).
##### `password`
Defaults to `icinga`
##### `skip_ssl_verify`
Skip verification of SSL certificates. Defaults to `false`
#### Eventstream
Settings in this section apply to the eventstream mode. To disable the
eventstream completely, comment out the section.
##### `types`
You can select which particular Icinga 2 events you want to receive and store.
The following types are available; you must set at least one:
* `CheckResult`
* `StateChange`
* `Notification`
* `AcknowledgementSet`
* `AcknowledgementCleared`
* `CommentAdded`
* `CommentRemoved`
* `DowntimeAdded`
* `DowntimeRemoved`
* `DowntimeStarted`
* `DowntimeTriggered`
To set multiple types, do the following:
```yaml
types:
  - CheckResult
  - StateChange
  - Notification
  - AcknowledgementSet
  - AcknowledgementCleared
```
##### `filter`
In addition to selecting the types of events, you can filter them by
attributes using the prefix `event.`. By default no filter is set.
###### Examples
Only check results with the exit code 2:
```yaml
filter: "event.check_result.exit_status==2"
```
Only check results of services that match `mysql*`:
```yaml
filter: 'match("mysql*", event.service)'
```
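These filters are evaluated by Icinga 2 itself, in its own expression language. Purely to illustrate their semantics, here is a Python sketch with hand-made example events (shaped like the fields Icingabeat exports; the events themselves are assumptions) showing what each filter would let through:

```python
from fnmatch import fnmatch

# Hypothetical events, shaped like the fields Icingabeat exports
events = [
    {"service": "mysql-connections", "check_result": {"exit_status": 2}},
    {"service": "http", "check_result": {"exit_status": 0}},
]

# Same idea as: filter: "event.check_result.exit_status==2"
critical = [e for e in events if e["check_result"]["exit_status"] == 2]

# Same idea as: filter: 'match("mysql*", event.service)'
# Icinga's match() does shell-style globbing, much like fnmatch
mysql = [e for e in events if fnmatch(e["service"], "mysql*")]

print([e["service"] for e in critical])  # ['mysql-connections']
print([e["service"] for e in mysql])     # ['mysql-connections']
```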
##### `retry_interval`
On connection loss, Icingabeat will try to reconnect to the API periodically.
This setting defines the interval between connection retries. Defaults to `10s`
#### Statuspoller
Settings of this section apply to the statuspoller mode.
##### `interval`
Interval at which the status API is called. Set to `0` to disable polling.
Defaults to `60s`
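Putting the sections together, a minimal `icingabeat.yml` might look like the sketch below (values are examples only; the exact key style, nested blocks vs. dotted keys such as `eventstream.types`, follows the reference config shipped with your package):

```yaml
icingabeat:
  # General
  host: "localhost"
  port: 5665
  user: "icinga"
  password: "icinga"

  # Eventstream
  eventstream.types:
    - CheckResult
    - StateChange
  eventstream.filter: ""
  eventstream.retry_interval: 10s

  # Statuspoller
  statuspoller.interval: 60s
```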
### Run
On Linux systems, use one of the following commands to start Icingabeat:
* `service icingabeat start` or
* `systemctl start icingabeat` or
* `/etc/init.d/icingabeat start`
## Dashboards
We have dashboards prepared that you can use when getting started with
Icingabeat. They are meant to give you some inspiration before you start
exploring the data by yourself. Download the dashboards from the
[latest release](https://github.com/Icinga/icingabeat/releases/latest) page.
**Note:** The dashboards require Kibana >= 5.2.0
The tool to import dashboards is already included in the Icingabeat package.
```shell
unzip icingabeat-dashboards-1.1.1.zip -d /tmp
/usr/share/icingabeat/scripts/import_dashboards -dir /tmp/icingabeat-dashboards-1.1.1 -es http://127.0.0.1:9200
```
## Fields
Icingabeat exports a number of fields. Have a look at the
[fields.asciidoc](docs/fields.asciidoc) for details.
## Documentation
Please read the documentation on
[icinga.com/docs/icingabeat/latest](https://www.icinga.com/docs/icingabeat/latest/)
for more information.
## Development
@@ -168,7 +25,7 @@ Icingabeat exports a bunch of fields. Have a look to the
#### Requirements
* [Golang](https://golang.org/dl/) 1.7
* [Golang](https://golang.org/dl/) 1.9
#### Clone


@@ -16,48 +16,58 @@ icingabeat:
# Password of the user
password: "icinga"
# Skip SSL verification
skip_ssl_verify: false
# Configure SSL verification. If `false` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `true`.
ssl.verify: true
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
########################### Icingabeat Eventstream ##########################
#
# Icingabeat supports capturing of an eventstream and periodic polling of the
# Icinga status data.
eventstream:
#
# Decide which events to receive from the event stream.
# The following event stream types are available:
#
# * CheckResult
# * StateChange
# * Notification
# * AcknowledgementSet
# * AcknowledgementCleared
# * CommentAdded
# * CommentRemoved
# * DowntimeAdded
# * DowntimeRemoved
# * DowntimeStarted
# * DowntimeTriggered
#
# To disable eventstream, leave the types empty or comment out the option
types:
- CheckResult
- StateChange
# Event streams can be filtered by attributes using the prefix 'event.'
#
# Example for the CheckResult type with the exit_code set to 2:
# filter: "event.check_result.exit_status==2"
#
# Example for the CheckResult type with the service matching the string
# pattern "mysql*":
# filter: 'match("mysql*", event.service)'
#
# To disable filtering set an empty string or comment out the filter option
filter: ""
# Decide which events to receive from the event stream.
# The following event stream types are available:
#
# * CheckResult
# * StateChange
# * Notification
# * AcknowledgementSet
# * AcknowledgementCleared
# * CommentAdded
# * CommentRemoved
# * DowntimeAdded
# * DowntimeRemoved
# * DowntimeStarted
# * DowntimeTriggered
#
# To disable eventstream, leave the types empty or comment out the option
eventstream.types:
- CheckResult
- StateChange
# Event streams can be filtered by attributes using the prefix 'event.'
#
# Example for the CheckResult type with the exit_code set to 2:
# filter: "event.check_result.exit_status==2"
#
# Example for the CheckResult type with the service matching the string
# pattern "mysql*":
# filter: 'match("mysql*", event.service)'
#
# To disable filtering set an empty string or comment out the filter option
eventstream.filter: ""
# Defines how fast to reconnect to the API on connection loss
retry_interval: 10s
eventstream.retry_interval: 10s
statuspoller:
# Interval at which the status API is called. Set to 0 to disable polling.
interval: 60s
########################### Icingabeat Statuspoller #########################
#
# Icingabeat can collect status information about Icinga 2 periodically. Set
# an interval at which the status API should be called. Set to 0 to disable
# polling.
statuspoller.interval: 60s

_meta/fields.generated.yml Normal file

@@ -0,0 +1,687 @@
- key: icingabeat
title: icingabeat
description: Data received from the Icinga 2 API
fields:
- name: timestamp
type: date
description: >
Timestamp of event occurrence
- name: type
type: keyword
description: >
Type of the document
- name: host
type: keyword
description: >
Host that triggered the event
- name: service
type: keyword
description: >
Service that triggered the event
- name: state
type: integer
description: >
State of the check
- name: state_type
type: integer
description: >
State type of the check
- name: author
type: keyword
description: >
Author of a message
- name: notification_type
type: keyword
description: >
Type of notification
- name: text
type: text
description: >
Text of a message
- name: users
type: keyword
description: >
Affected users of a notification
- name: acknowledgement_type
type: integer
description: >
Type of an acknowledgement
- name: expiry
type: date
description: >
Expiry of an acknowledgement
- name: notify
type: keyword
description: >
If the notification has been sent out
- name: check_result.active
type: boolean
description: >
If check was active or passive
- name: check_result.check_source
type: keyword
description: >
Icinga instance that scheduled the check
- name: check_result.command
type: text
description: >
Command that was executed
- name: check_result.execution_end
type: date
description: >
Time when execution of check ended
- name: check_result.execution_start
type: date
description: >
Time when execution of check started
- name: check_result.exit_status
type: integer
description: >
Exit status
- name: check_result.output
type: text
description: >
Output of check
- name: check_result.performance_data
type: text
description: >
Performance data in text format
- name: check_result.schedule_end
type: date
description: >
Time when scheduling of the check ended
- name: check_result.schedule_start
type: date
description: >
Time when check was scheduled
- name: check_result.state
type: integer
description: >
State of the check
- name: check_result.type
type: keyword
description: >
Type of this event
- name: check_result.vars_after.attempt
type: integer
description: >
Check attempt after check execution
- name: check_result.vars_after.reachable
type: boolean
description: >
Reachable state after check execution
- name: check_result.vars_after.state
type: integer
description: >
State of the check after execution
- name: check_result.vars_after.state_type
type: integer
description: >
State type after execution
- name: check_result.vars_before.attempt
type: integer
description: >
Check attempt before check execution
- name: check_result.vars_before.reachable
type: boolean
description: >
Reachable state before check execution
- name: check_result.vars_before.state
type: integer
description: >
Check state before check execution
- name: check_result.vars_before.state_type
type: integer
description: >
State type before check execution
- name: comment.__name
type: text
description: >
Unique identifier of a comment
- name: comment.author
type: keyword
description: >
Author of a comment
- name: comment.entry_time
type: date
description: >
Entry time of a comment
- name: comment.entry_type
type: integer
description: >
Entry type of a comment
- name: comment.expire_time
type: date
description: >
Expire time of a comment
- name: comment.host_name
type: keyword
description: >
Host name of a comment
- name: comment.legacy_id
type: integer
description: >
Legacy ID of a comment
- name: comment.name
type: keyword
description: >
Identifier of a comment
- name: comment.package
type: keyword
description: >
Config package of a comment
- name: comment.service_name
type: keyword
description: >
Service name of a comment
- name: comment.templates
type: text
description: >
Templates used by a comment
- name: comment.text
type: text
description: >
Text of a comment
- name: comment.type
type: keyword
description: >
Comment type
- name: comment.version
type: keyword
description: >
Config version of comment object
- name: comment.zone
type: keyword
description: >
Zone where comment was generated
- name: downtime.__name
type: text
description: >
Unique identifier of a downtime
- name: downtime.author
type: keyword
description: >
Author of a downtime
- name: downtime.comment
type: text
description: >
Text of a downtime
- name: downtime.config_owner
type: text
description: >
Config owner
- name: downtime.duration
type: integer
description: >
Duration of a downtime
- name: downtime.end_time
type: date
description: >
Timestamp of downtime end
- name: downtime.entry_time
type: date
description: >
Timestamp when downtime was created
- name: downtime.fixed
type: boolean
description: >
If downtime is fixed or flexible
- name: downtime.host_name
type: keyword
description: >
Hostname of a downtime
- name: downtime.legacy_id
type: integer
description: >
The integer ID of a downtime
- name: downtime.name
type: keyword
description: >
Downtime config identifier
- name: downtime.package
type: keyword
description: >
Configuration package of downtime
- name: downtime.scheduled_by
type: text
description: >
By whom downtime was scheduled
- name: downtime.service_name
type: keyword
description: >
Service name of a downtime
- name: downtime.start_time
type: date
description: >
Timestamp when downtime starts
- name: downtime.templates
type: text
description: >
Templates used by this downtime
- name: downtime.trigger_time
type: date
description: >
Timestamp when downtime was triggered
- name: downtime.triggered_by
type: text
description: >
By whom downtime was triggered
- name: downtime.triggers
type: text
description: >
Downtime triggers
- name: downtime.type
type: keyword
description: >
Downtime type
- name: downtime.version
type: keyword
description: >
Config version of downtime
- name: downtime.was_cancelled
type: boolean
description: >
If downtime was cancelled
- name: downtime.zone
type: keyword
description: >
Zone of downtime
- name: status.active_host_checks
type: integer
description: >
Active host checks
- name: status.active_host_checks_15min
type: integer
description: >
Active host checks in the last 15 minutes
- name: status.active_host_checks_1min
type: integer
description: >
Active host checks in the last minute
- name: status.active_host_checks_5min
type: integer
description: >
Active host checks in the last 5 minutes
- name: status.active_service_checks
type: integer
description: >
Active service checks
- name: status.active_service_checks_15min
type: integer
description: >
Active service checks in the last 15 minutes
- name: status.active_service_checks_1min
type: integer
description: >
Active service checks in the last minute
- name: status.active_service_checks_5min
type: integer
description: >
Active service checks in the last 5 minutes
- name: status.api.identity
type: keyword
description: >
API identity
- name: status.api.num_conn_endpoints
type: integer
description: >
Number of connected endpoints
- name: status.api.num_endpoints
type: integer
description: >
Total number of endpoints
- name: status.api.num_not_conn_endpoints
type: integer
description: >
Number of not connected endpoints
- name: status.api.zones.demo.client_log_lag
type: integer
description: >
Lag of the replay log
- name: status.api.zones.demo.connected
type: boolean
description: >
Zone connected
- name: status.api.zones.demo.endpoints
type: text
description: >
Endpoint names
- name: status.api.zones.demo.parent_zone
type: keyword
description: >
Parent zone
- name: status.avg_execution_time
type: integer
description: >
Average execution time of checks
- name: status.avg_latency
type: integer
description: >
Average latency time
- name: status.checkercomponent.checker.idle
type: integer
description: >
Idle checks
- name: status.checkercomponent.checker.pending
type: integer
description: >
Pending checks
- name: status.filelogger.main-log
type: integer
description: >
Mainlog enabled
- name: status.icingaapplication.app.enable_event_handlers
type: boolean
description: >
Event handlers enabled
- name: status.icingaapplication.app.enable_flapping
type: boolean
description: >
Flapping detection enabled
- name: status.icingaapplication.app.enable_host_checks
type: boolean
description: >
Host checks enabled
- name: status.icingaapplication.app.enable_notifications
type: boolean
description: >
Notifications enabled
- name: status.icingaapplication.app.enable_perfdata
type: boolean
description: >
Perfdata enabled
- name: status.icingaapplication.app.enable_service_checks
type: boolean
description: >
Service checks enabled
- name: status.icingaapplication.app.node_name
type: keyword
description: >
Node name
- name: status.icingaapplication.app.pid
type: integer
description: >
PID
- name: status.icingaapplication.app.program_start
type: integer
description: >
Time when Icinga started
- name: status.icingaapplication.app.version
type: keyword
description: >
Version
- name: status.idomysqlconnection.ido-mysql.connected
type: boolean
description: >
IDO connected
- name: status.idomysqlconnection.ido-mysql.instance_name
type: keyword
description: >
IDO Instance name
- name: status.idomysqlconnection.ido-mysql.query_queue_items
type: integer
description: >
IDO query items in the queue
- name: status.idomysqlconnection.ido-mysql.version
type: keyword
description: >
IDO schema version
- name: status.max_execution_time
type: integer
description: >
Max execution time
- name: status.max_latency
type: integer
description: >
Max latency
- name: status.min_execution_time
type: integer
description: >
Min execution time
- name: status.min_latency
type: integer
description: >
Min latency
- name: status.notificationcomponent.notification
type: integer
description: >
Notification
- name: status.num_hosts_acknowledged
type: integer
description: >
Number of acknowledged hosts
- name: status.num_hosts_down
type: integer
description: >
Number of down hosts
- name: status.num_hosts_flapping
type: integer
description: >
Number of flapping hosts
- name: status.num_hosts_in_downtime
type: integer
description: >
Number of hosts in downtime
- name: status.num_hosts_pending
type: integer
description: >
Number of pending hosts
- name: status.num_hosts_unreachable
type: integer
description: >
Number of unreachable hosts
- name: status.num_hosts_up
type: integer
description: >
Number of hosts in up state
- name: status.num_services_acknowledged
type: integer
description: >
Number of acknowledged services
- name: status.num_services_critical
type: integer
description: >
Number of critical services
- name: status.num_services_flapping
type: integer
description: >
Number of flapping services
- name: status.num_services_in_downtime
type: integer
description: >
Number of services in downtime
- name: status.num_services_ok
type: integer
description: >
Number of services in ok state
- name: status.num_services_pending
type: integer
description: >
Number of pending services
- name: status.num_services_unknown
type: integer
description: >
Number of unknown services
- name: status.num_services_unreachable
type: integer
description: >
Number of unreachable services
- name: status.num_services_warning
type: integer
description: >
Number of services in warning state
- name: status.passive_host_checks
type: integer
description: >
Number of passive host checks
- name: status.passive_host_checks_15min
type: integer
description: >
Number of passive host checks in the last 15 minutes
- name: status.passive_host_checks_1min
type: integer
description: >
Number of passive host checks in the last minute
- name: status.passive_host_checks_5min
type: integer
description: >
Number of passive host checks in the last 5 minutes
- name: status.passive_service_checks
type: integer
description: >
Number of passive service checks
- name: status.passive_service_checks_15min
type: integer
description: >
Number of passive service checks in the last 15 minutes
- name: status.passive_service_checks_1min
type: integer
description: >
Number of passive service checks in the last minute
- name: status.passive_service_checks_5min
type: integer
description: >
Number of passive service checks in the last 5 minutes
- name: status.uptime
type: integer
description: >
Uptime


@@ -48,7 +48,7 @@
Text of a message
- name: users
type: text
type: keyword
description: >
Affected users of a notification

File diff suppressed because one or more lines are too long


@@ -0,0 +1,114 @@
{
"objects": [
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
},
"savedSearchId": "eb7896b0-e4bd-11e7-b4d1-8383451ae5a4",
"title": "CheckResults by State",
"uiStateJSON": "{\"vis\":{\"colors\":{\"Ok\":\"#629E51\",\"Warning\":\"#E5AC0E\",\"Critical\":\"#BF1B00\",\"Unknown\":\"#962D82\"}}}",
"version": 1,
"visState": "{\"title\":\"CheckResults by State\",\"type\":\"histogram\",\"params\":{\"type\":\"histogram\",\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"}},\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"type\":\"category\",\"position\":\"bottom\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\"},\"labels\":{\"show\":true,\"truncate\":100},\"title\":{}}],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"name\":\"LeftAxis-1\",\"type\":\"value\",\"position\":\"left\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\",\"mode\":\"normal\"},\"labels\":{\"show\":true,\"rotate\":0,\"filter\":false,\"truncate\":100},\"title\":{\"text\":\"Count\"}}],\"seriesParams\":[{\"show\":\"true\",\"type\":\"histogram\",\"mode\":\"stacked\",\"data\":{\"label\":\"Count\",\"id\":\"1\"},\"valueAxis\":\"ValueAxis-1\",\"drawLinesBetweenPoints\":true,\"showCircles\":true}],\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"times\":[],\"addTimeMarker\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"filters\",\"schema\":\"group\",\"params\":{\"filters\":[{\"input\":{\"query\":\"check_result.state:0\"},\"label\":\"Ok\"},{\"input\":{\"query\":\"check_result.state:1\"},\"label\":\"Warning\"},{\"input\":{\"query\":\"check_result.state:2\"},\"label\":\"Critical\"},{\"input\":{\"query\":\"check_result.state:3\"},\"label\":\"Unknown\"}]}}]}"
},
"id": "a32bdf10-e4be-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:36.094Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
},
"savedSearchId": "eb7896b0-e4bd-11e7-b4d1-8383451ae5a4",
"title": "CheckResult Count",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"CheckResult Count\",\"type\":\"metric\",\"params\":{\"addTooltip\":true,\"addLegend\":false,\"type\":\"metric\",\"metric\":{\"percentageMode\":false,\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"metricColorMode\":\"None\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"labels\":{\"show\":true},\"invertColors\":false,\"style\":{\"bgFill\":\"#000\",\"bgColor\":false,\"labelColor\":false,\"subText\":\"\",\"fontSize\":60}}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"CheckResults received\"}}]}"
},
"id": "3bf26530-e4be-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:36.094Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
},
"savedSearchId": "eb7896b0-e4bd-11e7-b4d1-8383451ae5a4",
"title": "Hosts Tag Cloud",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Hosts Tag Cloud\",\"type\":\"tagcloud\",\"params\":{\"scale\":\"linear\",\"orientation\":\"single\",\"minFontSize\":18,\"maxFontSize\":72},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"host\",\"size\":50,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Hosts\"}}]}"
},
"id": "4a9d5c50-e4c0-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:36.094Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
},
"savedSearchId": "eb7896b0-e4bd-11e7-b4d1-8383451ae5a4",
"title": "Services Tag Cloud",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Services Tag Cloud\",\"type\":\"tagcloud\",\"params\":{\"scale\":\"linear\",\"orientation\":\"single\",\"minFontSize\":18,\"maxFontSize\":72},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"service\",\"size\":500,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Services\"}}]}"
},
"id": "6a23e300-e4c0-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:36.094Z",
"version": 1
},
{
"attributes": {
"columns": [
"_source"
],
"description": "",
"hits": 0,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\n \"index\": \"icingabeat-*\",\n \"highlightAll\": true,\n \"version\": true,\n \"query\": {\n \"language\": \"lucene\",\n \"query\": \"type:icingabeat.event.checkresult\"\n },\n \"filter\": []\n}"
},
"sort": [
"@timestamp",
"desc"
],
"title": "CheckResults",
"version": 1
},
"id": "eb7896b0-e4bd-11e7-b4d1-8383451ae5a4",
"type": "search",
"updated_at": "2017-12-27T07:51:40.826Z",
"version": 2
},
{
"attributes": {
"description": "Summary of check results received by Icinga",
"hits": 0,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"query\":\"\",\"language\":\"lucene\"},\"filter\":[],\"highlightAll\":true,\"version\":true}"
},
"optionsJSON": "{\"darkTheme\":false,\"useMargins\":true,\"hidePanelTitles\":false}",
"panelsJSON": "[{\"panelIndex\":\"1\",\"gridData\":{\"x\":0,\"y\":0,\"w\":12,\"h\":2,\"i\":\"1\"},\"version\":\"6.1.0\",\"type\":\"visualization\",\"id\":\"a32bdf10-e4be-11e7-b4d1-8383451ae5a4\"},{\"panelIndex\":\"2\",\"gridData\":{\"x\":0,\"y\":2,\"w\":3,\"h\":5,\"i\":\"2\"},\"version\":\"6.1.0\",\"type\":\"visualization\",\"id\":\"3bf26530-e4be-11e7-b4d1-8383451ae5a4\"},{\"panelIndex\":\"3\",\"gridData\":{\"x\":3,\"y\":2,\"w\":4,\"h\":5,\"i\":\"3\"},\"version\":\"6.1.0\",\"type\":\"visualization\",\"id\":\"4a9d5c50-e4c0-11e7-b4d1-8383451ae5a4\"},{\"panelIndex\":\"4\",\"gridData\":{\"x\":7,\"y\":2,\"w\":5,\"h\":5,\"i\":\"4\"},\"version\":\"6.1.0\",\"type\":\"visualization\",\"id\":\"6a23e300-e4c0-11e7-b4d1-8383451ae5a4\"}]",
"timeRestore": false,
"title": "Icingabeat-CheckResults",
"uiStateJSON": "{}",
"version": 1
},
"id": "34e97340-e4ce-11e7-b4d1-8383451ae5a4",
"type": "dashboard",
"updated_at": "2017-12-27T07:40:36.094Z",
"version": 1
}
],
"version": "6.1.0"
}


@@ -0,0 +1,134 @@
{
"objects": [
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[],\"query\":{\"language\":\"lucene\",\"query\":\"\"}}"
},
"savedSearchId": "fa782860-e4bd-11e7-b4d1-8383451ae5a4",
"title": "Notification Types",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Notification Types\",\"type\":\"histogram\",\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"labels\":{\"show\":true,\"truncate\":100},\"position\":\"bottom\",\"scale\":{\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{},\"type\":\"category\"}],\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"}},\"legendPosition\":\"right\",\"seriesParams\":[{\"data\":{\"id\":\"1\",\"label\":\"Count\"},\"drawLinesBetweenPoints\":true,\"mode\":\"stacked\",\"show\":\"true\",\"showCircles\":true,\"type\":\"histogram\",\"valueAxis\":\"ValueAxis-1\"}],\"times\":[],\"type\":\"histogram\",\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"labels\":{\"filter\":false,\"rotate\":0,\"show\":true,\"truncate\":100},\"name\":\"LeftAxis-1\",\"position\":\"left\",\"scale\":{\"mode\":\"normal\",\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"Count\"},\"type\":\"value\"}]},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"notification_type\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}]}"
},
"id": "af54ac40-e4cd-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:37.107Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
},
"savedSearchId": "fa782860-e4bd-11e7-b4d1-8383451ae5a4",
"title": "Notification Types (Pie)",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Notification Types (Pie)\",\"type\":\"pie\",\"params\":{\"type\":\"pie\",\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":true,\"labels\":{\"show\":false,\"values\":true,\"last_level\":true,\"truncate\":100}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"notification_type\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}]}"
},
"id": "caabba10-e4cd-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:37.107Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\n \"filter\": [],\n \"query\": {\n \"query\": \"\",\n \"language\": \"lucene\"\n }\n}"
},
"savedSearchId": "fa782860-e4bd-11e7-b4d1-8383451ae5a4",
"title": "Notification Services",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\n \"title\": \"Notification Services\",\n \"type\": \"pie\",\n \"params\": {\n \"type\": \"pie\",\n \"addTooltip\": true,\n \"addLegend\": true,\n \"legendPosition\": \"right\",\n \"isDonut\": true,\n \"labels\": {\n \"show\": false,\n \"values\": true,\n \"last_level\": true,\n \"truncate\": 100\n }\n },\n \"aggs\": [\n {\n \"id\": \"1\",\n \"enabled\": true,\n \"type\": \"count\",\n \"schema\": \"metric\",\n \"params\": {}\n },\n {\n \"id\": \"2\",\n \"enabled\": true,\n \"type\": \"terms\",\n \"schema\": \"segment\",\n \"params\": {\n \"field\": \"service\",\n \"size\": 5,\n \"order\": \"desc\",\n \"orderBy\": \"1\"\n }\n }\n ]\n}"
},
"id": "fcb31150-e4ca-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:56:13.974Z",
"version": 2
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
},
"savedSearchId": "fa782860-e4bd-11e7-b4d1-8383451ae5a4",
"title": "Notification Hosts",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Notification Hosts\",\"type\":\"pie\",\"params\":{\"type\":\"pie\",\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":true,\"labels\":{\"show\":false,\"values\":true,\"last_level\":true,\"truncate\":100}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"host\",\"size\":500,\"order\":\"desc\",\"orderBy\":\"1\"}}]}"
},
"id": "e5a012a0-e4c6-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:56:02.651Z",
"version": 3
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\n \"filter\": [],\n \"query\": {\n \"query\": \"\",\n \"language\": \"lucene\"\n }\n}"
},
"savedSearchId": "fa782860-e4bd-11e7-b4d1-8383451ae5a4",
"title": "Notifications by User",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\n \"title\": \"Notifications by User\",\n \"type\": \"pie\",\n \"params\": {\n \"type\": \"pie\",\n \"addTooltip\": true,\n \"addLegend\": true,\n \"legendPosition\": \"right\",\n \"isDonut\": true,\n \"labels\": {\n \"show\": false,\n \"values\": true,\n \"last_level\": true,\n \"truncate\": 100\n }\n },\n \"aggs\": [\n {\n \"id\": \"1\",\n \"enabled\": true,\n \"type\": \"count\",\n \"schema\": \"metric\",\n \"params\": {}\n },\n {\n \"id\": \"2\",\n \"enabled\": true,\n \"type\": \"terms\",\n \"schema\": \"segment\",\n \"params\": {\n \"field\": \"users\",\n \"size\": 5,\n \"order\": \"desc\",\n \"orderBy\": \"1\"\n }\n }\n ]\n}"
},
"id": "e95ca140-e4cd-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:56:25.109Z",
"version": 2
},
{
"attributes": {
"columns": [
"host",
"service",
"users",
"text"
],
"description": "",
"hits": 0,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\n \"index\": \"icingabeat-*\",\n \"highlightAll\": true,\n \"version\": true,\n \"query\": {\n \"language\": \"lucene\",\n \"query\": \"type:icingabeat.event.notification\"\n },\n \"filter\": []\n}"
},
"sort": [
"@timestamp",
"desc"
],
"title": "Notifications",
"version": 1
},
"id": "fa782860-e4bd-11e7-b4d1-8383451ae5a4",
"type": "search",
"updated_at": "2017-12-27T07:51:48.494Z",
"version": 2
},
{
"attributes": {
"description": "Summary of notifications received by Icinga",
"hits": 0,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"language\":\"lucene\",\"query\":\"\"},\"filter\":[],\"highlightAll\":true,\"version\":true}"
},
"optionsJSON": "{\"darkTheme\":false,\"hidePanelTitles\":false,\"useMargins\":true}",
"panelsJSON": "[{\"panelIndex\":\"1\",\"gridData\":{\"x\":0,\"y\":0,\"w\":12,\"h\":2,\"i\":\"1\"},\"id\":\"af54ac40-e4cd-11e7-b4d1-8383451ae5a4\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"panelIndex\":\"2\",\"gridData\":{\"x\":0,\"y\":2,\"w\":3,\"h\":2,\"i\":\"2\"},\"id\":\"caabba10-e4cd-11e7-b4d1-8383451ae5a4\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"panelIndex\":\"3\",\"gridData\":{\"x\":3,\"y\":2,\"w\":3,\"h\":2,\"i\":\"3\"},\"id\":\"fcb31150-e4ca-11e7-b4d1-8383451ae5a4\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"panelIndex\":\"4\",\"gridData\":{\"x\":6,\"y\":2,\"w\":3,\"h\":2,\"i\":\"4\"},\"id\":\"e5a012a0-e4c6-11e7-b4d1-8383451ae5a4\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"panelIndex\":\"5\",\"gridData\":{\"x\":9,\"y\":2,\"w\":3,\"h\":2,\"i\":\"5\"},\"id\":\"e95ca140-e4cd-11e7-b4d1-8383451ae5a4\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"panelIndex\":\"6\",\"gridData\":{\"x\":0,\"y\":4,\"w\":12,\"h\":11,\"i\":\"6\"},\"version\":\"6.1.0\",\"type\":\"search\",\"id\":\"fa782860-e4bd-11e7-b4d1-8383451ae5a4\"}]",
"timeRestore": false,
"title": "Icingabeat-Notifications",
"uiStateJSON": "{}",
"version": 1
},
"id": "ed031e90-e4ce-11e7-b4d1-8383451ae5a4",
"type": "dashboard",
"updated_at": "2017-12-27T07:40:37.107Z",
"version": 1
}
],
"version": "6.1.0"
}


@ -0,0 +1,209 @@
{
"objects": [
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
},
"title": "Icinga Logo",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Icinga Logo\",\"type\":\"markdown\",\"params\":{\"fontSize\":12,\"markdown\":\"![Icinga Logo](https://www.icinga.com/wp-content/uploads/2014/06/icinga_logo.png)\"},\"aggs\":[]}"
},
"id": "77052890-e4c0-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
},
"savedSearchId": "091fd610-e4be-11e7-b4d1-8383451ae5a4",
"title": "Icinga Version",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Icinga Version\",\"type\":\"pie\",\"params\":{\"type\":\"pie\",\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"top\",\"isDonut\":true,\"labels\":{\"show\":false,\"values\":true,\"last_level\":true,\"truncate\":100}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"status.icingaapplication.app.version\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"_term\"}}]}"
},
"id": "bebb81b0-e4c1-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
},
"savedSearchId": "091fd610-e4be-11e7-b4d1-8383451ae5a4",
"title": "MySQL Schema Version",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"MySQL Schema Version\",\"type\":\"pie\",\"params\":{\"type\":\"pie\",\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"top\",\"isDonut\":true,\"labels\":{\"show\":false,\"values\":true,\"last_level\":true,\"truncate\":100}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"status.idomysqlconnection.ido-mysql.version\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"_term\"}}]}"
},
"id": "73cd6b40-e4c2-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
},
"savedSearchId": "091fd610-e4be-11e7-b4d1-8383451ae5a4",
"title": "Nodes",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Nodes\",\"type\":\"pie\",\"params\":{\"type\":\"pie\",\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"top\",\"isDonut\":true,\"labels\":{\"show\":false,\"values\":true,\"last_level\":true,\"truncate\":100}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"status.icingaapplication.app.node_name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"_term\"}}]}"
},
"id": "b37471e0-e4c6-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
},
"title": "Hostchecks by time",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Hostchecks by time\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:status.active_host_checks_1min').color(#070F4E).label(\\\"1 min\\\").title(\\\"Amount of Hostchecks\\\"),.es(metric='avg:status.active_host_checks_5min').color(#2772DB).label(\\\"5 min\\\"),.es(metric='avg:status.active_host_checks_15min').color(#3AB1C8).label(\\\"15 min\\\")\",\"interval\":\"1m\"},\"aggs\":[]}"
},
"id": "16cd5a60-e4c0-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
},
"title": "Servicechecks by time",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Servicechecks by time\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:status.active_service_checks_1min').color(#070F4E).label(\\\"1 min\\\").title(\\\"Amount of Servicechecks\\\"),.es(metric='avg:status.active_service_checks_5min').color(#2772DB).label(\\\"5 min\\\"),.es(metric='avg:status.active_service_checks_15min').color(#3AB1C8).label(\\\"15 min\\\")\",\"interval\":\"1m\"},\"aggs\":[]}"
},
"id": "fbb4acc0-e4cd-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
},
"title": "Endpoints comparisson",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"Endpoints comparisson\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:status.api.num_endpoints').label(\\\"Endpoints\\\"), .es(metric='avg:status.api.num_not_conn_endpoints').label(\\\"Endpoints not connected\\\").title(\\\"Connected Endpoints\\\")\",\"interval\":\"1m\"},\"aggs\":[]}"
},
"id": "0c0685d0-e4bf-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
},
"title": "States of Hosts",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"States of Hosts\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:status.num_hosts_up').color(#3EC8AC).label(\\\"Up\\\").title(\\\"States of Hosts\\\"),.es(metric='avg:status.num_hosts_down').color(#E94822).label(\\\"Down\\\"),.es(metric='avg:status.num_hosts_unreachable').color(#6E60A0).label(\\\"Unreachable\\\")\",\"interval\":\"1m\"},\"aggs\":[]}"
},
"id": "0d44fb70-e4ce-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
},
"title": "States of Services",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"States of Services\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:status.num_services_ok').color(#3EC8AC).label(\\\"Ok\\\").title(\\\"States of Services\\\"),.es(metric='avg:status.num_services_warning').color(#F2910A).label(\\\"Warning\\\"),.es(metric='avg:status.num_services_critical').color(#E94822).label(\\\"Critical\\\")\",\"interval\":\"1m\"},\"aggs\":[]}"
},
"id": "204750b0-e4ce-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
},
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{}"
},
"title": "MySQL Queries",
"uiStateJSON": "{}",
"version": 1,
"visState": "{\"title\":\"MySQL Queries\",\"type\":\"timelion\",\"params\":{\"expression\":\".es(metric='avg:perfdata.idomysqlconnection_ido-mysql_queries_1min.value').color(#616EEF).label(\\\"1 min\\\").title(\\\"MySQL Queries\\\"), .es(metric='avg:perfdata.idomysqlconnection_ido-mysql_queries_5mins.value').color(#09A8FA).label(\\\"5 min\\\"), .es(metric='avg:perfdata.idomysqlconnection_ido-mysql_queries_15mins.value').color(#41C5D3).label(\\\"15 min\\\")\",\"interval\":\"1m\"},\"aggs\":[]}"
},
"id": "4d4cda00-e4c2-11e7-b4d1-8383451ae5a4",
"type": "visualization",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
},
{
"attributes": {
"columns": [
"_source"
],
"description": "",
"hits": 0,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\n \"index\": \"icingabeat-*\",\n \"highlightAll\": true,\n \"version\": true,\n \"query\": {\n \"language\": \"lucene\",\n \"query\": \"type:icingabeat.status*\"\n },\n \"filter\": []\n}"
},
"sort": [
"@timestamp",
"desc"
],
"title": "Statuspoller",
"version": 1
},
"id": "091fd610-e4be-11e7-b4d1-8383451ae5a4",
"type": "search",
"updated_at": "2017-12-27T07:51:55.982Z",
"version": 2
},
{
"attributes": {
"description": "Summary of Icinga Metrics",
"hits": 0,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"query\":{\"language\":\"lucene\",\"query\":\"\"},\"filter\":[],\"highlightAll\":true,\"version\":true}"
},
"optionsJSON": "{\"darkTheme\":false,\"hidePanelTitles\":false,\"useMargins\":true}",
"panelsJSON": "[{\"gridData\":{\"h\":2,\"i\":\"1\",\"w\":3,\"x\":0,\"y\":0},\"id\":\"77052890-e4c0-11e7-b4d1-8383451ae5a4\",\"panelIndex\":\"1\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"gridData\":{\"h\":2,\"i\":\"2\",\"w\":3,\"x\":3,\"y\":0},\"id\":\"bebb81b0-e4c1-11e7-b4d1-8383451ae5a4\",\"panelIndex\":\"2\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"gridData\":{\"h\":2,\"i\":\"3\",\"w\":3,\"x\":6,\"y\":0},\"id\":\"73cd6b40-e4c2-11e7-b4d1-8383451ae5a4\",\"panelIndex\":\"3\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"gridData\":{\"h\":2,\"i\":\"4\",\"w\":3,\"x\":9,\"y\":0},\"id\":\"b37471e0-e4c6-11e7-b4d1-8383451ae5a4\",\"panelIndex\":\"4\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"gridData\":{\"h\":3,\"i\":\"5\",\"w\":4,\"x\":0,\"y\":2},\"id\":\"16cd5a60-e4c0-11e7-b4d1-8383451ae5a4\",\"panelIndex\":\"5\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"gridData\":{\"h\":3,\"i\":\"6\",\"w\":4,\"x\":4,\"y\":2},\"id\":\"fbb4acc0-e4cd-11e7-b4d1-8383451ae5a4\",\"panelIndex\":\"6\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"gridData\":{\"h\":3,\"i\":\"7\",\"w\":4,\"x\":8,\"y\":2},\"id\":\"0c0685d0-e4bf-11e7-b4d1-8383451ae5a4\",\"panelIndex\":\"7\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"gridData\":{\"h\":3,\"i\":\"8\",\"w\":4,\"x\":0,\"y\":5},\"id\":\"0d44fb70-e4ce-11e7-b4d1-8383451ae5a4\",\"panelIndex\":\"8\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"gridData\":{\"h\":3,\"i\":\"9\",\"w\":4,\"x\":4,\"y\":5},\"id\":\"204750b0-e4ce-11e7-b4d1-8383451ae5a4\",\"panelIndex\":\"9\",\"type\":\"visualization\",\"version\":\"6.1.0\"},{\"gridData\":{\"h\":3,\"i\":\"10\",\"w\":4,\"x\":8,\"y\":5},\"id\":\"4d4cda00-e4c2-11e7-b4d1-8383451ae5a4\",\"panelIndex\":\"10\",\"type\":\"visualization\",\"version\":\"6.1.0\"}]",
"timeRestore": false,
"title": "Icingabeat-Status",
"uiStateJSON": "{}",
"version": 1
},
"id": "a13f1a80-e4cf-11e7-b4d1-8383451ae5a4",
"type": "dashboard",
"updated_at": "2017-12-27T07:40:38.128Z",
"version": 1
}
],
"version": "6.1.0"
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@ -12,6 +12,7 @@ import (
"github.com/icinga/icingabeat/config"
"github.com/elastic/beats/libbeat/beat"
"github.com/elastic/beats/libbeat/common"
"github.com/elastic/beats/libbeat/logp"
)
@ -37,53 +38,52 @@ func NewEventstream(bt *Icingabeat, cfg config.Config) *Eventstream {
}
// BuildEventstreamEvent ...
func BuildEventstreamEvent(e []byte) common.MapStr {
func BuildEventstreamEvent(e []byte) beat.Event {
var event common.MapStr
var event beat.Event
var icingaEvent map[string]interface{}
if err := json.Unmarshal(e, &icingaEvent); err != nil {
logp.Warn("Error decoding event: %v", err)
}
event = common.MapStr{
"@timestamp": common.Time(time.Now()),
}
event.Timestamp = time.Now()
event.Fields = common.MapStr{}
for key, value := range icingaEvent {
event.Put(key, value)
event.Fields.Put(key, value)
}
logp.Debug("icingabeat", "Type: %v", icingaEvent["type"])
switch icingaEvent["type"] {
case "CheckResult", "StateChange", "Notification":
checkResult := icingaEvent["check_result"].(map[string]interface{})
event.Put("check_result.execution_start", FloatToTimestamp(checkResult["execution_start"].(float64)))
event.Put("check_result.execution_end", FloatToTimestamp(checkResult["execution_end"].(float64)))
event.Put("check_result.schedule_start", FloatToTimestamp(checkResult["schedule_start"].(float64)))
event.Put("check_result.schedule_end", FloatToTimestamp(checkResult["schedule_end"].(float64)))
event.Delete("check_result.performance_data")
event.Fields.Put("check_result.execution_start", FloatToTimestamp(checkResult["execution_start"].(float64)))
event.Fields.Put("check_result.execution_end", FloatToTimestamp(checkResult["execution_end"].(float64)))
event.Fields.Put("check_result.schedule_start", FloatToTimestamp(checkResult["schedule_start"].(float64)))
event.Fields.Put("check_result.schedule_end", FloatToTimestamp(checkResult["schedule_end"].(float64)))
event.Fields.Delete("check_result.performance_data")
case "AcknowledgementSet":
event.Delete("comment")
event.Put("comment.text", icingaEvent["comment"])
event.Put("expiry", FloatToTimestamp(icingaEvent["expiry"].(float64)))
event.Fields.Put("comment.text", icingaEvent["comment"])
event.Fields.Put("expiry", FloatToTimestamp(icingaEvent["expiry"].(float64)))
case "CommentAdded", "CommentRemoved":
comment := icingaEvent["comment"].(map[string]interface{})
event.Put("comment.entry_time", FloatToTimestamp(comment["entry_time"].(float64)))
event.Put("comment.expire_time", FloatToTimestamp(comment["expire_time"].(float64)))
event.Fields.Put("comment.entry_time", FloatToTimestamp(comment["entry_time"].(float64)))
event.Fields.Put("comment.expire_time", FloatToTimestamp(comment["expire_time"].(float64)))
case "DowntimeAdded", "DowntimeRemoved", "DowntimeStarted", "DowntimeTriggered":
downtime := icingaEvent["downtime"].(map[string]interface{})
event.Put("downtime.end_time", FloatToTimestamp(downtime["end_time"].(float64)))
event.Put("downtime.entry_time", FloatToTimestamp(downtime["entry_time"].(float64)))
event.Put("downtime.start_time", FloatToTimestamp(downtime["start_time"].(float64)))
event.Put("downtime.trigger_time", FloatToTimestamp(downtime["trigger_time"].(float64)))
event.Fields.Put("downtime.end_time", FloatToTimestamp(downtime["end_time"].(float64)))
event.Fields.Put("downtime.entry_time", FloatToTimestamp(downtime["entry_time"].(float64)))
event.Fields.Put("downtime.start_time", FloatToTimestamp(downtime["start_time"].(float64)))
event.Fields.Put("downtime.trigger_time", FloatToTimestamp(downtime["trigger_time"].(float64)))
}
event.Put("type", "icingabeat.event."+strings.ToLower(icingaEvent["type"].(string)))
event.Put("timestamp", FloatToTimestamp(icingaEvent["timestamp"].(float64)))
event.Fields.Put("type", "icingabeat.event."+strings.ToLower(icingaEvent["type"].(string)))
event.Fields.Put("timestamp", FloatToTimestamp(icingaEvent["timestamp"].(float64)))
return event
}
@ -147,7 +147,7 @@ func (es *Eventstream) Run() error {
logp.Err("Error reading line %#v", err)
}
es.icingabeat.client.PublishEvent(BuildEventstreamEvent(line))
es.icingabeat.client.Publish(BuildEventstreamEvent(line))
logp.Debug("icingabeat.eventstream", "Event sent")
}
@ -162,6 +162,7 @@ func (es *Eventstream) Run() error {
select {
case <-es.done:
defer response.Body.Close()
return nil
case <-ticker.C:
}


@ -2,16 +2,46 @@ package beater
import (
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"time"
"github.com/elastic/beats/libbeat/logp"
)
func requestURL(bt *Icingabeat, method string, URL *url.URL) (*http.Response, error) {
var skipSslVerify bool
certPool := x509.NewCertPool()
if bt.config.SSL.Verify {
skipSslVerify = false
for _, ca := range bt.config.SSL.CertificateAuthorities {
cert, err := ioutil.ReadFile(ca)
if err != nil {
logp.Warn("Could not load certificate: %v", err)
}
certPool.AppendCertsFromPEM(cert)
}
} else {
skipSslVerify = true
}
fmt.Print(bt.config.SSL.CertificateAuthorities)
tlsConfig := &tls.Config{
InsecureSkipVerify: skipSslVerify,
RootCAs: certPool,
}
transport := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: bt.config.SkipSSLVerify},
TLSClientConfig: tlsConfig,
MaxIdleConns: 10,
IdleConnTimeout: 30 * time.Second,
}
client := &http.Client{
@ -37,6 +67,7 @@ func requestURL(bt *Icingabeat, method string, URL *url.URL) (*http.Response, er
switch response.StatusCode {
case 401:
err = errors.New("Authentication failed for user " + bt.config.User)
defer response.Body.Close()
}
return response, err


@ -6,7 +6,6 @@ import (
"github.com/elastic/beats/libbeat/beat"
"github.com/elastic/beats/libbeat/common"
"github.com/elastic/beats/libbeat/logp"
"github.com/elastic/beats/libbeat/publisher"
"github.com/icinga/icingabeat/config"
)
@ -15,7 +14,7 @@ import (
type Icingabeat struct {
done chan struct{}
config config.Config
client publisher.Client
client beat.Client
}
// New beater
@ -35,7 +34,12 @@ func New(b *beat.Beat, cfg *common.Config) (beat.Beater, error) {
// Run Icingabeat
func (bt *Icingabeat) Run(b *beat.Beat) error {
logp.Info("icingabeat is running! Hit CTRL-C to stop it.")
bt.client = b.Publisher.Connect()
var err error
bt.client, err = b.Publisher.Connect()
if err != nil {
return err
}
if len(bt.config.Eventstream.Types) > 0 {
var eventstream *Eventstream
@ -43,6 +47,7 @@ func (bt *Icingabeat) Run(b *beat.Beat) error {
go eventstream.Run()
}
fmt.Print(bt.config.Statuspoller.Interval)
if bt.config.Statuspoller.Interval > 0 {
var statuspoller *Statuspoller
statuspoller = NewStatuspoller(bt, bt.config)


@ -10,6 +10,7 @@ import (
"github.com/icinga/icingabeat/config"
"github.com/elastic/beats/libbeat/beat"
"github.com/elastic/beats/libbeat/common"
"github.com/elastic/beats/libbeat/logp"
)
@ -33,8 +34,8 @@ func NewStatuspoller(bt *Icingabeat, cfg config.Config) *Statuspoller {
}
// BuildStatusEvents ...
func BuildStatusEvents(body []byte) []common.MapStr {
var statusEvents []common.MapStr
func BuildStatusEvents(body []byte) []beat.Event {
var statusEvents []beat.Event
var icingaStatus map[string]interface{}
if err := json.Unmarshal(body, &icingaStatus); err != nil {
@ -44,7 +45,9 @@ func BuildStatusEvents(body []byte) []common.MapStr {
for _, result := range icingaStatus {
for _, status := range result.([]interface{}) {
event := common.MapStr{}
var event beat.Event
event.Fields = common.MapStr{}
event.Timestamp = time.Now()
for key, value := range status.(map[string]interface{}) {
switch key {
@ -53,12 +56,11 @@ func BuildStatusEvents(body []byte) []common.MapStr {
switch statusvalue.(type) {
case map[string]interface{}:
if len(statusvalue.(map[string]interface{})) > 0 {
event.Put(key, value)
event.Fields.Put(key, value)
}
default:
event.Put(key, value)
event.Fields.Put(key, value)
}
}
@ -72,7 +74,7 @@ func BuildStatusEvents(body []byte) []common.MapStr {
case interface{}:
key = "perfdata." + perfdata.(map[string]interface{})["label"].(string)
value = perfdata
event.Put(key, value)
event.Fields.Put(key, value)
}
}
@ -80,15 +82,14 @@ func BuildStatusEvents(body []byte) []common.MapStr {
case "name":
key = "type"
value = "icingabeat.status." + strings.ToLower(value.(string))
event.Put(key, value)
event.Fields.Put(key, value)
default:
event.Put(key, value)
event.Fields.Put(key, value)
}
}
if statusAvailable, _ := event.HasKey("status"); statusAvailable == true {
event.Put("@timestamp", common.Time(time.Now()))
if statusAvailable, _ := event.Fields.HasKey("status"); statusAvailable == true {
statusEvents = append(statusEvents, event)
}
}
@ -120,7 +121,7 @@ func (sp *Statuspoller) Run() error {
}
processedStatusEvents := BuildStatusEvents(body)
sp.icingabeat.client.PublishEvents(processedStatusEvents)
sp.icingabeat.client.PublishAll(processedStatusEvents)
logp.Debug("icingabeat.statuspoller", "Events sent: %v", len(processedStatusEvents))
} else {
@ -129,9 +130,11 @@ func (sp *Statuspoller) Run() error {
select {
case <-sp.done:
defer response.Body.Close()
return nil
case <-ticker.C:
}
}
}

13
cmd/root.go Normal file

@ -0,0 +1,13 @@
package cmd
import (
"github.com/icinga/icingabeat/beater"
cmd "github.com/elastic/beats/libbeat/cmd"
)
// Name of this beat
var Name = "icingabeat"
// RootCmd to handle beats cli
var RootCmd = cmd.GenRootCmd(Name, "", beater.New)


@ -3,17 +3,25 @@
package config
import "time"
import (
"time"
)
// Config options
type Config struct {
Host string `config:"host"`
Port int `config:"port"`
User string `config:"user"`
Password string `config:"password"`
SkipSSLVerify bool `config:"skip_ssl_verify"`
Eventstream EventstreamConfig `config:"eventstream"`
Statuspoller StatuspollerConfig `config:"statuspoller"`
Host string `config:"host"`
Port int `config:"port"`
User string `config:"user"`
Password string `config:"password"`
SSL SSL `config:"ssl"`
Eventstream EventstreamConfig `config:"eventstream"`
Statuspoller StatuspollerConfig `config:"statuspoller"`
}
// SSL options
type SSL struct {
Verify bool `config:"verify"`
CertificateAuthorities []string `config:"certificate_authorities"`
}
// EventstreamConfig options

9
dashboards.yml Normal file

@ -0,0 +1,9 @@
dashboards:
- id: 34e97340-e4ce-11e7-b4d1-8383451ae5a4
file: Icingabeat-CheckRestuls.json
- id: ed031e90-e4ce-11e7-b4d1-8383451ae5a4
file: Icingabeat-Notifications.json
- id: a13f1a80-e4cf-11e7-b4d1-8383451ae5a4
file: Icingabeat-Status.json

1
data/meta.json Normal file

@ -0,0 +1 @@
{"uuid":"0409fabd-956a-4000-9090-22c9c0b438af"}

38
docs/01-about.md Normal file

@ -0,0 +1,38 @@
# Icingabeat
Icingabeat is an [Elastic Beat](https://www.elastic.co/products/beats) that
fetches data from the Icinga 2 API and sends it either directly to Elasticsearch
or Logstash.
> The Beats are lightweight data shippers, written in Go, that you install on
> your servers to capture all sorts of operational data (think of logs,
> metrics, or network packet data). The Beats send the operational data to
> Elasticsearch, either directly or via Logstash, so it can be visualized with
> Kibana.
![CheckResults](../screenshots/checkresults.png) | ![Status](../screenshots/status.png)
-------------------------------------------------|-------------------------------------
## Eventstream
Icingabeat listens to an eventstream published by the Icinga 2 API. This stream
includes detailed information about events, such as check results, notifications,
downtimes, acknowledgements and many other event types. There is no polling
involved in this mode. The configuration section describes how to limit the
amount of data you receive by setting types and filters.
Example use cases:
* Correlate monitoring data with logging information
* Retrace notifications sent by Icinga 2
* Find bottlenecks in execution time and latency of service checks
## Statuspoller
The Icinga 2 API exports a lot of information about the state of the Icinga 2
daemon. Icingabeat can poll this information periodically.
Example use cases:
* Visualize metrics of the Icinga 2 daemon
* Get insights into how each Icinga 2 feature performs
* Information about zones and endpoints
* Compare Icinga servers with each other

101
docs/02-installation.md Normal file

@ -0,0 +1,101 @@
# Installation
## Packages
Packages are available on [packages.icinga.com](https://packages.icinga.com).
Depending on your distribution and version you need to run one of the following
commands:
#### Debian
``` shell
wget -O - https://packages.icinga.com/icinga.key | apt-key add -
echo 'deb http://packages.icinga.com/debian icinga-stretch main' > /etc/apt/sources.list.d/icinga.list
```
``` shell
apt-get update
apt-get install icingabeat
```
#### Ubuntu
``` shell
wget -O - https://packages.icinga.com/icinga.key | apt-key add -
echo 'deb http://packages.icinga.com/ubuntu icinga-xenial main' > /etc/apt/sources.list.d/icinga.list
```
``` shell
apt-get update
apt-get install icingabeat
```
#### CentOS
``` shell
yum install epel-release
rpm --import https://packages.icinga.com/icinga.key
yum install https://packages.icinga.com/epel/icinga-rpm-release-7-latest.noarch.rpm
```
``` shell
yum install icingabeat
```
#### RHEL
``` shell
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm --import https://packages.icinga.com/icinga.key
yum install https://packages.icinga.com/epel/icinga-rpm-release-7-latest.noarch.rpm
```
``` shell
yum install icingabeat
```
#### SLES
``` shell
rpm --import https://packages.icinga.com/icinga.key
zypper ar https://packages.icinga.com/SUSE/ICINGA-release.repo
zypper ref
```
``` shell
zypper install icingabeat
```
### Run
Make sure you have configured Icingabeat properly before starting it. Use one
of the following commands to start Icingabeat:
* `service icingabeat start` or
* `systemctl start icingabeat` or
* `/etc/init.d/icingabeat start`
## Dashboards
We have dashboards prepared that you can use when getting started with
Icingabeat. They are meant to give you some inspiration before you start
exploring the data by yourself.
**Note:** The dashboards require Kibana >= 6.0
Import dashboards and index pattern:
``` shell
icingabeat setup
```
Set Kibana host, user and password if necessary:
``` shell
icingabeat setup -E setup.kibana.host=127.0.0.1:5601 -E setup.kibana.username=elastic -E setup.kibana.password=secret
```
## Manual Installation
Download and install a package or tarball from the
[latest release](https://github.com/Icinga/icingabeat/releases/latest) page.
## Development
Please see [README.md](https://github.com/icinga/icingabeat/README.md) for
instructions on how to build Icingabeat.

109
docs/03-configuration.md Normal file

@ -0,0 +1,109 @@
# Configuration
Configuration of Icingabeat is split into three sections: Connection, Eventstream and
Statuspoller. On Linux, configuration files are located in `/etc/icingabeat`.
## Connection
Settings in this section apply to both modes.
### `host`
Hostname of the Icinga 2 API. This can be either an IP address or a domain name.
Defaults to `localhost`
### `port`
Defaults to `5665`
### `user`
Username to be used for the API connection. You need to create this user in your Icinga 2 configuration. Make sure that it has sufficient permissions to read the
data you want to collect.
Here is an example of an API user in your Icinga 2 configuration:
``` c++
object ApiUser "icinga" {
password = "icinga"
permissions = ["events/*", "status/query"]
}
```
Learn more about the `ApiUser` and its permissions in the
[Icinga 2 docs](https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/icinga2-api#icinga2-api-permissions).
### `password`
Defaults to `icinga`
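Putting the connection settings together, a minimal section using the defaults
described above could look like this:

``` yaml
# Connection to the Icinga 2 API (values shown are the documented defaults)
host: "localhost"
port: 5665
user: "icinga"
password: "icinga"
```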
### `ssl.verify`
Configure SSL verification. If `false` is configured, all server hosts and
certificates will be accepted. In this mode, SSL-based connections are
susceptible to man-in-the-middle attacks. Use only for testing. Default is
`true`.
### `ssl.certificate_authorities`
List of root certificates for HTTPS server verification.
Example:
```
ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
```
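To verify the API certificate against a custom CA, the two SSL settings are
typically combined. A sketch, reusing the example path from above:

``` yaml
# Keep verification enabled and trust an additional root CA
ssl.verify: true
ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
```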
## Eventstream
Settings in this section apply to the eventstream mode. To disable the
eventstream completely, comment out the section.
### `eventstream.types`
You can select which particular Icinga 2 events you want to receive and store.
The following types are available; you must set at least one:
* `CheckResult`
* `StateChange`
* `Notification`
* `AcknowledgementSet`
* `AcknowledgementCleared`
* `CommentAdded`
* `CommentRemoved`
* `DowntimeAdded`
* `DowntimeRemoved`
* `DowntimeStarted`
* `DowntimeTriggered`
To set multiple types, do the following:
```yaml
eventstream.types:
- CheckResult
- StateChange
- Notification
- AcknowledgementSet
- AcknowledgementCleared
```
### `eventstream.filter`
In addition to selecting the types of events, you can filter them by
attributes using the prefix `event.`. By default no filter is set.
###### Examples
Only check results with the exit code 2:
```yaml
eventstream.filter: "event.check_result.exit_status==2"
```
Only check results of services that match `mysql*`:
```yaml
eventstream.filter: 'match("mysql*", event.service)'
```
### `eventstream.retry_interval`
If the connection is lost, Icingabeat will try to reconnect to the API
periodically. This setting defines the interval between connection retries.
Defaults to `10s`
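Combining the settings above, a complete eventstream section might look like
this (the type and filter values are examples, not recommendations):

``` yaml
# Receive only check results with exit code 2, retry every 10 seconds
eventstream.types:
  - CheckResult
eventstream.filter: "event.check_result.exit_status==2"
eventstream.retry_interval: 10s
```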
## Statuspoller
Settings in this section apply to the statuspoller mode.
### `statuspoller.interval`
Interval at which the status API is called. Set to `0` to disable polling.
Defaults to `60s`
## Fields
Icingabeat exports a bunch of fields. Have a look to the
[fields.asciidoc](https://github.com/Icinga/icingabeat/blob/master/docs/fields.asciidoc) for details.

File diff suppressed because it is too large

855
fields.yml Normal file

@ -0,0 +1,855 @@
- key: beat
title: Beat
description: >
Contains common beat fields available in all event types.
fields:
- name: beat.name
description: >
The name of the Beat sending the log messages. If the Beat name is
set in the configuration file, then that value is used. If it is not
set, the hostname is used. To set the Beat name, use the `name`
option in the configuration file.
- name: beat.hostname
description: >
The hostname as returned by the operating system on which the Beat is
running.
- name: beat.timezone
description: >
The timezone as returned by the operating system on which the Beat is
running.
- name: beat.version
description: >
The version of the beat that generated this event.
- name: "@timestamp"
type: date
required: true
format: date
example: August 26th 2016, 12:35:53.332
description: >
The timestamp when the event log record was generated.
- name: tags
description: >
Arbitrary tags that can be set per Beat and per transaction
type.
- name: fields
type: object
object_type: keyword
description: >
Contains user configurable fields.
- name: error
type: group
description: >
Error fields containing additional info in case of errors.
fields:
- name: message
type: text
description: >
Error message.
- name: code
type: long
description: >
Error code.
- name: type
type: keyword
description: >
Error type.
- key: cloud
title: Cloud provider metadata
description: >
Metadata from cloud providers added by the add_cloud_metadata processor.
fields:
- name: meta.cloud.provider
example: ec2
description: >
Name of the cloud provider. Possible values are ec2, gce, or digitalocean.
- name: meta.cloud.instance_id
description: >
Instance ID of the host machine.
- name: meta.cloud.instance_name
description: >
Instance name of the host machine.
- name: meta.cloud.machine_type
example: t2.medium
description: >
Machine type of the host machine.
- name: meta.cloud.availability_zone
example: us-east-1c
description: >
Availability zone in which this host is running.
- name: meta.cloud.project_id
example: project-x
description: >
Name of the project in Google Cloud.
- name: meta.cloud.region
description: >
Region in which this host is running.
- key: docker
title: Docker
description: >
beta[]
Docker stats collected from Docker.
short_config: false
anchor: docker-processor
fields:
- name: docker
type: group
fields:
- name: container.id
type: keyword
description: >
Unique container id.
- name: container.image
type: keyword
description: >
Name of the image the container was built on.
- name: container.name
type: keyword
description: >
Container name.
- name: container.labels
type: object
object_type: keyword
description: >
Image labels.
- key: kubernetes
title: Kubernetes
description: >
beta[]
Kubernetes metadata added by the kubernetes processor
short_config: false
anchor: kubernetes-processor
fields:
- name: kubernetes
type: group
fields:
- name: pod.name
type: keyword
description: >
Kubernetes pod name
- name: namespace
type: keyword
description: >
Kubernetes namespace
- name: labels
type: object
description: >
Kubernetes labels map
- name: annotations
type: object
description: >
Kubernetes annotations map
- name: container.name
type: keyword
description: >
Kubernetes container name
- name: container.image
type: keyword
description: >
Kubernetes container image
- key: icingabeat
title: icingabeat
description: Data received from the Icinga 2 API
fields:
- name: timestamp
type: date
description: >
Timestamp of event occurrence
- name: type
type: keyword
description: >
Type of the document
- name: host
type: keyword
description: >
Host that triggered the event
- name: service
type: keyword
description: >
Service that triggered the event
- name: state
type: integer
description: >
State of the check
- name: state_type
type: integer
description: >
State type of the check
- name: author
type: keyword
description: >
Author of a message
- name: notification_type
type: keyword
description: >
Type of notification
- name: text
type: text
description: >
Text of a message
- name: users
type: keyword
description: >
Affected users of a notification
- name: acknowledgement_type
type: integer
description: >
Type of an acknowledgement
- name: expiry
type: date
description: >
Expiry of an acknowledgement
- name: notify
type: keyword
description: >
If a notification has been sent out
- name: check_result.active
type: boolean
description: >
If check was active or passive
- name: check_result.check_source
type: keyword
description: >
Icinga instance that scheduled the check
- name: check_result.command
type: text
description: >
Command that was executed
- name: check_result.execution_end
type: date
description: >
Time when execution of check ended
- name: check_result.execution_start
type: date
description: >
Time when execution of check started
- name: check_result.exit_status
type: integer
description: >
Exit status
- name: check_result.output
type: text
description: >
Output of check
- name: check_result.performance_data
type: text
description: >
Performance data in text format
- name: check_result.schedule_end
type: date
description: >
Time when scheduling of the check ended
- name: check_result.schedule_start
type: date
description: >
Time when check was scheduled
- name: check_result.state
type: integer
description: >
State of the check
- name: check_result.type
type: keyword
description: >
Type of this event
- name: check_result.vars_after.attempt
type: integer
description: >
Check attempt after check execution
- name: check_result.vars_after.reachable
type: boolean
description: >
Reachable state after check execution
- name: check_result.vars_after.state
type: integer
description: >
State of the check after execution
- name: check_result.vars_after.state_type
type: integer
description: >
State type after execution
- name: check_result.vars_before.attempt
type: integer
description: >
Check attempt before check execution
- name: check_result.vars_before.reachable
type: boolean
description: >
Reachable state before check execution
- name: check_result.vars_before.state
type: integer
description: >
Check state before check execution
- name: check_result.vars_before.state_type
type: integer
description: >
State type before check execution
- name: comment.__name
type: text
description: >
Unique identifier of a comment
- name: comment.author
type: keyword
description: >
Author of a comment
- name: comment.entry_time
type: date
description: >
Entry time of a comment
- name: comment.entry_type
type: integer
description: >
Entry type of a comment
- name: comment.expire_time
type: date
description: >
Expire time of a comment
- name: comment.host_name
type: keyword
description: >
Host name of a comment
- name: comment.legacy_id
type: integer
description: >
Legacy ID of a comment
- name: comment.name
type: keyword
description: >
Identifier of a comment
- name: comment.package
type: keyword
description: >
Config package of a comment
- name: comment.service_name
type: keyword
description: >
Service name of a comment
- name: comment.templates
type: text
description: >
Templates used by a comment
- name: comment.text
type: text
description: >
Text of a comment
- name: comment.type
type: keyword
description: >
Comment type
- name: comment.version
type: keyword
description: >
Config version of comment object
- name: comment.zone
type: keyword
description: >
Zone where comment was generated
- name: downtime.__name
type: text
description: >
Unique identifier of a downtime
- name: downtime.author
type: keyword
description: >
Author of a downtime
- name: downtime.comment
type: text
description: >
Text of a downtime
- name: downtime.config_owner
type: text
description: >
Config owner
- name: downtime.duration
type: integer
description: >
Duration of a downtime
- name: downtime.end_time
type: date
description: >
Timestamp of downtime end
- name: downtime.entry_time
type: date
description: >
Timestamp when downtime was created
- name: downtime.fixed
type: boolean
description: >
If downtime is fixed or flexible
- name: downtime.host_name
type: keyword
description: >
Hostname of a downtime
- name: downtime.legacy_id
type: integer
description: >
The integer ID of a downtime
- name: downtime.name
type: keyword
description: >
Downtime config identifier
- name: downtime.package
type: keyword
description: >
Configuration package of downtime
- name: downtime.scheduled_by
type: text
description: >
By whom downtime was scheduled
- name: downtime.service_name
type: keyword
description: >
Service name of a downtime
- name: downtime.start_time
type: date
description: >
Timestamp when downtime starts
- name: downtime.templates
type: text
description: >
Templates used by this downtime
- name: downtime.trigger_time
type: date
description: >
Timestamp when downtime was triggered
- name: downtime.triggered_by
type: text
description: >
By whom downtime was triggered
- name: downtime.triggers
type: text
description: >
Downtime triggers
- name: downtime.type
type: keyword
description: >
Downtime type
- name: downtime.version
type: keyword
description: >
Config version of downtime
- name: downtime.was_cancelled
type: boolean
description: >
If downtime was cancelled
- name: downtime.zone
type: keyword
description: >
Zone of downtime
- name: status.active_host_checks
type: integer
description: >
Active host checks
- name: status.active_host_checks_15min
type: integer
description: >
Active host checks in the last 15 minutes
- name: status.active_host_checks_1min
type: integer
description: >
Active host checks in the last minute
- name: status.active_host_checks_5min
type: integer
description: >
Active host checks in the last 5 minutes
- name: status.active_service_checks
type: integer
description: >
Active service checks
- name: status.active_service_checks_15min
type: integer
description: >
Active service checks in the last 15 minutes
- name: status.active_service_checks_1min
type: integer
description: >
Active service checks in the last minute
- name: status.active_service_checks_5min
type: integer
description: >
Active service checks in the last 5 minutes
- name: status.api.identity
type: keyword
description: >
API identity
- name: status.api.num_conn_endpoints
type: integer
description: >
Number of connected endpoints
- name: status.api.num_endpoints
type: integer
description: >
Total number of endpoints
- name: status.api.num_not_conn_endpoints
type: integer
description: >
Number of not connected endpoints
- name: status.api.zones.demo.client_log_lag
type: integer
description: >
Lag of the replaylog
- name: status.api.zones.demo.connected
type: boolean
description: >
Zone connected
- name: status.api.zones.demo.endpoints
type: text
description: >
Endpoint names
- name: status.api.zones.demo.parent_zone
type: keyword
description: >
Parent zone
- name: status.avg_execution_time
type: integer
description: >
Average execution time of checks
- name: status.avg_latency
type: integer
description: >
Average latency time
- name: status.checkercomponent.checker.idle
type: integer
description: >
Idle checks
- name: status.checkercomponent.checker.pending
type: integer
description: >
Pending checks
- name: status.filelogger.main-log
type: integer
description: >
Mainlog enabled
- name: status.icingaapplication.app.enable_event_handlers
type: boolean
description: >
Event handlers enabled
- name: status.icingaapplication.app.enable_flapping
type: boolean
description: >
Flapping detection enabled
- name: status.icingaapplication.app.enable_host_checks
type: boolean
description: >
Host checks enabled
- name: status.icingaapplication.app.enable_notifications
type: boolean
description: >
Notifications enabled
- name: status.icingaapplication.app.enable_perfdata
type: boolean
description: >
Perfdata enabled
- name: status.icingaapplication.app.enable_service_checks
type: boolean
description: >
Service checks enabled
- name: status.icingaapplication.app.node_name
type: keyword
description: >
Node name
- name: status.icingaapplication.app.pid
type: integer
description: >
PID
- name: status.icingaapplication.app.program_start
type: integer
description: >
Time when Icinga started
- name: status.icingaapplication.app.version
type: keyword
description: >
Version
- name: status.idomysqlconnection.ido-mysql.connected
type: boolean
description: >
IDO connected
- name: status.idomysqlconnection.ido-mysql.instance_name
type: keyword
description: >
IDO Instance name
- name: status.idomysqlconnection.ido-mysql.query_queue_items
type: integer
description: >
IDO query items in the queue
- name: status.idomysqlconnection.ido-mysql.version
type: keyword
description: >
IDO schema version
- name: status.max_execution_time
type: integer
description: >
Max execution time
- name: status.max_latency
type: integer
description: >
Max latency
- name: status.min_execution_time
type: integer
description: >
Min execution time
- name: status.min_latency
type: integer
description: >
Min latency
- name: status.notificationcomponent.notification
type: integer
description: >
Notification
- name: status.num_hosts_acknowledged
type: integer
description: >
Amount of acknowledged hosts
- name: status.num_hosts_down
type: integer
description: >
Amount of down hosts
- name: status.num_hosts_flapping
type: integer
description: >
Amount of flapping hosts
- name: status.num_hosts_in_downtime
type: integer
description: >
Amount of hosts in downtime
- name: status.num_hosts_pending
type: integer
description: >
Amount of pending hosts
- name: status.num_hosts_unreachable
type: integer
description: >
Amount of unreachable hosts
- name: status.num_hosts_up
type: integer
description: >
Amount of hosts in up state
- name: status.num_services_acknowledged
type: integer
description: >
Amount of acknowledged services
- name: status.num_services_critical
type: integer
description: >
Amount of critical services
- name: status.num_services_flapping
type: integer
description: >
Amount of flapping services
- name: status.num_services_in_downtime
type: integer
description: >
Amount of services in downtime
- name: status.num_services_ok
type: integer
description: >
Amount of services in ok state
- name: status.num_services_pending
type: integer
description: >
Amount of pending services
- name: status.num_services_unknown
type: integer
description: >
Amount of unknown services
- name: status.num_services_unreachable
type: integer
description: >
Amount of unreachable services
- name: status.num_services_warning
type: integer
description: >
Amount of services in warning state
- name: status.passive_host_checks
type: integer
description: >
Amount of passive host checks
- name: status.passive_host_checks_15min
type: integer
description: >
Amount of passive host checks in the last 15 minutes
- name: status.passive_host_checks_1min
type: integer
description: >
Amount of passive host checks in the last minute
- name: status.passive_host_checks_5min
type: integer
description: >
Amount of passive host checks in the last 5 minutes
- name: status.passive_service_checks
type: integer
description: >
Amount of passive service checks
- name: status.passive_service_checks_15min
type: integer
description: >
Amount of passive service checks in the last 15 minutes
- name: status.passive_service_checks_1min
type: integer
description: >
Amount of passive service checks in the last minute
- name: status.passive_service_checks_5min
type: integer
description: >
Amount of passive service checks in the last 5 minutes
- name: status.uptime
type: integer
description: >
Uptime


@ -223,6 +223,14 @@ output.elasticsearch:
# Path to the Elasticsearch 2.x version of the template file.
#template.versions.2x.path: "${path.config}/icingabeat.template-es2x.json"
# If set to true, icingabeat checks the Elasticsearch version at connect time, and if it
# is 6.x, it loads the file specified by the template.versions.6x.path setting. The
# default is true.
#template.versions.6x.enabled: true
# Path to the Elasticsearch 6.x version of the template file.
#template.versions.6x.path: "${path.config}/icingabeat.template-es6x.json"
# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true
@ -255,6 +263,10 @@ output.elasticsearch:
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#----------------------------- Logstash output ---------------------------------
#output.logstash:
@ -277,6 +289,11 @@ output.elasticsearch:
# new batches.
#pipelining: 0
# If enabled only a subset of events in a batch of events is transferred per
# transaction. The number of events to be sent increases up to `bulk_max_size`
# if no error is encountered.
#slow_start: false
# Optional index name. The default index name is set to name of the beat
# in all lowercase.
#index: 'icingabeat'
@ -319,6 +336,10 @@ output.elasticsearch:
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- Kafka output ----------------------------------
#output.kafka:
# Boolean flag to enable or disable the output module.
@ -454,6 +475,10 @@ output.elasticsearch:
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- Redis output ----------------------------------
#output.redis:
# Boolean flag to enable or disable the output module.
@ -551,6 +576,10 @@ output.elasticsearch:
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- File output -----------------------------------
#output.file:
@ -693,3 +722,6 @@ logging.files:
# Number of rotated log files to keep. Oldest files will be deleted first.
#keepfiles: 7
# The permissions mask to apply when rotating log files. The default value is 0600.
# Must be a valid Unix-style file permissions mask expressed in octal notation.
#permissions: 0600

851
icingabeat.reference.yml Normal file

@ -0,0 +1,851 @@
################### Icingabeat Configuration Example #########################
############################# Icingabeat ######################################
icingabeat:
# Defines the Icinga API endpoint
host: "localhost"
# Defines the port of the API endpoint
port: 5665
# A user with sufficient permissions
user: "icinga"
# Password of the user
password: "icinga"
# Configure SSL verification. If `false` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `true`.
ssl.verify: true
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
########################### Icingabeat Eventstream ##########################
#
# Icingabeat supports capturing of an eventstream and periodic polling of the
# Icinga status data.
# Decide which events to receive from the event stream.
# The following event stream types are available:
#
# * CheckResult
# * StateChange
# * Notification
# * AcknowledgementSet
# * AcknowledgementCleared
# * CommentAdded
# * CommentRemoved
# * DowntimeAdded
# * DowntimeRemoved
# * DowntimeStarted
# * DowntimeTriggered
#
# To disable eventstream, leave the types empty or comment out the option
eventstream.types:
- CheckResult
- StateChange
# Event streams can be filtered by attributes using the prefix 'event.'
#
# Example for the CheckResult type with the exit_status set to 2:
# filter: "event.check_result.exit_status==2"
#
# Example for the CheckResult type with the service matching the string
# pattern "mysql*":
# filter: 'match("mysql*", event.service)'
#
# To disable filtering set an empty string or comment out the filter option
eventstream.filter: ""
# Defines how fast to reconnect to the API on connection loss
eventstream.retry_interval: 10s
########################### Icingabeat Statuspoller #########################
#
# Icingabeat can collect status information about Icinga 2 periodically. Set
# an interval at which the status API should be called. Set to 0 to disable
# polling.
statuspoller.interval: 60s
#================================ General ======================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this options is not defined, the hostname is used.
#name:
# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
# env: staging
# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a fields
# sub-dictionary. Default is false.
#fields_under_root: false
# Internal queue configuration for buffering events to be published.
#queue:
# Queue type by name (default 'mem')
# The memory queue will present all available events (up to the outputs
# bulk_max_size) to the output, the moment the output is ready to serve
# another batch of events.
#mem:
# Max number of events the queue can buffer.
#events: 4096
# Hints the minimum number of events stored in the queue,
# before providing a batch of events to the outputs.
# A value of 0 (the default) ensures events are immediately available
# to be sent to the outputs.
#flush.min_events: 2048
# Maximum duration after which events are available to the outputs,
# if the number of events stored in the queue is < flush.min_events.
#flush.timeout: 1s
# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:
#================================ Processors ===================================
# Processors are used to reduce the number of fields in the exported event or to
# enhance the event with external metadata. This section defines a list of
# processors that are applied one by one and the first one receives the initial
# event:
#
# event -> filter1 -> event1 -> filter2 ->event2 ...
#
# The supported processors are drop_fields, drop_event, include_fields, and
# add_cloud_metadata.
#
# For example, you can use the following processors to keep the fields that
# contain CPU load percentages, but remove the fields that contain CPU ticks
# values:
#
#processors:
#- include_fields:
# fields: ["cpu"]
#- drop_fields:
# fields: ["cpu.user", "cpu.system"]
#
# The following example drops the events that have the HTTP response code 200:
#
#processors:
#- drop_event:
# when:
# equals:
# http.code: 200
#
# The following example enriches each event with metadata from the cloud
# provider about the host machine. It works on EC2, GCE, DigitalOcean,
# Tencent Cloud, and Alibaba Cloud.
#
#processors:
#- add_cloud_metadata: ~
#
# The following example enriches each event with the machine's local time zone
# offset from UTC.
#
#processors:
#- add_locale:
# format: offset
#
# The following example enriches each event with docker metadata, it matches
# given fields to an existing container id and adds info from that container:
#
#processors:
#- add_docker_metadata:
# host: "unix:///var/run/docker.sock"
# match_fields: ["system.process.cgroup.id"]
# # To connect to Docker over TLS you must specify a client and CA certificate.
# #ssl:
# # certificate_authority: "/etc/pki/root/ca.pem"
# # certificate: "/etc/pki/client/cert.pem"
# # key: "/etc/pki/client/cert.key"
#
# The following example enriches each event with docker metadata, it matches
# container id from log path available in `source` field (by default it expects
# it to be /var/lib/docker/containers/*/*.log).
#
#processors:
#- add_docker_metadata: ~
#============================= Elastic Cloud ==================================
# These settings simplify using icingabeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs ======================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output -------------------------------
output.elasticsearch:
# Boolean flag to enable or disable the output module.
#enabled: true
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify an additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
hosts: ["localhost:9200"]
# Set gzip compression level.
#compression_level: 0
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
# Dictionary of HTTP parameters to pass within the url with index operations.
#parameters:
#param1: value1
#param2: value2
# Number of workers per Elasticsearch host.
#worker: 1
# Optional index name. The default is "icingabeat" plus date
# and generates [icingabeat-]YYYY.MM.DD keys.
# In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
#index: "icingabeat-%{[beat.version]}-%{+yyyy.MM.dd}"
# Optional ingest node pipeline. By default no pipeline will be used.
#pipeline: ""
# Optional HTTP Path
#path: "/elasticsearch"
# Custom HTTP headers to add to each request
#headers:
# X-My-Header: Contents of the header
# Proxy server url
#proxy_url: http://proxy:3128
# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50
# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90
# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# SSL configuration. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#----------------------------- Logstash output ---------------------------------
#output.logstash:
# Boolean flag to enable or disable the output module.
#enabled: true
# The Logstash hosts
#hosts: ["localhost:5044"]
# Number of workers per Logstash host.
#worker: 1
# Set gzip compression level.
#compression_level: 3
# Optional maximum time to live for a connection to Logstash, after which the
# connection will be re-established. A value of `0s` (the default) will
# disable this feature.
#
# Not yet supported for async connections (i.e. with the "pipelining" option set)
#ttl: 30s
# Optionally load balance the events between the Logstash hosts. Default is false.
#loadbalance: false
# Number of batches to be sent asynchronously to logstash while processing
# new batches.
#pipelining: 5
# If enabled only a subset of events in a batch of events is transferred per
# transaction. The number of events to be sent increases up to `bulk_max_size`
# if no error is encountered.
#slow_start: false
# Optional index name. The default index name is set to icingabeat
# in all lowercase.
#index: 'icingabeat'
# SOCKS5 proxy server URL
#proxy_url: socks5://user:password@socks5-server:2233
# Resolve names locally when using a proxy server. Defaults to false.
#proxy_use_local_resolver: false
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- Kafka output ----------------------------------
#output.kafka:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Kafka broker addresses from where to fetch the cluster metadata.
# The cluster metadata contain the actual Kafka brokers events are published
# to.
#hosts: ["localhost:9092"]
# The Kafka topic used for produced events. The setting can be a format string
# using any event field. To set the topic from document type use `%{[type]}`.
#topic: beats
# The Kafka event key setting. Use format string to create unique event key.
# By default no event key will be generated.
#key: ''
# The Kafka event partitioning strategy. Default hashing strategy is `hash`
# using the `output.kafka.key` setting or randomly distributes events if
# `output.kafka.key` is not configured.
#partition.hash:
# If enabled, events will only be published to partitions with reachable
# leaders. Default is false.
#reachable_only: false
# Configure alternative event field names used to compute the hash value.
# If empty `output.kafka.key` setting will be used.
# Default value is empty list.
#hash: []
# Authentication details. Password is required if username is set.
#username: ''
#password: ''
# Kafka version icingabeat is assumed to run against. Defaults to the oldest
# supported stable version (currently version 0.8.2.0)
#version: 0.8.2
# Metadata update configuration. Metadata contains leader information
# used to decide which broker to publish to.
#metadata:
# Max metadata request retry attempts when cluster is in middle of leader
# election. Defaults to 3 retries.
#retry.max: 3
# Waiting time between retries during leader elections. Default is 250ms.
#retry.backoff: 250ms
# Refresh metadata interval. Defaults to every 10 minutes.
#refresh_frequency: 10m
# The number of concurrent load-balanced Kafka output workers.
#worker: 1
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Kafka request. The default
# is 2048.
#bulk_max_size: 2048
# The number of seconds to wait for responses from the Kafka brokers before
# timing out. The default is 30s.
#timeout: 30s
# The maximum duration a broker will wait for number of required ACKs. The
# default is 10s.
#broker_timeout: 10s
# The number of messages buffered for each Kafka broker. The default is 256.
#channel_buffer_size: 256
# The keep-alive period for an active network connection. If 0s, keep-alives
# are disabled. The default is 0 seconds.
#keep_alive: 0
# Sets the output compression codec. Must be one of none, snappy and gzip. The
# default is gzip.
#compression: gzip
# The maximum permitted size of JSON-encoded messages. Bigger messages will be
# dropped. The default value is 1000000 (bytes). This value should be equal to
# or less than the broker's message.max.bytes.
#max_message_bytes: 1000000
# The ACK reliability level required from broker. 0=no response, 1=wait for
# local commit, -1=wait for all replicas to commit. The default is 1. Note:
# If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
# on error.
#required_acks: 1
# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
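
# As a concrete illustration, the settings above can be combined into a minimal
# enabled Kafka output. This is a sketch, not a recommended production setup;
# the broker addresses are placeholders, and the topic format string routes
# events by document type as described above:

```yaml
output.kafka:
  enabled: true
  # Placeholder brokers; cluster metadata fetched from these decides
  # which broker each event is published to.
  hosts: ["kafka1.example.com:9092", "kafka2.example.com:9092"]
  # One topic per document type, set via a format string.
  topic: '%{[type]}'
  # Hash-partition on the event key; only publish to partitions
  # whose leader is reachable.
  partition.hash:
    reachable_only: true
  required_acks: 1
  compression: gzip
```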
#------------------------------- Redis output ----------------------------------
#output.redis:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Redis servers to connect to. If load balancing is enabled, the
# events are distributed to the servers in the list. If one server becomes
# unreachable, the events are distributed to the reachable servers only.
#hosts: ["localhost:6379"]
# The Redis port to use if hosts does not contain a port number. The default
# is 6379.
#port: 6379
# The name of the Redis list or channel the events are published to. The
# default is icingabeat.
#key: icingabeat
# The password to authenticate with. The default is no authentication.
#password:
# The Redis database number where the events are published. The default is 0.
#db: 0
# The Redis data type to use for publishing events. If the data type is list,
# the Redis RPUSH command is used. If the data type is channel, the Redis
# PUBLISH command is used. The default value is list.
#datatype: list
# The number of workers to use for each host configured to publish events to
# Redis. Use this setting along with the loadbalance option. For example, if
# you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
# host).
#worker: 1
# If set to true and multiple hosts or workers are configured, the output
# plugin load balances published events onto all Redis hosts. If set to false,
# the output plugin sends all events to only one host (determined at random)
# and will switch to another host if the currently selected one becomes
# unreachable. The default value is true.
#loadbalance: true
# The Redis connection timeout in seconds. The default is 5 seconds.
#timeout: 5s
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Redis request or pipeline.
# The default is 2048.
#bulk_max_size: 2048
# The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
# value must be a URL with a scheme of socks5://.
#proxy_url:
# This option determines whether Redis hostnames are resolved locally when
# using a proxy. The default value is false, which means that name resolution
# occurs on the proxy server.
#proxy_use_local_resolver: false
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
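
# Analogously, a minimal enabled Redis output, assuming a single local Redis
# instance, might look like this sketch:

```yaml
output.redis:
  enabled: true
  hosts: ["localhost:6379"]
  # Events are RPUSHed onto this list; use datatype `channel`
  # to PUBLISH to a channel instead.
  key: icingabeat
  datatype: list
  timeout: 5s
```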
#------------------------------- File output -----------------------------------
#output.file:
# Boolean flag to enable or disable the output module.
#enabled: true
# Path to the directory where to save the generated files. The option is
# mandatory.
#path: "/tmp/icingabeat"
# Name of the generated files. The default is `icingabeat` and it generates
# files: `icingabeat`, `icingabeat.1`, `icingabeat.2`, etc.
#filename: icingabeat
# Maximum size in kilobytes of each file. When this size is reached, and on
# every icingabeat restart, the files are rotated. The default value is 10240
# kB.
#rotate_every_kb: 10240
# Maximum number of files under path. When this number of files is reached,
# the oldest file is deleted and the rest are shifted from last to first. The
# default is 7 files.
#number_of_files: 7
# Permissions to use for file creation. The default is 0600.
#permissions: 0600
#----------------------------- Console output ---------------------------------
#output.console:
# Boolean flag to enable or disable the output module.
#enabled: true
# Pretty-print JSON events
#pretty: false
#================================= Paths ======================================
# The home path for the icingabeat installation. This is the default base path
# for all other path settings and for miscellaneous files that come with the
# distribution (for example, the sample dashboards).
# If not set by a CLI flag or in the configuration file, the default for the
# home path is the location of the binary.
#path.home:
# The configuration path for the icingabeat installation. This is the default
# base path for configuration files, including the main YAML configuration file
# and the Elasticsearch template file. If not set by a CLI flag or in the
# configuration file, the default for the configuration path is the home path.
#path.config: ${path.home}
# The data path for the icingabeat installation. This is the default base path
# for all the files in which icingabeat needs to store its data. If not set by a
# CLI flag or in the configuration file, the default for the data path is a data
# subdirectory inside the home path.
#path.data: ${path.home}/data
# The logs path for an icingabeat installation. This is the default location for
# the Beat's log files. If not set by a CLI flag or in the configuration file,
# the default for the logs path is a logs subdirectory inside the home path.
#path.logs: ${path.home}/logs
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false
# The directory from where to read the dashboards. The default is the `kibana`
# folder in the home path.
#setup.dashboards.directory: ${path.home}/kibana
# The URL from where to download the dashboards archive. It is used instead of
# the directory if it has a value.
#setup.dashboards.url:
# The file archive (zip file) from where to read the dashboards. It is used instead
# of the directory when it has a value.
#setup.dashboards.file:
# In case the archive contains the dashboards from multiple Beats, this lets you
# select which one to load. You can load all the dashboards in the archive by
# setting this to the empty string.
#setup.dashboards.beat: icingabeat
# The name of the Kibana index to use for setting the configuration. Default is ".kibana"
#setup.dashboards.kibana_index: .kibana
# The Elasticsearch index name. This overwrites the index name defined in the
# dashboards and index pattern. Example: testbeat-*
#setup.dashboards.index:
# Always use the Kibana API for loading the dashboards instead of autodetecting
# how to install the dashboards by first querying Elasticsearch.
#setup.dashboards.always_kibana: false
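
# For example, enabling the single setting below loads the bundled sample
# dashboards from the default `kibana` folder at startup; a custom archive
# could be used instead via the URL option (the URL shown is a placeholder,
# not an official download location):

```yaml
setup.dashboards.enabled: true
# Optional: load from an archive instead of the local folder.
#setup.dashboards.url: "https://example.com/icingabeat-dashboards.zip"
```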
#============================== Template =====================================
# A template is used to set the mapping in Elasticsearch
# By default template loading is enabled and the template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones.
# Set to false to disable template loading.
#setup.template.enabled: true
# Template name. By default the template name is "icingabeat-%{[beat.version]}".
# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
#setup.template.name: "icingabeat-%{[beat.version]}"
# Template pattern. By default the template pattern is "icingabeat-%{[beat.version]}-*" to apply
# to the default index settings. The first part is the version of the beat and then -* is used
# to match all daily indices. The template name and pattern have to be set in case the
# Elasticsearch index pattern is modified.
#setup.template.pattern: "icingabeat-%{[beat.version]}-*"
# Path to fields.yml file to generate the template
#setup.template.fields: "${path.config}/fields.yml"
# Overwrite existing template
#setup.template.overwrite: false
# Elasticsearch template settings
setup.template.settings:
# A dictionary of settings to place into the settings.index dictionary
# of the Elasticsearch template. For more details, please check
# https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
#index:
#number_of_shards: 1
#codec: best_compression
#number_of_routing_shards: 30
# A dictionary of settings for the _source field. For more details, please check
# https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html
#_source:
#enabled: false
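
# As an illustration, a small monitoring cluster might keep the default
# template name and pattern but reduce shards and enable best compression.
# This is a tuning sketch, not a default:

```yaml
setup.template.name: "icingabeat-%{[beat.version]}"
setup.template.pattern: "icingabeat-%{[beat.version]}-*"
setup.template.settings:
  index:
    number_of_shards: 1
    codec: best_compression
```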
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify an additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
# Optional HTTP Path
#path: ""
# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# SSL configuration. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
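
# Putting the Kibana settings together, a sketch of an HTTPS endpoint with
# basic auth and a private CA (host, credentials, and CA path below are
# placeholders) could be:

```yaml
setup.kibana:
  host: "kibana.example.com:5601"
  protocol: "https"
  username: "elastic"
  password: "changeme"
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
```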
#================================ Logging ======================================
# There are three options for the log output: syslog, file, stderr.
# On Windows systems, log output is sent to the file output by default;
# on all other systems it is sent to syslog by default.
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: info
# Enable debug output for selected components. To enable all selectors use ["*"]
# Other available selectors are "beat", "publish", "service"
# Multiple selectors can be chained.
#logging.selectors: [ ]
# Send all logging output to syslog. The default is false.
#logging.to_syslog: true
# If enabled, icingabeat periodically logs its internal metrics that have changed
# in the last period. For each metric that changed, the delta from the value at
# the beginning of the period is logged. Also, the total values for
# all non-zero internal metrics are logged on shutdown. The default is true.
#logging.metrics.enabled: true
# The period after which to log the internal metrics. The default is 30s.
#logging.metrics.period: 30s
# Logging to rotating files. Set logging.to_files to false to disable logging to
# files.
logging.to_files: true
logging.files:
# Configure the path where the logs are written. The default is the logs directory
# under the home path (the binary location).
#path: /var/log/icingabeat
# The name of the files where the logs are written to.
#name: icingabeat
# Configure log file size limit. If limit is reached, log file will be
# automatically rotated
#rotateeverybytes: 10485760 # = 10MB
# Number of rotated log files to keep. Oldest files will be deleted first.
#keepfiles: 7
# The permissions mask to apply when rotating log files. The default value is 0600.
# Must be a valid Unix-style file permissions mask expressed in octal notation.
#permissions: 0600
# Set to true to log messages in json format.
#logging.json: false
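
# A typical production-style logging sketch, writing rotated files under
# /var/log/icingabeat with the defaults spelled out explicitly:

```yaml
logging.level: info
logging.to_files: true
logging.to_syslog: false
logging.files:
  path: /var/log/icingabeat
  name: icingabeat
  keepfiles: 7
  permissions: 0600
```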

View File

@ -1,702 +0,0 @@
{
"mappings": {
"_default_": {
"_all": {
"norms": {
"enabled": false
}
},
"_meta": {
"version": "1.1.0"
},
"date_detection": false,
"dynamic_templates": [
{
"strings_as_keyword": {
"mapping": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"match_mapping_type": "string"
}
}
],
"properties": {
"@timestamp": {
"type": "date"
},
"acknowledgement_type": {
"type": "long"
},
"author": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"beat": {
"properties": {
"hostname": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"name": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"version": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
}
}
},
"check_result": {
"properties": {
"active": {
"type": "boolean"
},
"check_source": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"command": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"execution_end": {
"type": "date"
},
"execution_start": {
"type": "date"
},
"exit_status": {
"type": "long"
},
"output": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"performance_data": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"schedule_end": {
"type": "date"
},
"schedule_start": {
"type": "date"
},
"state": {
"type": "long"
},
"type": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"vars_after": {
"properties": {
"attempt": {
"type": "long"
},
"reachable": {
"type": "boolean"
},
"state": {
"type": "long"
},
"state_type": {
"type": "long"
}
}
},
"vars_before": {
"properties": {
"attempt": {
"type": "long"
},
"reachable": {
"type": "boolean"
},
"state": {
"type": "long"
},
"state_type": {
"type": "long"
}
}
}
}
},
"comment": {
"properties": {
"__name": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"author": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"entry_time": {
"type": "date"
},
"entry_type": {
"type": "long"
},
"expire_time": {
"type": "date"
},
"host_name": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"legacy_id": {
"type": "long"
},
"name": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"package": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"service_name": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"templates": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"text": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"type": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"version": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"zone": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
}
}
},
"downtime": {
"properties": {
"__name": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"author": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"comment": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"config_owner": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"duration": {
"type": "long"
},
"end_time": {
"type": "date"
},
"entry_time": {
"type": "date"
},
"fixed": {
"type": "boolean"
},
"host_name": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"legacy_id": {
"type": "long"
},
"name": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"package": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"scheduled_by": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"service_name": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"start_time": {
"type": "date"
},
"templates": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"trigger_time": {
"type": "date"
},
"triggered_by": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"triggers": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"type": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"version": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"was_cancelled": {
"type": "boolean"
},
"zone": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
}
}
},
"expiry": {
"type": "date"
},
"fields": {
"properties": {}
},
"host": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"meta": {
"properties": {
"cloud": {
"properties": {
"availability_zone": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"instance_id": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"machine_type": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"project_id": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"provider": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"region": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
}
}
}
}
},
"notification_type": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"notify": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"service": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"state": {
"type": "long"
},
"state_type": {
"type": "long"
},
"status": {
"properties": {
"active_host_checks": {
"type": "long"
},
"active_host_checks_15min": {
"type": "long"
},
"active_host_checks_1min": {
"type": "long"
},
"active_host_checks_5min": {
"type": "long"
},
"active_service_checks": {
"type": "long"
},
"active_service_checks_15min": {
"type": "long"
},
"active_service_checks_1min": {
"type": "long"
},
"active_service_checks_5min": {
"type": "long"
},
"api": {
"properties": {
"identity": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"num_conn_endpoints": {
"type": "long"
},
"num_endpoints": {
"type": "long"
},
"num_not_conn_endpoints": {
"type": "long"
},
"zones": {
"properties": {
"demo": {
"properties": {
"client_log_lag": {
"type": "long"
},
"connected": {
"type": "boolean"
},
"endpoints": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"parent_zone": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
}
}
}
}
}
}
},
"avg_execution_time": {
"type": "long"
},
"avg_latency": {
"type": "long"
},
"checkercomponent": {
"properties": {
"checker": {
"properties": {
"idle": {
"type": "long"
},
"pending": {
"type": "long"
}
}
}
}
},
"filelogger": {
"properties": {
"main-log": {
"type": "long"
}
}
},
"icingaapplication": {
"properties": {
"app": {
"properties": {
"enable_event_handlers": {
"type": "boolean"
},
"enable_flapping": {
"type": "boolean"
},
"enable_host_checks": {
"type": "boolean"
},
"enable_notifications": {
"type": "boolean"
},
"enable_perfdata": {
"type": "boolean"
},
"enable_service_checks": {
"type": "boolean"
},
"node_name": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"pid": {
"type": "long"
},
"program_start": {
"type": "long"
},
"version": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
}
}
}
}
},
"idomysqlconnection": {
"properties": {
"ido-mysql": {
"properties": {
"connected": {
"type": "boolean"
},
"instance_name": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"query_queue_items": {
"type": "long"
},
"version": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
}
}
}
}
},
"max_execution_time": {
"type": "long"
},
"max_latency": {
"type": "long"
},
"min_execution_time": {
"type": "long"
},
"min_latency": {
"type": "long"
},
"notificationcomponent": {
"properties": {
"notification": {
"type": "long"
}
}
},
"num_hosts_acknowledged": {
"type": "long"
},
"num_hosts_down": {
"type": "long"
},
"num_hosts_flapping": {
"type": "long"
},
"num_hosts_in_downtime": {
"type": "long"
},
"num_hosts_pending": {
"type": "long"
},
"num_hosts_unreachable": {
"type": "long"
},
"num_hosts_up": {
"type": "long"
},
"num_services_acknowledged": {
"type": "long"
},
"num_services_critical": {
"type": "long"
},
"num_services_flapping": {
"type": "long"
},
"num_services_in_downtime": {
"type": "long"
},
"num_services_ok": {
"type": "long"
},
"num_services_pending": {
"type": "long"
},
"num_services_unknown": {
"type": "long"
},
"num_services_unreachable": {
"type": "long"
},
"num_services_warning": {
"type": "long"
},
"passive_host_checks": {
"type": "long"
},
"passive_host_checks_15min": {
"type": "long"
},
"passive_host_checks_1min": {
"type": "long"
},
"passive_host_checks_5min": {
"type": "long"
},
"passive_service_checks": {
"type": "long"
},
"passive_service_checks_15min": {
"type": "long"
},
"passive_service_checks_1min": {
"type": "long"
},
"passive_service_checks_5min": {
"type": "long"
},
"uptime": {
"type": "long"
}
}
},
"tags": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"text": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
},
"timestamp": {
"type": "date"
},
"type": {
"ignore_above": 1024,
"index": "not_analyzed",
"type": "string"
},
"users": {
"index": "analyzed",
"norms": {
"enabled": false
},
"type": "string"
}
}
}
},
"order": 0,
"settings": {
"index.refresh_interval": "5s"
},
"template": "icingabeat-*"
}

View File

@ -1,612 +0,0 @@
{
"mappings": {
"_default_": {
"_all": {
"norms": false
},
"_meta": {
"version": "1.1.0"
},
"date_detection": false,
"dynamic_templates": [
{
"strings_as_keyword": {
"mapping": {
"ignore_above": 1024,
"type": "keyword"
},
"match_mapping_type": "string"
}
}
],
"properties": {
"@timestamp": {
"type": "date"
},
"acknowledgement_type": {
"type": "long"
},
"author": {
"ignore_above": 1024,
"type": "keyword"
},
"beat": {
"properties": {
"hostname": {
"ignore_above": 1024,
"type": "keyword"
},
"name": {
"ignore_above": 1024,
"type": "keyword"
},
"version": {
"ignore_above": 1024,
"type": "keyword"
}
}
},
"check_result": {
"properties": {
"active": {
"type": "boolean"
},
"check_source": {
"ignore_above": 1024,
"type": "keyword"
},
"command": {
"norms": false,
"type": "text"
},
"execution_end": {
"type": "date"
},
"execution_start": {
"type": "date"
},
"exit_status": {
"type": "long"
},
"output": {
"norms": false,
"type": "text"
},
"performance_data": {
"norms": false,
"type": "text"
},
"schedule_end": {
"type": "date"
},
"schedule_start": {
"type": "date"
},
"state": {
"type": "long"
},
"type": {
"ignore_above": 1024,
"type": "keyword"
},
"vars_after": {
"properties": {
"attempt": {
"type": "long"
},
"reachable": {
"type": "boolean"
},
"state": {
"type": "long"
},
"state_type": {
"type": "long"
}
}
},
"vars_before": {
"properties": {
"attempt": {
"type": "long"
},
"reachable": {
"type": "boolean"
},
"state": {
"type": "long"
},
"state_type": {
"type": "long"
}
}
}
}
},
"comment": {
"properties": {
"__name": {
"norms": false,
"type": "text"
},
"author": {
"ignore_above": 1024,
"type": "keyword"
},
"entry_time": {
"type": "date"
},
"entry_type": {
"type": "long"
},
"expire_time": {
"type": "date"
},
"host_name": {
"ignore_above": 1024,
"type": "keyword"
},
"legacy_id": {
"type": "long"
},
"name": {
"ignore_above": 1024,
"type": "keyword"
},
"package": {
"ignore_above": 1024,
"type": "keyword"
},
"service_name": {
"ignore_above": 1024,
"type": "keyword"
},
"templates": {
"norms": false,
"type": "text"
},
"text": {
"norms": false,
"type": "text"
},
"type": {
"ignore_above": 1024,
"type": "keyword"
},
"version": {
"ignore_above": 1024,
"type": "keyword"
},
"zone": {
"ignore_above": 1024,
"type": "keyword"
}
}
},
"downtime": {
"properties": {
"__name": {
"norms": false,
"type": "text"
},
"author": {
"ignore_above": 1024,
"type": "keyword"
},
"comment": {
"norms": false,
"type": "text"
},
"config_owner": {
"norms": false,
"type": "text"
},
"duration": {
"type": "long"
},
"end_time": {
"type": "date"
},
"entry_time": {
"type": "date"
},
"fixed": {
"type": "boolean"
},
"host_name": {
"ignore_above": 1024,
"type": "keyword"
},
"legacy_id": {
"type": "long"
},
"name": {
"ignore_above": 1024,
"type": "keyword"
},
"package": {
"ignore_above": 1024,
"type": "keyword"
},
"scheduled_by": {
"norms": false,
"type": "text"
},
"service_name": {
"ignore_above": 1024,
"type": "keyword"
},
"start_time": {
"type": "date"
},
"templates": {
"norms": false,
"type": "text"
},
"trigger_time": {
"type": "date"
},
"triggered_by": {
"norms": false,
"type": "text"
},
"triggers": {
"norms": false,
"type": "text"
},
"type": {
"ignore_above": 1024,
"type": "keyword"
},
"version": {
"ignore_above": 1024,
"type": "keyword"
},
"was_cancelled": {
"type": "boolean"
},
"zone": {
"ignore_above": 1024,
"type": "keyword"
}
}
},
"expiry": {
"type": "date"
},
"fields": {
"properties": {}
},
"host": {
"ignore_above": 1024,
"type": "keyword"
},
"meta": {
"properties": {
"cloud": {
"properties": {
"availability_zone": {
"ignore_above": 1024,
"type": "keyword"
},
"instance_id": {
"ignore_above": 1024,
"type": "keyword"
},
"machine_type": {
"ignore_above": 1024,
"type": "keyword"
},
"project_id": {
"ignore_above": 1024,
"type": "keyword"
},
"provider": {
"ignore_above": 1024,
"type": "keyword"
},
"region": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
}
},
"notification_type": {
"ignore_above": 1024,
"type": "keyword"
},
"notify": {
"ignore_above": 1024,
"type": "keyword"
},
"service": {
"ignore_above": 1024,
"type": "keyword"
},
"state": {
"type": "long"
},
"state_type": {
"type": "long"
},
"status": {
"properties": {
"active_host_checks": {
"type": "long"
},
"active_host_checks_15min": {
"type": "long"
},
"active_host_checks_1min": {
"type": "long"
},
"active_host_checks_5min": {
"type": "long"
},
"active_service_checks": {
"type": "long"
},
"active_service_checks_15min": {
"type": "long"
},
"active_service_checks_1min": {
"type": "long"
},
"active_service_checks_5min": {
"type": "long"
},
"api": {
"properties": {
"identity": {
"ignore_above": 1024,
"type": "keyword"
},
"num_conn_endpoints": {
"type": "long"
},
"num_endpoints": {
"type": "long"
},
"num_not_conn_endpoints": {
"type": "long"
},
"zones": {
"properties": {
"demo": {
"properties": {
"client_log_lag": {
"type": "long"
},
"connected": {
"type": "boolean"
},
"endpoints": {
"norms": false,
"type": "text"
},
"parent_zone": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
}
}
}
},
"avg_execution_time": {
"type": "long"
},
"avg_latency": {
"type": "long"
},
"checkercomponent": {
"properties": {
"checker": {
"properties": {
"idle": {
"type": "long"
},
"pending": {
"type": "long"
}
}
}
}
},
"filelogger": {
"properties": {
"main-log": {
"type": "long"
}
}
},
"icingaapplication": {
"properties": {
"app": {
"properties": {
"enable_event_handlers": {
"type": "boolean"
},
"enable_flapping": {
"type": "boolean"
},
"enable_host_checks": {
"type": "boolean"
},
"enable_notifications": {
"type": "boolean"
},
"enable_perfdata": {
"type": "boolean"
},
"enable_service_checks": {
"type": "boolean"
},
"node_name": {
"ignore_above": 1024,
"type": "keyword"
},
"pid": {
"type": "long"
},
"program_start": {
"type": "long"
},
"version": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
}
},
"idomysqlconnection": {
"properties": {
"ido-mysql": {
"properties": {
"connected": {
"type": "boolean"
},
"instance_name": {
"ignore_above": 1024,
"type": "keyword"
},
"query_queue_items": {
"type": "long"
},
"version": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
}
},
"max_execution_time": {
"type": "long"
},
"max_latency": {
"type": "long"
},
"min_execution_time": {
"type": "long"
},
"min_latency": {
"type": "long"
},
"notificationcomponent": {
"properties": {
"notification": {
"type": "long"
}
}
},
"num_hosts_acknowledged": {
"type": "long"
},
"num_hosts_down": {
"type": "long"
},
"num_hosts_flapping": {
"type": "long"
},
"num_hosts_in_downtime": {
"type": "long"
},
"num_hosts_pending": {
"type": "long"
},
"num_hosts_unreachable": {
"type": "long"
},
"num_hosts_up": {
"type": "long"
},
"num_services_acknowledged": {
"type": "long"
},
"num_services_critical": {
"type": "long"
},
"num_services_flapping": {
"type": "long"
},
"num_services_in_downtime": {
"type": "long"
},
"num_services_ok": {
"type": "long"
},
"num_services_pending": {
"type": "long"
},
"num_services_unknown": {
"type": "long"
},
"num_services_unreachable": {
"type": "long"
},
"num_services_warning": {
"type": "long"
},
"passive_host_checks": {
"type": "long"
},
"passive_host_checks_15min": {
"type": "long"
},
"passive_host_checks_1min": {
"type": "long"
},
"passive_host_checks_5min": {
"type": "long"
},
"passive_service_checks": {
"type": "long"
},
"passive_service_checks_15min": {
"type": "long"
},
"passive_service_checks_1min": {
"type": "long"
},
"passive_service_checks_5min": {
"type": "long"
},
"uptime": {
"type": "long"
}
}
},
"tags": {
"ignore_above": 1024,
"type": "keyword"
},
"text": {
"norms": false,
"type": "text"
},
"timestamp": {
"type": "date"
},
"type": {
"ignore_above": 1024,
"type": "keyword"
},
"users": {
"norms": false,
"type": "text"
}
}
}
},
"order": 0,
"settings": {
"index.mapping.total_fields.limit": 10000,
"index.refresh_interval": "5s"
},
"template": "icingabeat-*"
}

View File

@ -16,51 +16,61 @@ icingabeat:
# Password of the user
password: "icinga"
# Skip SSL verification
skip_ssl_verify: false
# Configure SSL verification. If `false` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `true`.
ssl.verify: true
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
########################### Icingabeat Eventstream ##########################
#
# Icingabeat supports capturing an eventstream and periodically polling the
# Icinga status data.
eventstream:
#
# Decide which events to receive from the event stream.
# The following event stream types are available:
#
# * CheckResult
# * StateChange
# * Notification
# * AcknowledgementSet
# * AcknowledgementCleared
# * CommentAdded
# * CommentRemoved
# * DowntimeAdded
# * DowntimeRemoved
# * DowntimeStarted
# * DowntimeTriggered
#
# To disable eventstream, leave the types empty or comment out the option
types:
- CheckResult
- StateChange
# Event streams can be filtered by attributes using the prefix 'event.'
#
# Example for the CheckResult type with the exit_status set to 2:
# filter: "event.check_result.exit_status==2"
#
# Example for the CheckResult type with the service matching the string
# pattern "mysql*":
# filter: 'match("mysql*", event.service)'
#
# To disable filtering, set an empty string or comment out the filter option
filter: ""
# Decide which events to receive from the event stream.
# The following event stream types are available:
#
# * CheckResult
# * StateChange
# * Notification
# * AcknowledgementSet
# * AcknowledgementCleared
# * CommentAdded
# * CommentRemoved
# * DowntimeAdded
# * DowntimeRemoved
# * DowntimeStarted
# * DowntimeTriggered
#
# To disable eventstream, leave the types empty or comment out the option
eventstream.types:
- CheckResult
- StateChange
# Event streams can be filtered by attributes using the prefix 'event.'
#
# Example for the CheckResult type with the exit_status set to 2:
# filter: "event.check_result.exit_status==2"
#
# Example for the CheckResult type with the service matching the string
# pattern "mysql*":
# filter: 'match("mysql*", event.service)'
#
# To disable filtering, set an empty string or comment out the filter option
eventstream.filter: ""
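# A combined example (illustrative): restrict the stream to failed checks
# of services whose names start with "mysql":
# eventstream.filter: 'match("mysql*", event.service) && event.check_result.exit_status==2'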
# Defines how fast to reconnect to the API on connection loss
retry_interval: 10s
eventstream.retry_interval: 10s
statuspoller:
# Interval at which the status API is called. Set to 0 to disable polling.
interval: 60s
########################### Icingabeat Statuspoller #########################
#
# Icingabeat can collect status information about Icinga 2 periodically. Set
# an interval at which the status API should be called. Set to 0 to disable
# polling.
statuspoller.interval: 60s
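# The statuspoller reads the regular Icinga 2 status API. For manual
# inspection the same endpoint can be queried directly (host, port and
# credentials here are illustrative):
#   curl -k -s -u icingabeat:icinga 'https://localhost:5665/v1/status'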
#================================ General =====================================
@ -77,10 +87,47 @@ icingabeat:
#fields:
# env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify an additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
#============================= Elastic Cloud ==================================
# These settings simplify using icingabeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:

View File

@ -3,14 +3,11 @@ package main
import (
"os"
"github.com/elastic/beats/libbeat/beat"
"github.com/icinga/icingabeat/beater"
"github.com/icinga/icingabeat/cmd"
)
func main() {
err := beat.Run("icingabeat", "", beater.New)
if err != nil {
if err := cmd.RootCmd.Execute(); err != nil {
os.Exit(1)
}
}

Binary file not shown.

After

Width:  |  Height:  |  Size: 180 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 149 KiB

BIN
screenshots/status.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 183 KiB

View File

@ -2,6 +2,7 @@ import sys
sys.path.append('../../vendor/github.com/elastic/beats/libbeat/tests/system')
from beat.beat import TestCase
class BaseTest(TestCase):
@classmethod

View File

@ -10,10 +10,10 @@ class Test(BaseTest):
Basic test starting and exiting Icingabeat normally
"""
self.render_config_template(
path=os.path.abspath(self.working_dir) + "/log/*"
path=os.path.abspath(self.working_dir) + "/log/*"
)
icingabeat_proc = self.start_beat()
self.wait_until( lambda: self.log_contains("icingabeat is running"))
self.wait_until(lambda: self.log_contains("icingabeat is running"))
exit_code = icingabeat_proc.kill_and_wait()
assert exit_code == 0

View File

@ -6,8 +6,8 @@ os: Windows Server 2012 R2
# Environment variables
environment:
GOROOT: c:\go1.7.4
GOPATH: c:\gopath
GVM_DL: https://github.com/andrewkroh/gvm/releases/download/v0.0.1/gvm-windows-amd64.exe
PYWIN_DL: https://beats-files.s3.amazonaws.com/deps/pywin32-220.win32-py2.7.exe
matrix:
- PROJ: github.com\elastic\beats\metricbeat
@ -20,25 +20,29 @@ environment:
# Custom clone folder (variables are not expanded here).
clone_folder: c:\gopath\src\github.com\elastic\beats
# Cache mingw install until appveyor.yml is modified.
# Cache files until appveyor.yml is modified.
cache:
- C:\ProgramData\chocolatey\bin -> .appveyor.yml
- C:\ProgramData\chocolatey\lib -> .appveyor.yml
- C:\go1.7.4 -> .appveyor.yml
- C:\Users\appveyor\.gvm -> .go-version
- C:\Windows\System32\gvm.exe -> .appveyor.yml
- C:\tools\mingw64 -> .appveyor.yml
- C:\pywin_inst.exe -> .appveyor.yml
# Scripts that run after cloning repository
install:
- ps: c:\gopath\src\github.com\elastic\beats\libbeat\scripts\install-go.ps1 -version 1.7.4
- set PATH=%GOROOT%\bin;%PATH%
# AppVeyor installed mingw is 32-bit only.
- ps: >-
if(!(Test-Path "C:\Windows\System32\gvm.exe")) {
wget "$env:GVM_DL" -Outfile C:\Windows\System32\gvm.exe
}
- ps: gvm --format=powershell $(Get-Content .go-version) | Invoke-Expression
# AppVeyor's installed mingw is 32-bit only, so install the 64-bit version.
- ps: >-
if(!(Test-Path "C:\tools\mingw64\bin\gcc.exe")) {
cinst mingw > mingw-install.txt
Push-AppveyorArtifact mingw-install.txt
}
- set PATH=C:\tools\mingw64\bin;%GOROOT%\bin;%PATH%
- set PATH=C:\tools\mingw64\bin;%PATH%
- set PATH=%GOPATH%\bin;%PATH%
- go install github.com/elastic/beats/vendor/github.com/pierrre/gotestcover
- go version
@ -51,7 +55,7 @@ install:
- set PYTHONPATH=C:\Python27
- set PATH=%PYTHONPATH%;%PYTHONPATH%\Scripts;%PATH%
- python --version
- pip install jinja2 nose nose-timer PyYAML redis elasticsearch
- pip install six jinja2 nose nose-timer PyYAML redis elasticsearch
- easy_install C:/pywin_inst.exe
# To run your custom scripts instead of automatic MSBuild

View File

@ -7,6 +7,10 @@ end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
[.go]
indent_size = 4
indent_style = tab
[*.json]
indent_size = 4
indent_style = space

View File

@ -1,18 +1,21 @@
# Directories
/.vagrant
/.idea
/.vscode
/build
/*/data
/*/logs
/*/_meta/kibana/index-pattern
/*/fields.yml
/*/*.template*.json
# Files
.DS_Store
/glide.lock
/beats.iml
*.dev.yml
*.generated.yml
coverage.out
.python-version
beat.db
# Editor swap files
*.swp

1
vendor/github.com/elastic/beats/.go-version generated vendored Normal file
View File

@ -0,0 +1 @@
1.9.2

13
vendor/github.com/elastic/beats/.pylintrc generated vendored Normal file
View File

@ -0,0 +1,13 @@
[MESSAGES CONTROL]
disable=too-many-lines,too-many-public-methods,too-many-statements
[BASIC]
method-rgx=[a-z_][a-z0-9_]{2,50}$
[FORMAT]
max-line-length=120

View File

@ -12,82 +12,118 @@ env:
global:
# Cross-compile for amd64 only to speed up testing.
- GOX_FLAGS="-arch amd64"
- DOCKER_COMPOSE_VERSION: 1.9.0
- &go_version 1.7.4
- DOCKER_COMPOSE_VERSION=1.11.1
- GO_VERSION="$(cat .go-version)"
- TRAVIS_ETCD_VERSION=v3.2.8
matrix:
jobs:
include:
# General checks
- os: linux
env: TARGETS="check"
go: *go_version
go: $GO_VERSION
stage: check
# Filebeat
- os: linux
env: TARGETS="-C filebeat testsuite"
go: *go_version
go: $GO_VERSION
stage: test
- os: osx
env: TARGETS="TEST_ENVIRONMENT=0 -C filebeat testsuite"
go: *go_version
go: $GO_VERSION
stage: test
# Heartbeat
- os: linux
env: TARGETS="-C heartbeat testsuite"
go: *go_version
go: $GO_VERSION
stage: test
- os: osx
env: TARGETS="TEST_ENVIRONMENT=0 -C heartbeat testsuite"
go: *go_version
go: $GO_VERSION
stage: test
# Auditbeat
- os: linux
env: TARGETS="-C auditbeat testsuite"
go: $GO_VERSION
stage: test
# Libbeat
- os: linux
env: TARGETS="-C libbeat testsuite"
go: *go_version
go: $GO_VERSION
stage: test
- os: linux
env: TARGETS="-C libbeat crosscompile"
go: *go_version
go: $GO_VERSION
stage: test
# Metricbeat
- os: linux
env: TARGETS="-C metricbeat testsuite"
go: *go_version
go: $GO_VERSION
stage: test
- os: osx
env: TARGETS="TEST_ENVIRONMENT=0 -C metricbeat testsuite"
go: *go_version
go: $GO_VERSION
stage: test
- os: linux
env: TARGETS="-C metricbeat crosscompile"
go: *go_version
go: $GO_VERSION
stage: test
# Packetbeat
- os: linux
env: TARGETS="-C packetbeat testsuite"
go: *go_version
go: $GO_VERSION
stage: test
# Winlogbeat
- os: linux
env: TARGETS="-C winlogbeat crosscompile"
go: *go_version
# Dashboards
- os: linux
env: TARGETS="-C libbeat/dashboards"
go: *go_version
go: $GO_VERSION
stage: test
# Generators
- os: linux
env: TARGETS="-C generator/metricbeat test"
go: *go_version
go: $GO_VERSION
stage: test
- os: linux
env: TARGETS="-C generator/beat test"
go: *go_version
go: $GO_VERSION
stage: test
# Kubernetes
- os: linux
install: deploy/kubernetes/.travis/setup.sh
env:
- TARGETS="-C deploy/kubernetes test"
- TRAVIS_KUBE_VERSION=v1.6.11
stage: test
- os: linux
install: deploy/kubernetes/.travis/setup.sh
env:
- TARGETS="-C deploy/kubernetes test"
- TRAVIS_KUBE_VERSION=v1.7.7
stage: test
- os: linux
install: deploy/kubernetes/.travis/setup.sh
env:
- TARGETS="-C deploy/kubernetes test"
- TRAVIS_KUBE_VERSION=v1.8.0
stage: test
addons:
apt:
packages:
- python-virtualenv
- libpcap-dev
- geoip-database
before_install:
- python --version
- umask 022
- chmod -R go-w $GOPATH/src/github.com/elastic/beats
# Docker-compose installation
@ -104,11 +140,15 @@ script:
notifications:
slack:
on_success: change
on_failure: always
on_pull_requests: false
rooms:
secure: "e25J5puEA31dOooTI4T+K+zrTs8XeWIGq2cgmiPt9u/g7eqWeQj1UJnVsr8GOu1RPDyuJZJHXqfrvuOYJTdHzXbwjD0JTbwwVVZMkkZW2SWZHG46HCXPiucjWXEr3hXJKBJDDpIx6VxrN7r17dejv1biQ8QuEFZfiB1H8kbH/ho="
after_success:
# Copy full.cov to coverage.txt because codecov.io requires this file
- test -f auditbeat/build/coverage/full.cov && bash <(curl -s https://codecov.io/bash) -f auditbeat/build/coverage/full.cov
- test -f filebeat/build/coverage/full.cov && bash <(curl -s https://codecov.io/bash) -f filebeat/build/coverage/full.cov
- test -f heartbeat/build/coverage/full.cov && bash <(curl -s https://codecov.io/bash) -f heartbeat/build/coverage/full.cov
- test -f libbeat/build/coverage/full.cov && bash <(curl -s https://codecov.io/bash) -f libbeat/build/coverage/full.cov

File diff suppressed because it is too large Load Diff

View File

@ -11,108 +11,7 @@ There are many ways to contribute, from writing tutorials or blog posts,
improving the documentation, submitting bug reports and feature requests or
writing code for implementing a whole new protocol.
If you have a bugfix or new feature that you would like to contribute, please
start by opening a topic on the [forums](https://discuss.elastic.co/c/beats).
It may be that somebody is already working on it, or that there are particular
issues that you should know about before implementing the change.
We enjoy working with contributors to get their code accepted. There are many
approaches to fixing a problem and it is important to find the best approach
before writing too much code.
The process for contributing to any of the Elastic repositories is similar.
## Contribution Steps
1. Please make sure you have signed our [Contributor License
Agreement](https://www.elastic.co/contributor-agreement/). We are not
asking you to assign copyright to us, but to give us the right to distribute
your code without restriction. We ask this of all contributors in order to
assure our users of the origin and continuing existence of the code. You
only need to sign the CLA once.
2. Send a pull request! Push your changes to your fork of the repository and
[submit a pull
request](https://help.github.com/articles/using-pull-requests). In the pull
request, describe what your changes do and mention any bugs/issues related
to the pull request.
## Adding a new Beat
If you want to create a new Beat, please read our [developer
guide](https://www.elastic.co/guide/en/beats/libbeat/current/new-beat.html).
You don't need to submit the code to this repository. Most new Beats start in
their own repository and just make use of the libbeat packages. After you have
a working Beat that you'd like to share with others, open a PR to add it to our
list of [community
Beats](https://github.com/elastic/beats/blob/master/libbeat/docs/communitybeats.asciidoc).
## Setting up your dev environment
The Beats are Go programs, so install the latest version of
[golang](http://golang.org/) if you don't have it already. The current Go version
used for development is Golang 1.7.4.
The location where you clone is important. Please clone under the source
directory of your `GOPATH`. If you don't have `GOPATH` already set, you can
simply set it to your home directory (`export GOPATH=$HOME`).
$ mkdir -p ${GOPATH}/src/github.com/elastic
$ cd ${GOPATH}/src/github.com/elastic
$ git clone https://github.com/elastic/beats.git
Note: If you have multiple go paths use `${GOPATH%%:*}` instead of `${GOPATH}`.
Then you can compile a particular Beat by using the Makefile. For example, for
Packetbeat:
$ cd beats/packetbeat
$ make
Some of the Beats might have extra development requirements, in which case you'll find a
CONTRIBUTING.md file in the Beat directory.
## Update scripts
The Beats use a variety of scripts based on Python to generate configuration files
and documentation. The command used for this is:
$ make update
This command has the following dependencies:
* Python >=2.7.9
* [virtualenv](https://virtualenv.pypa.io/en/latest/) for Python
Virtualenv can be installed with the command `easy_install virtualenv` or `pip install virtualenv`.
More details can be found [here](https://virtualenv.pypa.io/en/latest/installation.html).
## Testing
You can run the whole testsuite with the following command:
$ make testsuite
Running the testsuite has the following requirements:
* Python >=2.7.9
* Docker >=1.10.0
* Docker-compose >= 1.8.0
## Documentation
The documentation for each Beat is located under {beatname}/docs and is based on asciidoc. After changing the docs,
you should verify that the docs are still building to avoid breaking the automated docs build. To build the docs run
`make docs`. If you want to preview the docs for a specific Beat, run `make docs-preview`
inside the folder for the Beat. This will automatically open your browser with the docs for preview.
## Dependencies
To manage the `vendor/` folder we use
[glide](https://github.com/Masterminds/glide), which uses
[glide.yaml](glide.yaml) as a manifest file for the dependencies. Please see
the glide documentation on how to add or update vendored dependencies.
If you want to contribute to the Beats project, you can start by reading
the [contributing guidelines](https://www.elastic.co/guide/en/beats/devguide/current/beats-contributing.html)
in the _Beats Developer Guide_.

View File

@ -1,15 +0,0 @@
FROM golang:1.7.4
MAINTAINER Nicolas Ruflin <ruflin@elastic.co>
RUN set -x && \
apt-get update && \
apt-get install -y netcat && \
apt-get clean
COPY libbeat/scripts/docker-entrypoint.sh /entrypoint.sh
RUN mkdir -p /etc/pki/tls/certs
COPY testing/environments/docker/logstash/pki/tls/certs/logstash.crt /etc/pki/tls/certs/logstash.crt
# Create a copy of the repository inside the container.
COPY . /go/src/github.com/elastic/beats/

View File

@ -1,13 +0,0 @@
Copyright (c) 2012–2016 Elasticsearch <http://www.elastic.co>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

13
vendor/github.com/elastic/beats/LICENSE.txt generated vendored Normal file
View File

@ -0,0 +1,13 @@
Copyright (c) 2012–2017 Elastic <http://www.elastic.co>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -1,102 +1,125 @@
BUILD_DIR=build
COVERAGE_DIR=${BUILD_DIR}/coverage
BEATS=packetbeat filebeat winlogbeat metricbeat heartbeat
PROJECTS=libbeat ${BEATS}
BUILD_DIR=$(CURDIR)/build
COVERAGE_DIR=$(BUILD_DIR)/coverage
BEATS=packetbeat filebeat winlogbeat metricbeat heartbeat auditbeat
PROJECTS=libbeat $(BEATS)
PROJECTS_ENV=libbeat filebeat metricbeat
SNAPSHOT?=yes
PYTHON_ENV?=$(BUILD_DIR)/python-env
VIRTUALENV_PARAMS?=
FIND=find . -type f -not -path "*/vendor/*" -not -path "*/build/*" -not -path "*/.git/*"
GOLINT=golint
GOLINT_REPO=github.com/golang/lint/golint
REVIEWDOG=reviewdog
REVIEWDOG_OPTIONS?=-diff "git diff master"
REVIEWDOG_REPO=github.com/haya14busa/reviewdog/cmd/reviewdog
# Runs complete testsuites (unit, system, integration) for all beats with coverage and race detection.
# Also it builds the docs and the generators
.PHONY: testsuite
testsuite:
$(foreach var,$(PROJECTS),$(MAKE) -C $(var) testsuite || exit 1;)
#$(MAKE) -C generator test
@$(foreach var,$(PROJECTS),$(MAKE) -C $(var) testsuite || exit 1;)
.PHONY: setup-commit-hook
setup-commit-hook:
@cp script/pre_commit.sh .git/hooks/pre-commit
@chmod 751 .git/hooks/pre-commit
stop-environments:
$(foreach var,$(PROJECTS_ENV),$(MAKE) -C $(var) stop-environment || exit 0;)
@$(foreach var,$(PROJECTS_ENV),$(MAKE) -C $(var) stop-environment || exit 0;)
# Runs unit and system tests without coverage and race detection.
.PHONY: test
test:
$(foreach var,$(PROJECTS),$(MAKE) -C $(var) test || exit 1;)
@$(foreach var,$(PROJECTS),$(MAKE) -C $(var) test || exit 1;)
# Runs unit tests without coverage and race detection.
.PHONY: unit
unit:
$(foreach var,$(PROJECTS),$(MAKE) -C $(var) unit || exit 1;)
@$(foreach var,$(PROJECTS),$(MAKE) -C $(var) unit || exit 1;)
.PHONY: coverage-report
coverage-report:
mkdir -p ${COVERAGE_DIR}
# Writes atomic mode on top of file
echo 'mode: atomic' > ./${COVERAGE_DIR}/full.cov
# Collects all coverage files and skips top line with mode
-tail -q -n +2 ./filebeat/${COVERAGE_DIR}/*.cov >> ./${COVERAGE_DIR}/full.cov
-tail -q -n +2 ./packetbeat/${COVERAGE_DIR}/*.cov >> ./${COVERAGE_DIR}/full.cov
-tail -q -n +2 ./winlogbeat/${COVERAGE_DIR}/*.cov >> ./${COVERAGE_DIR}/full.cov
-tail -q -n +2 ./libbeat/${COVERAGE_DIR}/*.cov >> ./${COVERAGE_DIR}/full.cov
go tool cover -html=./${COVERAGE_DIR}/full.cov -o ${COVERAGE_DIR}/full.html
@mkdir -p $(COVERAGE_DIR)
@echo 'mode: atomic' > ./$(COVERAGE_DIR)/full.cov
@# Collects all coverage files and skips top line with mode
@$(foreach var,$(PROJECTS),tail -q -n +2 ./$(var)/$(COVERAGE_DIR)/*.cov >> ./$(COVERAGE_DIR)/full.cov || true;)
@go tool cover -html=./$(COVERAGE_DIR)/full.cov -o $(COVERAGE_DIR)/full.html
@echo "Generated coverage report $(COVERAGE_DIR)/full.html"
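The `coverage-report` recipe above merges the per-project Go cover profiles: the `mode:` header is written exactly once, then `tail -q -n +2` appends each profile's data lines while skipping every file's own header. A minimal sketch of that merge step (file names and profile lines are illustrative):

```shell
# Two per-project cover profiles, each carrying its own mode header.
printf 'mode: atomic\npkg/a.go:1.1,2.2 1 1\n' > a.cov
printf 'mode: atomic\npkg/b.go:1.1,2.2 1 0\n' > b.cov

# Write the header once, then append only the data lines of each file.
echo 'mode: atomic' > full.cov
tail -q -n +2 a.cov b.cov >> full.cov

cat full.cov
```

The merged `full.cov` is a valid single profile that `go tool cover -html` can render, which is what the recipe does next.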
.PHONY: update
update:
$(foreach var,$(PROJECTS),$(MAKE) -C $(var) update || exit 1;)
update: notice
@$(foreach var,$(PROJECTS),$(MAKE) -C $(var) update || exit 1;)
@$(MAKE) -C deploy/kubernetes all
.PHONY: clean
clean:
rm -rf build
$(foreach var,$(PROJECTS),$(MAKE) -C $(var) clean || exit 1;)
$(MAKE) -C generator clean
@rm -rf build
@$(foreach var,$(PROJECTS),$(MAKE) -C $(var) clean || exit 1;)
@$(MAKE) -C generator clean
# Cleans up the vendor directory from unnecessary files
# This should always be run after updating the dependencies
.PHONY: clean-vendor
clean-vendor:
sh script/clean_vendor.sh
@sh script/clean_vendor.sh
.PHONY: check
check:
$(foreach var,$(PROJECTS),$(MAKE) -C $(var) check || exit 1;)
# Validate that all updates were committed
$(MAKE) update
git update-index --refresh
git diff-index --exit-code HEAD --
check: python-env
@$(foreach var,$(PROJECTS),$(MAKE) -C $(var) check || exit 1;)
@# Checks also python files which are not part of the beats
@$(FIND) -name *.py -exec $(PYTHON_ENV)/bin/autopep8 -d --max-line-length 120 {} \; | (! grep . -q) || (echo "Code differs from autopep8's style" && false)
@# Validate that all updates were committed
@$(MAKE) update
@git diff | cat
@git update-index --refresh
@git diff-index --exit-code HEAD --
# Corrects spelling errors
.PHONY: misspell
misspell:
go get github.com/client9/misspell
# Ignore Kibana files (.json)
$(FIND) -not -path "*.json" -name '*' -exec misspell -w {} \;
.PHONY: fmt
fmt:
$(foreach var,$(PROJECTS),$(MAKE) -C $(var) fmt || exit 1;)
fmt: python-env
@$(foreach var,$(PROJECTS),$(MAKE) -C $(var) fmt || exit 1;)
@# Cleans also python files which are not part of the beats
@$(FIND) -name "*.py" -exec $(PYTHON_ENV)/bin/autopep8 --in-place --max-line-length 120 {} \;
.PHONY: simplify
simplify:
$(foreach var,$(PROJECTS),$(MAKE) -C $(var) simplify || exit 1;)
.PHONY: lint
lint:
@go get $(GOLINT_REPO) $(REVIEWDOG_REPO)
$(REVIEWDOG) $(REVIEWDOG_OPTIONS)
# Collects all dashboards and generates dashboard folder for https://github.com/elastic/beats-dashboards/tree/master/dashboards
.PHONY: beats-dashboards
beats-dashboards:
mkdir -p build/dashboards
$(foreach var,$(BEATS),cp -r $(var)/_meta/kibana/ build/dashboards/$(var) || exit 1;)
@mkdir -p build/dashboards
@$(foreach var,$(BEATS),cp -r $(var)/_meta/kibana/ build/dashboards/$(var) || exit 1;)
# Builds the documents for each beat
.PHONY: docs
docs:
sh libbeat/scripts/build_docs.sh ${PROJECTS}
@$(foreach var,$(PROJECTS),BUILD_DIR=${BUILD_DIR} $(MAKE) -C $(var) docs || exit 1;)
sh ./script/build_docs.sh dev-guide github.com/elastic/beats/docs/devguide ${BUILD_DIR}
.PHONY: package
package: update beats-dashboards
$(foreach var,$(BEATS),SNAPSHOT=$(SNAPSHOT) $(MAKE) -C $(var) package || exit 1;)
@$(foreach var,$(BEATS),SNAPSHOT=$(SNAPSHOT) $(MAKE) -C $(var) package || exit 1;)
# build the dashboards package
echo "Start building the dashboards package"
mkdir -p build/upload/
BUILD_DIR=${shell pwd}/build SNAPSHOT=$(SNAPSHOT) $(MAKE) -C dev-tools/packer package-dashboards ${shell pwd}/build/upload/build_id.txt
mv build/upload build/dashboards-upload
@echo "Start building the dashboards package"
@mkdir -p build/upload/
@BUILD_DIR=${BUILD_DIR} SNAPSHOT=$(SNAPSHOT) $(MAKE) -C dev-tools/packer package-dashboards ${BUILD_DIR}/upload/build_id.txt
@mv build/upload build/dashboards-upload
# Copy build files over to top build directory
mkdir -p build/upload/
$(foreach var,$(BEATS),cp -r $(var)/build/upload/ build/upload/$(var) || exit 1;)
cp -r build/dashboards-upload build/upload/dashboards
# Run tests on the generated packages.
go test ./dev-tools/package_test.go -files "${shell pwd}/build/upload/*/*"
@# Copy build files over to top build directory
@mkdir -p build/upload/
@$(foreach var,$(BEATS),cp -r $(var)/build/upload/ build/upload/$(var) || exit 1;)
@cp -r build/dashboards-upload build/upload/dashboards
@# Run tests on the generated packages.
@go test ./dev-tools/package_test.go -files "${BUILD_DIR}/upload/*/*"
# Upload nightly builds to S3
.PHONY: upload-nightlies-s3
@ -116,5 +139,17 @@ upload-release:
aws s3 cp --recursive --acl public-read build/upload s3://download.elasticsearch.org/beats/
.PHONY: notice
notice:
python dev-tools/generate_notice.py .
notice: python-env
@echo "Generating NOTICE"
@$(PYTHON_ENV)/bin/python dev-tools/generate_notice.py .
# Sets up the virtual python environment
.PHONY: python-env
python-env:
@test -d $(PYTHON_ENV) || virtualenv $(VIRTUALENV_PARAMS) $(PYTHON_ENV)
@$(PYTHON_ENV)/bin/pip install -q --upgrade pip autopep8 six
# Tests if apm works with the current code
.PHONY: test-apm
test-apm:
sh ./script/test_apm.sh

1994
vendor/github.com/elastic/beats/NOTICE generated vendored

File diff suppressed because it is too large Load Diff

3959
vendor/github.com/elastic/beats/NOTICE.txt generated vendored Normal file

File diff suppressed because it is too large Load Diff

View File

@ -27,7 +27,7 @@ Beat | Description
[Winlogbeat](https://github.com/elastic/beats/tree/master/winlogbeat) | Fetches and ships Windows Event logs
In addition to the above Beats, which are officially supported by
[Elastic](elastic.co), the
[Elastic](https://elastic.co), the
community has created a set of other Beats that make use of libbeat but live
outside of this Github repository. We maintain a list of community Beats
[here](https://www.elastic.co/guide/en/beats/libbeat/master/community-beats.html).
@ -67,7 +67,7 @@ Please start by reading our [CONTRIBUTING](CONTRIBUTING.md) file.
If you are creating a new Beat, you don't need to submit the code to this
repository. You can simply start working in a new repository and make use of
the libbeat packages, by following our [developer
guide](https://www.elastic.co/guide/en/beats/libbeat/master/new-beat.html).
guide](https://www.elastic.co/guide/en/beats/libbeat/current/new-beat.html).
After you have a working prototype, open a pull request to add your Beat to the
list of [community
Beats](https://github.com/elastic/beats/blob/master/libbeat/docs/communitybeats.asciidoc).

View File

@ -55,6 +55,15 @@ cd ~/go/src/github.com/elastic
if [ -d "/vagrant" ]; then ln -s /vagrant beats; fi
SCRIPT
# Linux GVM
$linuxGvmProvision = <<SCRIPT
mkdir -p ~/bin
curl -sL -o ~/bin/gvm https://github.com/andrewkroh/gvm/releases/download/v0.0.1/gvm-linux-amd64
chmod +x ~/bin/gvm
echo 'export PATH=~/bin:$PATH' >> ~/.bash_profile
echo 'eval "$(gvm 1.9.2)"' >> ~/.bash_profile
SCRIPT
Vagrant.configure(2) do |config|
# Windows Server 2012 R2
@ -92,6 +101,7 @@ Vagrant.configure(2) do |config|
config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", :nfs => true, disabled: true
#config.vm.network "private_network", ip: "192.168.135.18"
freebsd.vm.hostname = "beats-tester"
freebsd.vm.provision "shell", inline: $unixProvision, privileged: false
end
@ -109,6 +119,18 @@ Vagrant.configure(2) do |config|
openbsd.vm.provision "shell", inline: $unixProvision, privileged: false
end
# CentOS 7
config.vm.define "centos7", primary: true do |centos7|
#centos7.vm.box = "http://cloud.centos.org/centos/7/vagrant/x86_64/images/CentOS-7-x86_64-Vagrant-1706_02.VirtualBox.box"
centos7.vm.box = "ubuntu/precise64"
centos7.vm.network :forwarded_port, guest: 22, host: 2226, id: "ssh", auto_correct: true
centos7.vm.provision "shell", inline: $unixProvision, privileged: false
centos7.vm.provision "shell", inline: $linuxGvmProvision, privileged: false
centos7.vm.synced_folder ".", "/vagrant", type: "virtualbox"
end
end
# -*- mode: ruby -*-

10
vendor/github.com/elastic/beats/auditbeat/.gitignore generated vendored Normal file
View File

@ -0,0 +1,10 @@
build
_meta/kibana
_meta/beat.yml
_meta/beat.reference.yml
module/*/_meta/config.yml
/auditbeat
/auditbeat.test
/docs/html_docs

77
vendor/github.com/elastic/beats/auditbeat/Makefile generated vendored Normal file
View File

@ -0,0 +1,77 @@
BEAT_NAME=auditbeat
BEAT_TITLE=Auditbeat
BEAT_DESCRIPTION=Audit the activities of users and processes on your system.
SYSTEM_TESTS=false
TEST_ENVIRONMENT=false

# Path to the libbeat Makefile
-include ../libbeat/scripts/Makefile

# This is called by the beats packer before building starts
.PHONY: before-build
before-build:
	@cat ${ES_BEATS}/auditbeat/_meta/common.p1.yml \
		<(go run scripts/generate_config.go -os windows -concat) \
		${ES_BEATS}/auditbeat/_meta/common.p2.yml \
		${ES_BEATS}/libbeat/_meta/config.yml > \
		${PREFIX}/${BEAT_NAME}-win.yml
	@cat ${ES_BEATS}/auditbeat/_meta/common.reference.yml \
		<(go run scripts/generate_config.go -os windows -concat) \
		${ES_BEATS}/libbeat/_meta/config.reference.yml > \
		${PREFIX}/${BEAT_NAME}-win.reference.yml

	@cat ${ES_BEATS}/auditbeat/_meta/common.p1.yml \
		<(go run scripts/generate_config.go -os darwin -concat) \
		${ES_BEATS}/auditbeat/_meta/common.p2.yml \
		${ES_BEATS}/libbeat/_meta/config.yml > \
		${PREFIX}/${BEAT_NAME}-darwin.yml
	@cat ${ES_BEATS}/auditbeat/_meta/common.reference.yml \
		<(go run scripts/generate_config.go -os darwin -concat) \
		${ES_BEATS}/libbeat/_meta/config.reference.yml > \
		${PREFIX}/${BEAT_NAME}-darwin.reference.yml

	@cat ${ES_BEATS}/auditbeat/_meta/common.p1.yml \
		<(go run scripts/generate_config.go -os linux -concat) \
		${ES_BEATS}/auditbeat/_meta/common.p2.yml \
		${ES_BEATS}/libbeat/_meta/config.yml > \
		${PREFIX}/${BEAT_NAME}-linux.yml
	@cat ${ES_BEATS}/auditbeat/_meta/common.reference.yml \
		<(go run scripts/generate_config.go -os linux -concat) \
		${ES_BEATS}/libbeat/_meta/config.reference.yml > \
		${PREFIX}/${BEAT_NAME}-linux.reference.yml

# Collects all dependencies and then calls update
.PHONY: collect
collect: fields collect-docs configs kibana

# Collects all module and metricset fields
.PHONY: fields
fields: python-env
	@mkdir -p _meta
	@cp ${ES_BEATS}/metricbeat/_meta/fields.common.yml _meta/fields.generated.yml
	@${PYTHON_ENV}/bin/python ${ES_BEATS}/metricbeat/scripts/fields_collector.py >> _meta/fields.generated.yml

# Collects all module configs
.PHONY: configs
configs: python-env
	@cat ${ES_BEATS}/auditbeat/_meta/common.p1.yml \
		<(go run scripts/generate_config.go -os linux -concat) \
		${ES_BEATS}/auditbeat/_meta/common.p2.yml > _meta/beat.yml
	@cat ${ES_BEATS}/auditbeat/_meta/common.reference.yml \
		<(go run scripts/generate_config.go -os linux -ref -concat) > _meta/beat.reference.yml

# Collects all module docs
.PHONY: collect-docs
collect-docs: python-env
	@rm -rf docs/modules
	@mkdir -p docs/modules
	@go run scripts/generate_config.go -os linux
	@${PYTHON_ENV}/bin/python ${ES_BEATS}/auditbeat/scripts/docs_collector.py --beat ${BEAT_NAME}

# Collects all module dashboards
.PHONY: kibana
kibana:
	@-rm -rf _meta/kibana/dashboard _meta/kibana/search _meta/kibana/visualization # Skip index-pattern
	@mkdir -p _meta/kibana
	@-cp -pr module/*/_meta/kibana _meta/
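The `before-build` target above builds each platform's config file by concatenating YAML fragments in a fixed order (common part 1, generated module config, common part 2, libbeat's common config). A minimal sketch of that concatenation step in Python; the fragment contents here are illustrative stand-ins, not the real files:

```python
def concat_config(fragments):
    """Join YAML fragments in order, like the Makefile's
    `cat a <(generator) b > out` pipeline."""
    return "".join(fragments)


# Illustrative stand-ins for common.p1.yml, the generated module
# config, common.p2.yml, and libbeat's config.yml.
parts = [
    "# common part 1\n",
    "auditbeat.modules:\n- module: audit\n",
    "# common part 2\n",
    "# libbeat common config\n",
]
beat_yml = concat_config(parts)
```

Because plain concatenation has no merge logic, the fragment order fully determines the final document, which is why the Makefile repeats the same `cat` recipe per target OS.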


@@ -0,0 +1,12 @@
###################### Auditbeat Configuration Example #########################
# This is an example configuration file highlighting only the most common
# options. The auditbeat.reference.yml file from the same directory contains all
# the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/auditbeat/index.html
#========================== Modules configuration =============================
auditbeat.modules:


@@ -0,0 +1,6 @@
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false


@@ -0,0 +1,31 @@
########################## Auditbeat Configuration #############################
# This is a reference configuration file documenting all non-deprecated options
# in comments. For a shorter configuration example that contains only the most
# common options, please see auditbeat.yml in the same directory.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/auditbeat/index.html
#============================ Config Reloading ================================
# Config reloading allows you to dynamically load modules. Each file which is
# monitored must contain one or multiple modules as a list.
auditbeat.config.modules:

  # Glob pattern for configuration reloading
  path: ${path.config}/conf.d/*.yml

  # Period on which files under path should be checked for changes
  reload.period: 10s

  # Set to true to enable config reloading
  reload.enabled: false
# Maximum amount of time to randomly delay the start of a metricset. Use 0 to
# disable startup delay.
auditbeat.max_start_delay: 10s
#========================== Modules configuration =============================
auditbeat.modules:


@@ -0,0 +1,36 @@
- key: common
  title: Common
  description: >
    Contains common fields available in all event types.
  fields:
    - name: metricset.module
      description: >
        The name of the module that generated the event.

    - name: metricset.name
      description: >
        The name of the metricset that generated the event.

    - name: metricset.host
      description: >
        Hostname of the machine from which the metricset was collected. This
        field may not be present when the data was collected locally.

    - name: metricset.rtt
      type: long
      required: true
      description: >
        Event round trip time in microseconds.

    - name: metricset.namespace
      type: keyword
      description: >
        Namespace of dynamic metricsets.

    - name: type
      required: true
      example: metricsets
      description: >
        The document type. Always set to "metricsets".


@@ -0,0 +1,872 @@
########################## Auditbeat Configuration #############################
# This is a reference configuration file documenting all non-deprecated options
# in comments. For a shorter configuration example that contains only the most
# common options, please see auditbeat.yml in the same directory.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/auditbeat/index.html
#============================ Config Reloading ================================
# Config reloading allows you to dynamically load modules. Each file which is
# monitored must contain one or multiple modules as a list.
auditbeat.config.modules:

  # Glob pattern for configuration reloading
  path: ${path.config}/conf.d/*.yml

  # Period on which files under path should be checked for changes
  reload.period: 10s

  # Set to true to enable config reloading
  reload.enabled: false
# Maximum amount of time to randomly delay the start of a metricset. Use 0 to
# disable startup delay.
auditbeat.max_start_delay: 10s
#========================== Modules configuration =============================
auditbeat.modules:
# The kernel metricset collects events from the audit framework in the Linux
# kernel. You need to specify audit rules for the events that you want to audit.
- module: audit
  metricsets: [kernel]
  kernel.resolve_ids: true
  kernel.failure_mode: silent
  kernel.backlog_limit: 8196
  kernel.rate_limit: 0
  kernel.include_raw_message: false
  kernel.include_warnings: false
  kernel.audit_rules: |
    ## Define audit rules here.
    ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
    ## examples or add your own rules.

    ## If you are on a 64 bit platform, everything should be running
    ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
    ## because this might be a sign of someone exploiting a hole in the 32
    ## bit API.
    #-a always,exit -F arch=b32 -S all -F key=32bit-abi

    ## Executions.
    #-a always,exit -F arch=b64 -S execve,execveat -k exec

    ## External access.
    #-a always,exit -F arch=b64 -S accept,bind,connect,recvfrom -F key=external-access

    ## Identity changes.
    #-w /etc/group -p wa -k identity
    #-w /etc/passwd -p wa -k identity
    #-w /etc/gshadow -p wa -k identity

    ## Unauthorized access attempts.
    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access

# The file integrity metricset sends events when files are changed (created,
# updated, deleted). The events contain file metadata and hashes.
- module: audit
  metricsets: [file]
  file.paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc

  # Scan over the configured file paths at startup and send events for new or
  # modified files since the last time Auditbeat was running.
  file.scan_at_start: true

  # Average scan rate. This throttles the amount of CPU and I/O that Auditbeat
  # consumes at startup while scanning. Default is "50 MiB".
  file.scan_rate_per_sec: 50 MiB

  # Limit on the size of files that will be hashed. Default is "100 MiB".
  file.max_file_size: 100 MiB

  # Hash types to compute when the file changes. Supported types are md5, sha1,
  # sha224, sha256, sha384, sha512, sha512_224, sha512_256, sha3_224, sha3_256,
  # sha3_384 and sha3_512. Default is sha1.
  file.hash_types: [sha1]
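The file integrity metricset hashes changed files with the digests listed in `file.hash_types`, subject to `file.max_file_size`. A rough sketch of that hashing step (not Auditbeat's actual implementation), reading the file in chunks so large files never sit fully in memory:

```python
import hashlib


def hash_file(path, hash_types=("sha1",), chunk_size=1 << 20):
    """Compute the configured digests of one file in 1 MiB chunks,
    feeding every hasher from a single pass over the data."""
    hashers = {name: hashlib.new(name) for name in hash_types}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}


# Example: hash_file("/etc/hostname", ("sha1", "sha256"))
```

All the algorithm names listed in the config comment above (`md5`, `sha1`, `sha3_512`, ...) are accepted by `hashlib.new` in recent Python versions, which is why a single dict of hashers suffices here.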
#================================ General ======================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
#name:
# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
# env: staging
# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a fields
# sub-dictionary. Default is false.
#fields_under_root: false
# Internal queue configuration for buffering events to be published.
#queue:
# Queue type by name (default 'mem')
# The memory queue will present all available events (up to the outputs
# bulk_max_size) to the output, the moment the output is ready to serve
# another batch of events.
#mem:
# Max number of events the queue can buffer.
#events: 4096
# Hints the minimum number of events stored in the queue,
# before providing a batch of events to the outputs.
# A value of 0 (the default) ensures events are immediately available
# to be sent to the outputs.
#flush.min_events: 2048
# Maximum duration after which events are available to the outputs,
# if the number of events stored in the queue is < min_flush_events.
#flush.timeout: 1s
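The memory queue's flush behavior follows from the two settings above: a batch becomes available once `flush.min_events` are buffered or `flush.timeout` expires, and `min_events: 0` makes events available immediately. A small decision-function sketch of those documented semantics (this is an illustration, not libbeat's code):

```python
def should_flush(buffered, min_events, elapsed, timeout):
    """Return True when the mem queue would hand a batch to the output.

    buffered:   number of events currently in the queue
    min_events: flush.min_events (0 = immediately available)
    elapsed:    seconds since the oldest buffered event arrived
    timeout:    flush.timeout in seconds
    """
    if min_events <= 0:
        return buffered > 0  # events are immediately available
    # Either the batch is full, or the timeout released a partial batch.
    return buffered >= min_events or (buffered > 0 and elapsed >= timeout)
```

So with the defaults shown (`min_events: 2048`, `timeout: 1s`), a trickle of events is still shipped at least once per second rather than waiting for a full batch.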
# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:
#================================ Processors ===================================
# Processors are used to reduce the number of fields in the exported event or to
# enhance the event with external metadata. This section defines a list of
# processors that are applied one by one and the first one receives the initial
# event:
#
# event -> filter1 -> event1 -> filter2 ->event2 ...
#
# The supported processors are drop_fields, drop_event, include_fields, and
# add_cloud_metadata.
#
# For example, you can use the following processors to keep the fields that
# contain CPU load percentages, but remove the fields that contain CPU ticks
# values:
#
#processors:
#- include_fields:
# fields: ["cpu"]
#- drop_fields:
# fields: ["cpu.user", "cpu.system"]
#
# The following example drops the events that have the HTTP response code 200:
#
#processors:
#- drop_event:
# when:
# equals:
# http.code: 200
#
# The following example enriches each event with metadata from the cloud
# provider about the host machine. It works on EC2, GCE, DigitalOcean,
# Tencent Cloud, and Alibaba Cloud.
#
#processors:
#- add_cloud_metadata: ~
#
# The following example enriches each event with the machine's local time zone
# offset from UTC.
#
#processors:
#- add_locale:
# format: offset
#
# The following example enriches each event with docker metadata, it matches
# given fields to an existing container id and adds info from that container:
#
#processors:
#- add_docker_metadata:
# host: "unix:///var/run/docker.sock"
# match_fields: ["system.process.cgroup.id"]
# # To connect to Docker over TLS you must specify a client and CA certificate.
# #ssl:
# # certificate_authority: "/etc/pki/root/ca.pem"
# # certificate: "/etc/pki/client/cert.pem"
# # key: "/etc/pki/client/cert.key"
#
# The following example enriches each event with docker metadata, it matches
# container id from log path available in `source` field (by default it expects
# it to be /var/lib/docker/containers/*/*.log).
#
#processors:
#- add_docker_metadata: ~
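The processor pipeline described above (`event -> filter1 -> event1 -> filter2 -> event2 ...`) is easy to model: each processor either transforms the event or drops it. A minimal sketch of `include_fields`, `drop_fields`, and `drop_event` over flattened field names; an illustration of the chaining semantics, not libbeat's processor engine:

```python
def include_fields(fields):
    """Keep only fields matching the given prefixes (e.g. "cpu" keeps
    "cpu.user" and "cpu.total.pct")."""
    def proc(event):
        return {k: v for k, v in event.items()
                if any(k == f or k.startswith(f + ".") for f in fields)}
    return proc


def drop_fields(fields):
    def proc(event):
        return {k: v for k, v in event.items() if k not in fields}
    return proc


def drop_event(predicate):
    """Drop the whole event when the condition matches (returns None)."""
    def proc(event):
        return None if predicate(event) else event
    return proc


def run_processors(event, processors):
    """Apply processors one by one; a dropped event stops the chain."""
    for proc in processors:
        event = proc(event)
        if event is None:
            return None
    return event
```

Run against the CPU example from the comments above, `[include_fields(["cpu"]), drop_fields(["cpu.user", "cpu.system"])]` keeps the percentage fields and removes the tick counters.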
#============================= Elastic Cloud ==================================
# These settings simplify using auditbeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs ======================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output -------------------------------
output.elasticsearch:
  # Boolean flag to enable or disable the output module.
  #enabled: true

  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (http and 9200)
  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
  hosts: ["localhost:9200"]
# Set gzip compression level.
#compression_level: 0
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
# Dictionary of HTTP parameters to pass within the url with index operations.
#parameters:
#param1: value1
#param2: value2
# Number of workers per Elasticsearch host.
#worker: 1
# Optional index name. The default is "auditbeat" plus date
# and generates [auditbeat-]YYYY.MM.DD keys.
# In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
#index: "auditbeat-%{[beat.version]}-%{+yyyy.MM.dd}"
# Optional ingest node pipeline. By default no pipeline will be used.
#pipeline: ""
# Optional HTTP Path
#path: "/elasticsearch"
# Custom HTTP headers to add to each request
#headers:
# X-My-Header: Contents of the header
# Proxy server url
#proxy_url: http://proxy:3128
# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50
  # Configure HTTP request timeout before failing a request to Elasticsearch.
#timeout: 90
# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# SSL configuration. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#----------------------------- Logstash output ---------------------------------
#output.logstash:
# Boolean flag to enable or disable the output module.
#enabled: true
# The Logstash hosts
#hosts: ["localhost:5044"]
# Number of workers per Logstash host.
#worker: 1
# Set gzip compression level.
#compression_level: 3
# Optional maximum time to live for a connection to Logstash, after which the
# connection will be re-established. A value of `0s` (the default) will
# disable this feature.
#
# Not yet supported for async connections (i.e. with the "pipelining" option set)
#ttl: 30s
  # Optionally load balance the events between the Logstash hosts. Default is false.
#loadbalance: false
# Number of batches to be sent asynchronously to logstash while processing
# new batches.
#pipelining: 5
# If enabled only a subset of events in a batch of events is transferred per
# transaction. The number of events to be sent increases up to `bulk_max_size`
# if no error is encountered.
#slow_start: false
# Optional index name. The default index name is set to auditbeat
# in all lowercase.
#index: 'auditbeat'
# SOCKS5 proxy server URL
#proxy_url: socks5://user:password@socks5-server:2233
# Resolve names locally when using a proxy server. Defaults to false.
#proxy_use_local_resolver: false
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- Kafka output ----------------------------------
#output.kafka:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Kafka broker addresses from where to fetch the cluster metadata.
# The cluster metadata contain the actual Kafka brokers events are published
# to.
#hosts: ["localhost:9092"]
# The Kafka topic used for produced events. The setting can be a format string
# using any event field. To set the topic from document type use `%{[type]}`.
#topic: beats
# The Kafka event key setting. Use format string to create unique event key.
# By default no event key will be generated.
#key: ''
# The Kafka event partitioning strategy. Default hashing strategy is `hash`
# using the `output.kafka.key` setting or randomly distributes events if
# `output.kafka.key` is not configured.
#partition.hash:
# If enabled, events will only be published to partitions with reachable
# leaders. Default is false.
#reachable_only: false
# Configure alternative event field names used to compute the hash value.
# If empty `output.kafka.key` setting will be used.
# Default value is empty list.
#hash: []
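The `partition.hash` strategy above routes events with the same key to the same partition, falls back to random distribution when no key is configured, and can be restricted to partitions with reachable leaders. A rough sketch of that routing decision; `zlib.crc32` stands in for the real hash function, which differs:

```python
import random
import zlib


def pick_partition(key, reachable, all_partitions, reachable_only=False):
    """Choose a Kafka partition for an event, hash-partitioning sketch.

    key:            the rendered output.kafka.key (None if not configured)
    reachable:      partitions whose leaders are currently reachable
    all_partitions: every partition of the topic
    """
    candidates = reachable if reachable_only else all_partitions
    if key is None:
        return random.choice(candidates)  # no key: random distribution
    # Same key -> same bucket, so related events stay ordered per partition.
    return candidates[zlib.crc32(key.encode()) % len(candidates)]
```

Note the trade-off `reachable_only` implies: during a leader election, keyed events may temporarily hash onto a different partition than usual, trading strict key-to-partition stability for availability.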
# Authentication details. Password is required if username is set.
#username: ''
#password: ''
# Kafka version auditbeat is assumed to run against. Defaults to the oldest
# supported stable version (currently version 0.8.2.0)
#version: 0.8.2
  # Metadata update configuration. The metadata contains leader information
  # used to decide which broker to publish to.
#metadata:
# Max metadata request retry attempts when cluster is in middle of leader
# election. Defaults to 3 retries.
#retry.max: 3
# Waiting time between retries during leader elections. Default is 250ms.
#retry.backoff: 250ms
# Refresh metadata interval. Defaults to every 10 minutes.
#refresh_frequency: 10m
# The number of concurrent load-balanced Kafka output workers.
#worker: 1
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Kafka request. The default
# is 2048.
#bulk_max_size: 2048
# The number of seconds to wait for responses from the Kafka brokers before
# timing out. The default is 30s.
#timeout: 30s
# The maximum duration a broker will wait for number of required ACKs. The
# default is 10s.
#broker_timeout: 10s
# The number of messages buffered for each Kafka broker. The default is 256.
#channel_buffer_size: 256
# The keep-alive period for an active network connection. If 0s, keep-alives
# are disabled. The default is 0 seconds.
#keep_alive: 0
# Sets the output compression codec. Must be one of none, snappy and gzip. The
# default is gzip.
#compression: gzip
# The maximum permitted size of JSON-encoded messages. Bigger messages will be
# dropped. The default value is 1000000 (bytes). This value should be equal to
# or less than the broker's message.max.bytes.
#max_message_bytes: 1000000
# The ACK reliability level required from broker. 0=no response, 1=wait for
# local commit, -1=wait for all replicas to commit. The default is 1. Note:
# If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
# on error.
#required_acks: 1
# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- Redis output ----------------------------------
#output.redis:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Redis servers to connect to. If load balancing is enabled, the
# events are distributed to the servers in the list. If one server becomes
# unreachable, the events are distributed to the reachable servers only.
#hosts: ["localhost:6379"]
# The Redis port to use if hosts does not contain a port number. The default
# is 6379.
#port: 6379
# The name of the Redis list or channel the events are published to. The
# default is auditbeat.
#key: auditbeat
# The password to authenticate with. The default is no authentication.
#password:
# The Redis database number where the events are published. The default is 0.
#db: 0
# The Redis data type to use for publishing events. If the data type is list,
# the Redis RPUSH command is used. If the data type is channel, the Redis
# PUBLISH command is used. The default value is list.
#datatype: list
# The number of workers to use for each host configured to publish events to
# Redis. Use this setting along with the loadbalance option. For example, if
# you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
# host).
#worker: 1
# If set to true and multiple hosts or workers are configured, the output
# plugin load balances published events onto all Redis hosts. If set to false,
# the output plugin sends all events to only one host (determined at random)
# and will switch to another host if the currently selected one becomes
# unreachable. The default value is true.
#loadbalance: true
# The Redis connection timeout in seconds. The default is 5 seconds.
#timeout: 5s
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Redis request or pipeline.
# The default is 2048.
#bulk_max_size: 2048
# The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
# value must be a URL with a scheme of socks5://.
#proxy_url:
# This option determines whether Redis hostnames are resolved locally when
# using a proxy. The default value is false, which means that name resolution
# occurs on the proxy server.
#proxy_use_local_resolver: false
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- File output -----------------------------------
#output.file:
# Boolean flag to enable or disable the output module.
#enabled: true
# Path to the directory where to save the generated files. The option is
# mandatory.
#path: "/tmp/auditbeat"
# Name of the generated files. The default is `auditbeat` and it generates
# files: `auditbeat`, `auditbeat.1`, `auditbeat.2`, etc.
#filename: auditbeat
# Maximum size in kilobytes of each file. When this size is reached, and on
# every auditbeat restart, the files are rotated. The default value is 10240
# kB.
#rotate_every_kb: 10000
# Maximum number of files under path. When this number of files is reached,
# the oldest file is deleted and the rest are shifted from last to first. The
# default is 7 files.
#number_of_files: 7
# Permissions to use for file creation. The default is 0600.
#permissions: 0600
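The rotation scheme described above (`auditbeat`, `auditbeat.1`, `auditbeat.2`, ... capped at `number_of_files`, oldest deleted) can be sketched as a simple shift of numbered files. An illustration of the documented behavior, not the file output's actual code:

```python
import os


def rotate(path, number_of_files=7):
    """Rotate path -> path.1 -> path.2 ..., dropping the oldest file,
    as the file output does when rotate_every_kb is exceeded."""
    oldest = "%s.%d" % (path, number_of_files - 1)
    if os.path.exists(oldest):
        os.remove(oldest)  # the oldest file falls off the end
    # Shift remaining files from last to first.
    for i in range(number_of_files - 2, 0, -1):
        src = "%s.%d" % (path, i)
        if os.path.exists(src):
            os.rename(src, "%s.%d" % (path, i + 1))
    if os.path.exists(path):
        os.rename(path, path + ".1")
```

Shifting from the highest suffix downward is what makes the in-place renames safe: each destination name is guaranteed to be free before its source is moved.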
#----------------------------- Console output ---------------------------------
#output.console:
# Boolean flag to enable or disable the output module.
#enabled: true
# Pretty print json event
#pretty: false
#================================= Paths ======================================
# The home path for the auditbeat installation. This is the default base path
# for all other path settings and for miscellaneous files that come with the
# distribution (for example, the sample dashboards).
# If not set by a CLI flag or in the configuration file, the default for the
# home path is the location of the binary.
#path.home:
# The configuration path for the auditbeat installation. This is the default
# base path for configuration files, including the main YAML configuration file
# and the Elasticsearch template file. If not set by a CLI flag or in the
# configuration file, the default for the configuration path is the home path.
#path.config: ${path.home}
# The data path for the auditbeat installation. This is the default base path
# for all the files in which auditbeat needs to store its data. If not set by a
# CLI flag or in the configuration file, the default for the data path is a data
# subdirectory inside the home path.
#path.data: ${path.home}/data
# The logs path for an auditbeat installation. This is the default location for
# the Beat's log files. If not set by a CLI flag or in the configuration file,
# the default for the logs path is a logs subdirectory inside the home path.
#path.logs: ${path.home}/logs
#============================== Dashboards =====================================
# These settings control loading the sample dashboards into the Kibana index.
# Loading the dashboards is disabled by default and can be enabled either by
# setting the options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false
# The directory from where to read the dashboards. The default is the `kibana`
# folder in the home path.
#setup.dashboards.directory: ${path.home}/kibana
# The URL from where to download the dashboards archive. It is used instead of
# the directory if it has a value.
#setup.dashboards.url:
# The file archive (zip file) from where to read the dashboards. It is used instead
# of the directory when it has a value.
#setup.dashboards.file:
# In case the archive contains the dashboards from multiple Beats, this lets you
# select which one to load. You can load all the dashboards in the archive by
# setting this to the empty string.
#setup.dashboards.beat: auditbeat
# The name of the Kibana index to use for setting the configuration. Default is ".kibana"
#setup.dashboards.kibana_index: .kibana
# The Elasticsearch index name. This overwrites the index name defined in the
# dashboards and index pattern. Example: testbeat-*
#setup.dashboards.index:
# Always use the Kibana API for loading the dashboards instead of autodetecting
# how to install the dashboards by first querying Elasticsearch.
#setup.dashboards.always_kibana: false
#============================== Template =====================================
# A template is used to set the mapping in Elasticsearch
# By default template loading is enabled and the template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones.
# Set to false to disable template loading.
#setup.template.enabled: true
# Template name. By default the template name is "auditbeat-%{[beat.version]}"
# The template name and pattern has to be set in case the elasticsearch index pattern is modified.
#setup.template.name: "auditbeat-%{[beat.version]}"
# Template pattern. By default the template pattern is "auditbeat-%{[beat.version]}-*" to apply to the default index settings.
# The first part is the version of the beat and then -* is used to match all daily indices.
# The template name and pattern has to be set in case the elasticsearch index pattern is modified.
#setup.template.pattern: "auditbeat-%{[beat.version]}-*"
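The pattern above is a glob matched against new index names, which is how one template covers every daily index of a beat version. A one-liner sketch of that matching with `fnmatch` (Elasticsearch's own matching is server-side, but the glob semantics are the same idea):

```python
import fnmatch


def template_applies(pattern, index_name):
    """True when an index template's pattern covers a new index name."""
    return fnmatch.fnmatch(index_name, pattern)


# template_applies("auditbeat-6.1.1-*", "auditbeat-6.1.1-2017.12.27")
```

This also shows why a custom `index` setting forces matching changes to `setup.template.name` and `setup.template.pattern`: an index name the pattern does not cover simply gets no template mapping.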
# Path to fields.yml file to generate the template
#setup.template.fields: "${path.config}/fields.yml"
# Overwrite existing template
#setup.template.overwrite: false
# Elasticsearch template settings
setup.template.settings:
# A dictionary of settings to place into the settings.index dictionary
# of the Elasticsearch template. For more details, please check
# https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
#index:
#number_of_shards: 1
#codec: best_compression
#number_of_routing_shards: 30
# A dictionary of settings for the _source field. For more details, please check
# https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html
#_source:
#enabled: false
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify an additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
# Optional HTTP Path
#path: ""
# Use SSL settings for HTTPS. Default is true.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# SSL configuration. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
#================================ Logging ======================================
# There are three options for the log output: syslog, file, stderr.
# On Windows systems, logs are sent to the file output by default;
# on all other systems, to syslog by default.
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: info
# Enable debug output for selected components. To enable all selectors use ["*"]
# Other available selectors are "beat", "publish", "service"
# Multiple selectors can be chained.
#logging.selectors: [ ]
# Send all logging output to syslog. The default is false.
#logging.to_syslog: true
# If enabled, auditbeat periodically logs its internal metrics that have changed
# in the last period. For each metric that changed, the delta from the value at
# the beginning of the period is logged. Also, the total values for
# all non-zero internal metrics are logged on shutdown. The default is true.
#logging.metrics.enabled: true
# The period after which to log the internal metrics. The default is 30s.
#logging.metrics.period: 30s
# Logging to rotating files. Set logging.to_files to false to disable logging to
# files.
logging.to_files: true
logging.files:
# Configure the path where the logs are written. The default is the logs directory
# under the home path (the binary location).
#path: /var/log/auditbeat
# The name of the files where the logs are written to.
#name: auditbeat
# Configure log file size limit. If limit is reached, log file will be
# automatically rotated
#rotateeverybytes: 10485760 # = 10MB
# Number of rotated log files to keep. Oldest files will be deleted first.
#keepfiles: 7
# The permissions mask to apply when rotating log files. The default value is 0600.
# Must be a valid Unix-style file permissions mask expressed in octal notation.
#permissions: 0600
# Set to true to log messages in json format.
#logging.json: false

vendor/github.com/elastic/beats/auditbeat/auditbeat.yml (generated, vendored, new file, 149 lines)

@@ -0,0 +1,149 @@
###################### Auditbeat Configuration Example #########################
# This is an example configuration file highlighting only the most common
# options. The auditbeat.reference.yml file from the same directory contains all
# the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/auditbeat/index.html
#========================== Modules configuration =============================
auditbeat.modules:
- module: audit
metricsets: [kernel]
kernel.audit_rules: |
## Define audit rules here.
## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
## examples or add your own rules.
## If you are on a 64 bit platform, everything should be running
## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
## because this might be a sign of someone exploiting a hole in the 32
## bit API.
#-a always,exit -F arch=b32 -S all -F key=32bit-abi
## Executions.
#-a always,exit -F arch=b64 -S execve,execveat -k exec
## External access.
#-a always,exit -F arch=b64 -S accept,bind,connect,recvfrom -F key=external-access
## Identity changes.
#-w /etc/group -p wa -k identity
#-w /etc/passwd -p wa -k identity
#-w /etc/gshadow -p wa -k identity
## Unauthorized access attempts.
#-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
#-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
- module: audit
metricsets: [file]
file.paths:
- /bin
- /usr/bin
- /sbin
- /usr/sbin
- /etc
#==================== Elasticsearch template setting ==========================
setup.template.settings:
index.number_of_shards: 3
#index.codec: best_compression
#_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify an additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
#============================= Elastic Cloud ==================================
# These settings simplify using auditbeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["localhost:9200"]
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

vendor/github.com/elastic/beats/auditbeat/cmd/root.go (generated, vendored, new file, 20 lines)

@@ -0,0 +1,20 @@
package cmd
import (
"github.com/spf13/pflag"
"github.com/elastic/beats/metricbeat/beater"
cmd "github.com/elastic/beats/libbeat/cmd"
)
// Name of the beat (auditbeat).
const Name = "auditbeat"
// RootCmd for running auditbeat.
var RootCmd *cmd.BeatsRootCmd
func init() {
var runFlags = pflag.NewFlagSet(Name, pflag.ExitOnError)
RootCmd = cmd.GenRootCmdWithRunFlags(Name, "", beater.New, runFlags)
}

@@ -0,0 +1,174 @@
package datastore
import (
"io"
"os"
"sync"
"github.com/boltdb/bolt"
"github.com/elastic/beats/libbeat/paths"
)
var (
initDatastoreOnce sync.Once
ds *boltDatastore
)
// OpenBucket returns a new Bucket that stores data in {path.data}/beat.db.
// The returned Bucket must be closed when finished to ensure all resources
// are released.
func OpenBucket(name string) (Bucket, error) {
initDatastoreOnce.Do(func() {
ds = &boltDatastore{
path: paths.Resolve(paths.Data, "beat.db"),
mode: 0600,
}
})
return ds.OpenBucket(name)
}
// Datastore
type Datastore interface {
OpenBucket(name string) (Bucket, error)
}
type boltDatastore struct {
mutex sync.Mutex
useCount uint32
path string
mode os.FileMode
db *bolt.DB
}
func New(path string, mode os.FileMode) Datastore {
return &boltDatastore{path: path, mode: mode}
}
func (ds *boltDatastore) OpenBucket(bucket string) (Bucket, error) {
ds.mutex.Lock()
defer ds.mutex.Unlock()
// Initialize the Bolt DB.
if ds.db == nil {
var err error
ds.db, err = bolt.Open(ds.path, ds.mode, nil)
if err != nil {
return nil, err
}
}
// Ensure the name exists.
err := ds.db.Update(func(tx *bolt.Tx) error {
_, err := tx.CreateBucketIfNotExists([]byte(bucket))
return err
})
if err != nil {
return nil, err
}
return &boltBucket{ds, bucket}, nil
}
func (ds *boltDatastore) done() {
ds.mutex.Lock()
defer ds.mutex.Unlock()
if ds.useCount > 0 {
ds.useCount--
if ds.useCount == 0 {
ds.db.Close()
ds.db = nil
}
}
}
// Bucket
type Bucket interface {
io.Closer
Load(key string, f func(blob []byte) error) error
Store(key string, blob []byte) error
Delete(key string) error // Delete removes a key from the bucket. If the key does not exist then nothing is done and a nil error is returned.
DeleteBucket() error // Deletes and closes the bucket.
}
// BoltBucket is a Bucket that exposes some Bolt specific APIs.
type BoltBucket interface {
Bucket
View(func(tx *bolt.Bucket) error) error
Update(func(tx *bolt.Bucket) error) error
}
type boltBucket struct {
ds *boltDatastore
name string
}
func (b *boltBucket) Load(key string, f func(blob []byte) error) error {
return b.ds.db.View(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte(b.name))
data := b.Get([]byte(key))
if data == nil {
return nil
}
return f(data)
})
}
func (b *boltBucket) Store(key string, blob []byte) error {
return b.ds.db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte(b.name))
return b.Put([]byte(key), blob)
})
}
func (b *boltBucket) ForEach(f func(key string, blob []byte) error) error {
return b.ds.db.View(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte(b.name))
return b.ForEach(func(k, v []byte) error {
return f(string(k), v)
})
})
}
func (b *boltBucket) Delete(key string) error {
return b.ds.db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte(b.name))
return b.Delete([]byte(key))
})
}
func (b *boltBucket) DeleteBucket() error {
err := b.ds.db.Update(func(tx *bolt.Tx) error {
return tx.DeleteBucket([]byte(b.name))
})
b.Close()
return err
}
func (b *boltBucket) View(f func(*bolt.Bucket) error) error {
return b.ds.db.View(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte(b.name))
return f(b)
})
}
func (b *boltBucket) Update(f func(*bolt.Bucket) error) error {
return b.ds.db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte(b.name))
return f(b)
})
}
func (b *boltBucket) Close() error {
b.ds.done()
b.ds = nil
return nil
}

@@ -0,0 +1,6 @@
[[filtering-and-enhancing-data]]
== Filter and enhance the exported data
include::../../libbeat/docs/processors.asciidoc[]
include::../../libbeat/docs/processors-using.asciidoc[]

@@ -0,0 +1,7 @@
[[configuration-general-options]]
== Specify general settings
You can specify settings in the +{beatname_lc}.yml+ config file to control the
general behavior of {beatname_uc}.
include::../../libbeat/docs/generalconfig.asciidoc[]

@@ -0,0 +1,33 @@
[id="configuration-{beatname_lc}"]
== Specify which modules to run
To enable specific modules and metricsets, you add entries to the
`auditbeat.modules` list in the +{beatname_lc}.yml+ config file. Each entry in
the list begins with a dash (-) and is followed by settings for that module.
The following example shows a configuration that runs the `audit` module with
the `kernel` and `file` metricsets enabled:
[source,yaml]
----
auditbeat.modules:
- module: audit
metricsets: [kernel]
kernel.audit_rules: |
-w /etc/passwd -p wa -k identity
-a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
- module: audit
metricsets: [file]
file.paths:
- /bin
- /usr/bin
- /sbin
- /usr/sbin
- /etc
----
The configuration details vary by module. See the
<<{beatname_lc}-modules,module documentation>> for more detail about
configuring the available modules and metricsets.

@@ -0,0 +1,77 @@
[id="configuring-howto-{beatname_lc}"]
= Configuring {beatname_uc}
[partintro]
--
Before modifying configuration settings, make sure you've completed the
<<{beatname_lc}-configuration,configuration steps>> in the Getting Started.
This section describes some common use cases for changing configuration options.
To configure {beatname_uc}, you edit the configuration file. For rpm and deb,
you'll find the configuration file at +/etc/{beatname_lc}/{beatname_lc}.yml+.
There's also a full example configuration file at
+/etc/{beatname_lc}/{beatname_lc}.reference.yml+ that shows all non-deprecated
options. For mac and win, look in the archive that you extracted.
The {beatname_uc} configuration file uses http://yaml.org/[YAML] for its syntax.
See the {libbeat}/config-file-format.html[Config File Format] section of the
_Beats Platform Reference_ for more about the structure of the config file.
The following topics describe how to configure {beatname_uc}:
* <<configuration-{beatname_lc}>>
* <<configuration-general-options>>
* <<{beatname_lc}-configuration-reloading>>
* <<configuring-internal-queue>>
* <<configuring-output>>
* <<configuration-ssl>>
* <<filtering-and-enhancing-data>>
* <<configuring-ingest-node>>
* <<configuration-path>>
* <<setup-kibana-endpoint>>
* <<configuration-dashboards>>
* <<configuration-template>>
* <<configuration-logging>>
* <<using-environ-vars>>
* <<yaml-tips>>
* <<{beatname_lc}-reference-yml>>
After changing configuration settings, you need to restart {beatname_uc} to
pick up the changes.
--
include::./auditbeat-modules-config.asciidoc[]
include::./auditbeat-general-options.asciidoc[]
include::./reload-configuration.asciidoc[]
:allplatforms:
include::../../libbeat/docs/queueconfig.asciidoc[]
include::../../libbeat/docs/outputconfig.asciidoc[]
include::../../libbeat/docs/shared-ssl-config.asciidoc[]
include::./auditbeat-filtering.asciidoc[]
include::../../libbeat/docs/shared-config-ingest.asciidoc[]
include::../../libbeat/docs/shared-path-config.asciidoc[]
include::../../libbeat/docs/shared-kibana-config.asciidoc[]
include::../../libbeat/docs/setup-config.asciidoc[]
include::../../libbeat/docs/loggingconfig.asciidoc[]
:standalone:
include::../../libbeat/docs/shared-env-vars.asciidoc[]
:standalone:
:allplatforms:
include::../../libbeat/docs/yaml.asciidoc[]
include::../../libbeat/docs/reference-yml.asciidoc[]

@@ -0,0 +1,29 @@
[float]
[[ulimit]]
=== {beatname_uc} fails to watch folders because too many files are open?
Because of the way file monitoring is implemented on macOS, you may see a
warning similar to the following:
[source,shell]
----
eventreader_fsnotify.go:42: WARN [audit.file] Failed to watch /usr/bin: too many
open files (check the max number of open files allowed with 'ulimit -a')
----
To resolve this issue, run {beatname_uc} with the `ulimit` set to a larger
value, for example:
["source","sh",subs="attributes"]
----
sudo sh -c 'ulimit -n 8192 && ./{beatname_lc} -e'
----
Or:
["source","sh",subs="attributes"]
----
sudo su
ulimit -n 8192
./{beatname_lc} -e
----

@@ -0,0 +1,12 @@
[[faq]]
== Frequently asked questions
This section contains frequently asked questions about {beatname_uc}. Also check
out the
https://discuss.elastic.co/c/beats/{beatname_lc}[{beatname_uc} discussion forum].
include::./faq-ulimit.asciidoc[]
include::../../libbeat/docs/faq-limit-bandwidth.asciidoc[]
include::../../libbeat/docs/shared-faq.asciidoc[]

File diff suppressed because it is too large.

@@ -0,0 +1,294 @@
[id="{beatname_lc}-getting-started"]
== Getting started with {beatname_uc}
To get started with your own {beatname_uc} setup, install and configure these
related products:
* Elasticsearch for storage and indexing the data.
* Kibana for the UI.
* Logstash (optional) for inserting data into Elasticsearch.
See {libbeat}/getting-started.html[Getting Started with Beats and the Elastic Stack]
for more information.
After installing the Elastic Stack, read the following topics to learn how to
install, configure, and run {beatname_uc}:
* <<{beatname_lc}-installation>>
* <<{beatname_lc}-configuration>>
* <<{beatname_lc}-template>>
* <<load-kibana-dashboards>>
* <<{beatname_lc}-starting>>
* <<view-kibana-dashboards>>
* <<setup-repositories>>
[id="{beatname_lc}-installation"]
=== Step 1: Install {beatname_uc}
You should install {beatname_uc} on all the servers you want to monitor.
include::../../libbeat/docs/shared-download-and-install.asciidoc[]
[[deb]]
*deb:*
ifeval::["{release-state}"=="unreleased"]
Version {stack-version} of {beatname_uc} has not yet been released.
endif::[]
ifeval::["{release-state}"!="unreleased"]
["source","sh",subs="attributes"]
------------------------------------------------
curl -L -O https://artifacts.elastic.co/downloads/beats/{beatname_lc}/{beatname_lc}-{version}-amd64.deb
sudo dpkg -i {beatname_lc}-{version}-amd64.deb
------------------------------------------------
endif::[]
[[rpm]]
*rpm:*
ifeval::["{release-state}"=="unreleased"]
Version {stack-version} of {beatname_uc} has not yet been released.
endif::[]
ifeval::["{release-state}"!="unreleased"]
["source","sh",subs="attributes"]
------------------------------------------------
curl -L -O https://artifacts.elastic.co/downloads/beats/{beatname_lc}/{beatname_lc}-{version}-x86_64.rpm
sudo rpm -vi {beatname_lc}-{version}-x86_64.rpm
------------------------------------------------
endif::[]
[[mac]]
*mac:*
ifeval::["{release-state}"=="unreleased"]
Version {stack-version} of {beatname_uc} has not yet been released.
endif::[]
ifeval::["{release-state}"!="unreleased"]
["source","sh",subs="attributes"]
------------------------------------------------
curl -L -O https://artifacts.elastic.co/downloads/beats/{beatname_lc}/{beatname_lc}-{version}-darwin-x86_64.tar.gz
tar xzvf {beatname_lc}-{version}-darwin-x86_64.tar.gz
------------------------------------------------
endif::[]
[[docker]]
*docker:*
ifeval::["{release-state}"=="unreleased"]
Version {stack-version} of {beatname_uc} has not yet been released.
endif::[]
ifeval::["{release-state}"!="unreleased"]
["source", "shell", subs="attributes"]
------------------------------------------------
docker pull {dockerimage}
------------------------------------------------
endif::[]
[[win]]
*win:*
ifeval::["{release-state}"=="unreleased"]
Version {stack-version} of {beatname_uc} has not yet been released.
endif::[]
ifeval::["{release-state}"!="unreleased"]
. Download the {beatname_uc} Windows zip file from the
https://www.elastic.co/downloads/beats/{beatname_lc}[downloads page].
. Extract the contents of the zip file into `C:\Program Files`.
. Rename the +{beatname_lc}-<version>-windows+ directory to +{beatname_uc}+.
. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon
and select *Run As Administrator*). If you are running Windows XP, you may need
to download and install PowerShell.
. From the PowerShell prompt, run the following commands to install {beatname_uc}
as a Windows service:
+
["source","sh",subs="attributes"]
----------------------------------------------------------------------
PS > cd 'C:{backslash}Program Files{backslash}{beatname_uc}'
PS C:{backslash}Program Files{backslash}{beatname_uc}> .{backslash}install-service-{beatname_lc}.ps1
----------------------------------------------------------------------
NOTE: If script execution is disabled on your system, you need to set the
execution policy for the current session to allow the script to run. For
example: +PowerShell.exe -ExecutionPolicy UnRestricted -File
.\install-service-{beatname_lc}.ps1+.
endif::[]
Before starting {beatname_uc}, you should look at the configuration options in the
configuration file, for example +C:{backslash}Program Files{backslash}{beatname_uc}{backslash}{beatname_lc}.yml+.
For more information about these options, see
<<configuring-howto-{beatname_lc}>>.
[id="{beatname_lc}-configuration"]
=== Step 2: Configure {beatname_uc}
include::../../libbeat/docs/shared-configuring.asciidoc[]
To configure {beatname_uc}:
. Define the {beatname_uc} modules that you want to enable. {beatname_uc} uses
modules to collect the audit information. For each module, specify the
metricsets that you want to collect.
+
The following example shows the `file` metricset configured to generate
events whenever a file in one of the specified paths changes on disk:
+
["source","sh",subs="attributes"]
-------------------------------------
auditbeat.modules:
- module: audit
metricsets: [file]
file.paths:
- /bin
- /usr/bin
- /sbin
- /usr/sbin
- /etc
-------------------------------------
+
If you accept the default configuration without specifying additional modules,
{beatname_uc} uses a configuration that's tailored to the operating system where
{beatname_uc} is running.
+
See <<configuring-howto-{beatname_lc}>> for more details about configuring modules.
. If you are sending output to Elasticsearch (and not using Logstash), set the
IP address and port where {beatname_uc} can find the Elasticsearch installation:
+
[source,yaml]
----------------------------------------------------------------------
output.elasticsearch:
hosts: ["127.0.0.1:9200"]
----------------------------------------------------------------------
+
If you are sending output to Logstash, make sure you
<<logstash-output,Configure the Logstash output>> instead.
include::../../libbeat/docs/step-configure-kibana-endpoint.asciidoc[]
include::../../libbeat/docs/step-configure-credentials.asciidoc[]
include::../../libbeat/docs/step-test-config.asciidoc[]
include::../../libbeat/docs/step-look-at-config.asciidoc[]
[id="{beatname_lc}-template"]
=== Step 3: Load the index template in Elasticsearch
:allplatforms:
include::../../libbeat/docs/shared-template-load.asciidoc[]
[[load-kibana-dashboards]]
=== Step 4: Set up the Kibana dashboards
:allplatforms:
include::../../libbeat/docs/dashboards.asciidoc[]
[id="{beatname_lc}-starting"]
=== Step 5: Start {beatname_uc}
Run {beatname_uc} by issuing the appropriate command for your platform. If you
are accessing a secured Elasticsearch cluster, make sure you've configured
credentials as described in <<{beatname_lc}-configuration>>.
NOTE: If you use an init.d script to start {beatname_uc} on deb or rpm, you can't
specify command line flags (see <<command-line-options>>). To specify flags,
start {beatname_uc} in the foreground.
*deb:*
["source","sh",subs="attributes"]
----------------------------------------------------------------------
sudo service {beatname_lc} start
----------------------------------------------------------------------
*rpm:*
["source","sh",subs="attributes"]
----------------------------------------------------------------------
sudo service {beatname_lc} start
----------------------------------------------------------------------
*mac:*
["source","sh",subs="attributes"]
----------------------------------------------------------------------
sudo chown root {beatname_lc}.yml <1>
sudo ./{beatname_lc} -e -c {beatname_lc}.yml -d "publish"
----------------------------------------------------------------------
<1> To monitor system files, you'll be running {beatname_uc} as root, so you
need to change ownership of the configuration file, or run {beatname_uc} with
`-strict.perms=false` specified. See
{libbeat}/config-file-permissions.html[Config File Ownership and Permissions]
in the _Beats Platform Reference_.
If you see a warning about too many open files, you need to increase the
`ulimit`. See the <<ulimit,FAQ>> for more details.
*win:*
["source","sh",subs="attributes"]
----------------------------------------------------------------------
PS C:{backslash}Program Files{backslash}{beatname_uc}> Start-Service {beatname_lc}
----------------------------------------------------------------------
By default the log files are stored in +C:{backslash}ProgramData{backslash}{beatname_lc}{backslash}Logs+.
==== Test the {beatname_uc} installation
To verify that your server's statistics are present in Elasticsearch, issue
the following command:
["source","sh",subs="attributes"]
----------------------------------------------------------------------
curl -XGET 'http://localhost:9200/{beatname_lc}-*/_search?pretty'
----------------------------------------------------------------------
Make sure that you replace `localhost:9200` with the address of your
Elasticsearch instance.
On Windows, if you don't have cURL installed, simply point your browser to the
URL.
[[view-kibana-dashboards]]
=== Step 6: View the sample Kibana dashboards
To make it easier for you to start auditing the activities of users and
processes on your system, we have created example {beatname_uc} dashboards.
You loaded the dashboards earlier when you ran the `setup` command.
include::../../libbeat/docs/opendashboards.asciidoc[]
The dashboards are provided as examples. We recommend that you
{kibana-ref}/dashboard.html[customize] them to meet your needs.
image:./images/auditbeat-file-integrity-dashboard.png[Auditbeat File Integrity Dashboard]

(Six binary image files added, not shown; sizes: 257 KiB, 133 KiB, 133 KiB, 218 KiB, 90 KiB, 48 KiB.)
@@ -0,0 +1,42 @@
= Auditbeat Reference
include::../../libbeat/docs/version.asciidoc[]
include::{asciidoc-dir}/../../shared/attributes.asciidoc[]
:libbeat: http://www.elastic.co/guide/en/beats/libbeat/{doc-branch}
:kibana-ref: https://www.elastic.co/guide/en/kibana/{doc-branch}
:beatsdevguide: http://www.elastic.co/guide/en/beats/devguide/{doc-branch}
:filebeat: http://www.elastic.co/guide/en/beats/filebeat/{doc-branch}
:logstashdoc: https://www.elastic.co/guide/en/logstash/{doc-branch}
:elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/{doc-branch}
:securitydoc: https://www.elastic.co/guide/en/x-pack/{doc-branch}
:monitoringdoc: https://www.elastic.co/guide/en/x-pack/{doc-branch}
:version: {stack-version}
:beatname_lc: auditbeat
:beatname_uc: Auditbeat
:beatname_pkg: {beatname_lc}
:security: X-Pack Security
:dockerimage: docker.elastic.co/beats/{beatname_lc}:{version}
include::./overview.asciidoc[]
include::../../libbeat/docs/contributing-to-beats.asciidoc[]
include::./getting-started.asciidoc[]
include::../../libbeat/docs/repositories.asciidoc[]
include::./setting-up-running.asciidoc[]
include::./configuring-howto.asciidoc[]
include::./modules.asciidoc[]
include::./fields.asciidoc[]
include::./securing-auditbeat.asciidoc[]
include::./troubleshooting.asciidoc[]
include::./faq.asciidoc[]

@@ -0,0 +1,15 @@
[id="{beatname_lc}-modules"]
= Modules
[partintro]
--
This section contains detailed information about the metric collecting modules
contained in {beatname_uc}. Each module contains one or multiple metricsets. More details
about each module can be found under the links below.
//pass macro block used here to remove Edit links from modules documentation because it is generated
pass::[<?edit_url?>]
include::modules_list.asciidoc[]

@@ -0,0 +1,75 @@
////
This file is generated! See scripts/docs_collector.py
////
[id="{beatname_lc}-module-audit"]
== Audit Module
The `audit` module reports security-relevant information based on data captured
from the operating system (OS) or services running on the OS. Although this
feature doesn't provide additional security to your system, it does make it
easier for you to discover and track security policy violations.
[float]
=== Example configuration
The Audit module supports the common configuration options that are
described under <<configuration-{beatname_lc},configuring {beatname_uc}>>. Here
is an example configuration:
[source,yaml]
----
auditbeat.modules:
- module: audit
metricsets: [kernel]
kernel.audit_rules: |
## Define audit rules here.
## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
## examples or add your own rules.
## If you are on a 64 bit platform, everything should be running
## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
## because this might be a sign of someone exploiting a hole in the 32
## bit API.
#-a always,exit -F arch=b32 -S all -F key=32bit-abi
## Executions.
#-a always,exit -F arch=b64 -S execve,execveat -k exec
## External access.
#-a always,exit -F arch=b64 -S accept,bind,connect,recvfrom -F key=external-access
## Identity changes.
#-w /etc/group -p wa -k identity
#-w /etc/passwd -p wa -k identity
#-w /etc/gshadow -p wa -k identity
## Unauthorized access attempts.
#-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
#-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
- module: audit
metricsets: [file]
file.paths:
- /bin
- /usr/bin
- /sbin
- /usr/sbin
- /etc
----
[float]
=== Metricsets
The following metricsets are available:
* <<{beatname_lc}-metricset-audit-file,file>>
* <<{beatname_lc}-metricset-audit-kernel,kernel>>
include::audit/file.asciidoc[]
include::audit/kernel.asciidoc[]

@@ -0,0 +1,19 @@
////
This file is generated! See scripts/docs_collector.py
////
[id="{beatname_lc}-metricset-audit-file"]
include::../../../module/audit/file/_meta/docs.asciidoc[]
==== Fields
For a description of each field in the metricset, see the
<<exported-fields-audit,exported fields>> section.
Here is an example document generated by this metricset:
[source,json]
----
include::../../../module/audit/file/_meta/data.json[]
----

@@ -0,0 +1,19 @@
////
This file is generated! See scripts/docs_collector.py
////
[id="{beatname_lc}-metricset-audit-kernel"]
include::../../../module/audit/kernel/_meta/docs.asciidoc[]
==== Fields
For a description of each field in the metricset, see the
<<exported-fields-audit,exported fields>> section.
Here is an example document generated by this metricset:
[source,json]
----
include::../../../module/audit/kernel/_meta/data.json[]
----

@ -0,0 +1,10 @@
////
This file is generated! See scripts/docs_collector.py
////
* <<{beatname_lc}-module-audit,Audit>>
--
include::modules/audit.asciidoc[]

@ -0,0 +1,13 @@
[id="{beatname_lc}-overview"]
== {beatname_uc} overview
++++
<titleabbrev>Overview</titleabbrev>
++++
{beatname_uc} is a lightweight shipper that you can install on your servers to
audit the activities of users and processes on your systems. For example, you
can use {beatname_uc} to collect and centralize audit events from the Linux
Audit Framework. You can also use {beatname_uc} to detect changes to critical
files, like binaries and configuration files, and identify potential security
policy violations.

@ -0,0 +1,4 @@
This functionality is experimental and may be changed or removed completely in a
future release. Elastic will take a best effort approach to fix any issues, but
experimental features are not subject to the support SLA of official GA
features.

@ -0,0 +1,51 @@
[id="{beatname_lc}-configuration-reloading"]
== Reload the configuration dynamically
beta[]
You can configure {beatname_uc} to dynamically reload configuration files when
there are changes. To do this, you specify a path
(https://golang.org/pkg/path/filepath/#Glob[Glob]) to watch for module
configuration changes. When the files found by the Glob change, new modules are
started/stopped according to changes in the configuration files.
To enable dynamic config reloading, you specify the `path` and `reload` options
in the main +{beatname_lc}.yml+ config file. For example:
["source","sh"]
------------------------------------------------------------------------------
auditbeat.config.modules:
  path: ${path.config}/conf.d/*.yml
  reload.enabled: true
  reload.period: 10s
------------------------------------------------------------------------------
*`path`*:: A Glob that defines the files to check for changes.
*`reload.enabled`*:: When set to `true`, enables dynamic config reload.
*`reload.period`*:: Specifies how often the files are checked for changes. Do not
set the `period` to less than 1s because the modification time of files is often
stored in seconds. Setting the `period` to less than 1s will result in
unnecessary overhead.
Each file found by the Glob must contain a list of one or more module
definitions. For example:
[source,yaml]
------------------------------------------------------------------------------
auditbeat.modules:
- module: audit
  metricsets: [file]
  file.paths:
  - /www/wordpress
  - /www/wordpress/wp-admin
  - /www/wordpress/wp-content
  - /www/wordpress/wp-includes
------------------------------------------------------------------------------
NOTE: On systems with POSIX file permissions, all Beats configuration files are
subject to ownership and file permission checks. If you encounter config loading
errors related to file ownership, see {libbeat}/config-file-permissions.html.

@ -0,0 +1,15 @@
include::../../libbeat/docs/shared-docker.asciidoc[]
[float]
==== Special requirements
Under Docker, {beatname_uc} runs as a non-root user, but requires some privileged
capabilities to operate correctly. Ensure that the +AUDIT_CONTROL+ and +AUDIT_READ+
capabilities are available to the container.
It is also essential to run {beatname_uc} in the host PID namespace.
["source","sh",subs="attributes"]
----
docker run --cap-add=AUDIT_CONTROL --cap-add=AUDIT_READ --pid=host {dockerimage}
----

@ -0,0 +1,27 @@
[id="securing-{beatname_lc}"]
= Securing {beatname_uc}
[partintro]
--
The following topics describe how to secure communication between {beatname_uc}
and other products in the Elastic stack:
* <<securing-communication-elasticsearch>>
* <<configuring-ssl-logstash>>
//sets block macro for https.asciidoc included in next section
--
[[securing-communication-elasticsearch]]
== Secure communication with Elasticsearch
include::../../libbeat/docs/https.asciidoc[]
//sets block macro for shared-ssl-logstash-config.asciidoc included in next section
[[configuring-ssl-logstash]]
== Secure communication with Logstash by using SSL
include::../../libbeat/docs/shared-ssl-logstash-config.asciidoc[]

@ -0,0 +1,30 @@
/////
// NOTE:
// Each beat has its own setup overview to allow for the addition of content
// that is unique to each beat.
/////
[[seting-up-and-running]]
== Setting up and running {beatname_uc}
Before reading this section, see the
<<{beatname_lc}-getting-started,getting started documentation>> for basic
installation instructions to get you started.
This section includes additional information on how to set up and run
{beatname_uc}, including:
* <<directory-layout>>
* <<command-line-options>>
* <<running-on-docker>>
//MAINTAINERS: If you add a new file to this section, make sure you update the bulleted list ^^ too.
include::../../libbeat/docs/shared-directory-layout.asciidoc[]
include::../../libbeat/docs/command-reference.asciidoc[]
include::./running-on-docker.asciidoc[]

@ -0,0 +1,30 @@
[[troubleshooting]]
= Troubleshooting
[partintro]
--
If you have issues installing or running {beatname_uc}, read the
following tips:
* <<getting-help>>
* <<enable-{beatname_lc}-debugging>>
* <<faq>>
//sets block macro for getting-help.asciidoc included in next section
--
[[getting-help]]
== Get Help
include::../../libbeat/docs/getting-help.asciidoc[]
//sets block macro for debugging.asciidoc included in next section
[id="enable-{beatname_lc}-debugging"]
== Debug
include::../../libbeat/docs/debugging.asciidoc[]

17 vendor/github.com/elastic/beats/auditbeat/main.go generated vendored Normal file
@ -0,0 +1,17 @@
package main
import (
"os"
"github.com/elastic/beats/auditbeat/cmd"
_ "github.com/elastic/beats/auditbeat/module/audit"
_ "github.com/elastic/beats/auditbeat/module/audit/file"
_ "github.com/elastic/beats/auditbeat/module/audit/kernel"
)
func main() {
if err := cmd.RootCmd.Execute(); err != nil {
os.Exit(1)
}
}

26 vendor/github.com/elastic/beats/auditbeat/main_test.go generated vendored Normal file
@ -0,0 +1,26 @@
package main
// This file is mandatory as otherwise the auditbeat.test binary is not generated correctly.
import (
"flag"
"testing"
"github.com/elastic/beats/auditbeat/cmd"
)
var systemTest *bool
func init() {
systemTest = flag.Bool("systemTest", false, "Set to true when running system tests")
cmd.RootCmd.PersistentFlags().AddGoFlag(flag.CommandLine.Lookup("systemTest"))
cmd.RootCmd.PersistentFlags().AddGoFlag(flag.CommandLine.Lookup("test.coverprofile"))
}
// Test started when the test binary is started. Only calls main.
func TestSystem(t *testing.T) {
if *systemTest {
main()
}
}

@ -0,0 +1,88 @@
{{ if eq .goos "linux" -}}
{{ if .reference -}}
# The kernel metricset collects events from the audit framework in the Linux
# kernel. You need to specify audit rules for the events that you want to audit.
{{ end -}}
- module: audit
  metricsets: [kernel]
  {{ if .reference -}}
  kernel.resolve_ids: true
  kernel.failure_mode: silent
  kernel.backlog_limit: 8196
  kernel.rate_limit: 0
  kernel.include_raw_message: false
  kernel.include_warnings: false
  {{ end -}}
  kernel.audit_rules: |
    ## Define audit rules here.
    ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
    ## examples or add your own rules.

    ## If you are on a 64 bit platform, everything should be running
    ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
    ## because this might be a sign of someone exploiting a hole in the 32
    ## bit API.
    #-a always,exit -F arch=b32 -S all -F key=32bit-abi

    ## Executions.
    #-a always,exit -F arch=b64 -S execve,execveat -k exec

    ## External access.
    #-a always,exit -F arch=b64 -S accept,bind,connect,recvfrom -F key=external-access

    ## Identity changes.
    #-w /etc/group -p wa -k identity
    #-w /etc/passwd -p wa -k identity
    #-w /etc/gshadow -p wa -k identity

    ## Unauthorized access attempts.
    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
{{ end -}}

{{ if .reference -}}
# The file integrity metricset sends events when files are changed (created,
# updated, deleted). The events contain file metadata and hashes.
{{ end -}}
- module: audit
  metricsets: [file]
  {{ if eq .goos "darwin" -}}
  file.paths:
  - /bin
  - /usr/bin
  - /usr/local/bin
  - /sbin
  - /usr/sbin
  - /usr/local/sbin
  {{ else if eq .goos "windows" -}}
  file.paths:
  - C:/windows
  - C:/windows/system32
  - C:/Program Files
  - C:/Program Files (x86)
  {{ else -}}
  file.paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc
  {{ end -}}
  {{ if .reference }}
  # Scan over the configured file paths at startup and send events for new or
  # modified files since the last time Auditbeat was running.
  file.scan_at_start: true

  # Average scan rate. This throttles the amount of CPU and I/O that Auditbeat
  # consumes at startup while scanning. Default is "50 MiB".
  file.scan_rate_per_sec: 50 MiB

  # Limit on the size of files that will be hashed. Default is "100 MiB".
  file.max_file_size: 100 MiB

  # Hash types to compute when the file changes. Supported types are md5, sha1,
  # sha224, sha256, sha384, sha512, sha512_224, sha512_256, sha3_224, sha3_256,
  # sha3_384 and sha3_512. Default is sha1.
  file.hash_types: [sha1]
  {{- end }}

@ -0,0 +1,6 @@
== Audit Module
The `audit` module reports security-relevant information based on data captured
from the operating system (OS) or services running on the OS. Although this
feature doesn't provide additional security to your system, it does make it
easier for you to discover and track security policy violations.

@ -0,0 +1,11 @@
- key: audit
title: Audit
short_config: true
description: >
The `audit` module reports security-relevant information based on data
captured from the operating system (OS) or services running on the OS.
fields:
- name: audit
type: group
description: >
fields:

@ -0,0 +1,13 @@
{
"hits": 0,
"timeRestore": false,
"description": "",
"title": "Auditbeat - File Integrity",
"uiStateJSON": "{\"P-1\":{\"vis\":{\"defaultColors\":{\"0 - 100\":\"rgb(0,104,55)\"}}},\"P-6\":{\"vis\":{\"defaultColors\":{\"0 - 100\":\"rgb(0,104,55)\"}}},\"P-7\":{\"vis\":{\"defaultColors\":{\"0 - 100\":\"rgb(0,104,55)\"}}},\"P-8\":{\"vis\":{\"defaultColors\":{\"0 - 100\":\"rgb(0,104,55)\"}}},\"P-9\":{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}}",
"panelsJSON": "[{\"col\":1,\"id\":\"AV0tVcg6g1PYniApZa-v\",\"panelIndex\":1,\"row\":1,\"size_x\":2,\"size_y\":6,\"type\":\"visualization\"},{\"col\":3,\"id\":\"AV0tV05vg1PYniApZbA2\",\"panelIndex\":2,\"row\":1,\"size_x\":7,\"size_y\":6,\"type\":\"visualization\"},{\"col\":10,\"id\":\"AV0tWL-Yg1PYniApZbCs\",\"panelIndex\":3,\"row\":1,\"size_x\":3,\"size_y\":3,\"type\":\"visualization\"},{\"col\":10,\"id\":\"AV0tWSdXg1PYniApZbDU\",\"panelIndex\":4,\"row\":4,\"size_x\":3,\"size_y\":3,\"type\":\"visualization\"},{\"col\":5,\"id\":\"AV0tW0djg1PYniApZbGL\",\"panelIndex\":5,\"row\":9,\"size_x\":4,\"size_y\":3,\"type\":\"visualization\"},{\"col\":1,\"id\":\"AV0tY6jwg1PYniApZbRY\",\"panelIndex\":6,\"row\":7,\"size_x\":4,\"size_y\":2,\"type\":\"visualization\"},{\"col\":5,\"id\":\"AV0tav8Ag1PYniApZbbK\",\"panelIndex\":7,\"row\":7,\"size_x\":4,\"size_y\":2,\"type\":\"visualization\"},{\"col\":9,\"id\":\"AV0tbcUdg1PYniApZbe1\",\"panelIndex\":8,\"row\":7,\"size_x\":4,\"size_y\":2,\"type\":\"visualization\"},{\"size_x\":12,\"size_y\":5,\"panelIndex\":9,\"type\":\"visualization\",\"id\":\"AV0tc_xZg1PYniApZbnL\",\"col\":1,\"row\":12},{\"size_x\":4,\"size_y\":3,\"panelIndex\":10,\"type\":\"visualization\",\"id\":\"AV0tes4Eg1PYniApZbwV\",\"col\":9,\"row\":9},{\"size_x\":4,\"size_y\":3,\"panelIndex\":11,\"type\":\"visualization\",\"id\":\"AV0te0TCg1PYniApZbw9\",\"col\":1,\"row\":9}]",
"optionsJSON": "{\"darkTheme\":false}",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}}}],\"highlightAll\":true,\"version\":true}"
}
}

@ -0,0 +1,10 @@
{
"visState": "{\"title\":\"Auditbeat - File - Events over time\",\"type\":\"histogram\",\"params\":{\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"}},\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"type\":\"category\",\"position\":\"bottom\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\"},\"labels\":{\"show\":true,\"truncate\":100},\"title\":{\"text\":\"@timestamp per 5 minutes\"}}],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"name\":\"LeftAxis-1\",\"type\":\"value\",\"position\":\"left\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\",\"mode\":\"normal\"},\"labels\":{\"show\":true,\"rotate\":0,\"filter\":false,\"truncate\":100},\"title\":{\"text\":\"Count\"}}],\"seriesParams\":[{\"show\":\"true\",\"type\":\"histogram\",\"mode\":\"stacked\",\"data\":{\"label\":\"Count\",\"id\":\"1\"},\"valueAxis\":\"ValueAxis-1\",\"drawLinesBetweenPoints\":true,\"showCircles\":true}],\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"times\":[],\"addTimeMarker\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"audit.file.action\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Action\"}}],\"listeners\":{}}",
"description": "",
"title": "Auditbeat - File - Events over time",
"uiStateJSON": "{}",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"auditbeat-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}

@ -0,0 +1,10 @@
{
"visState": "{\"title\":\"Auditbeat - File - Action Metrics\",\"type\":\"metric\",\"params\":{\"addLegend\":false,\"addTooltip\":true,\"gauge\":{\"autoExtend\":false,\"backStyle\":\"Full\",\"colorSchema\":\"Green to Red\",\"colorsRange\":[{\"from\":0,\"to\":100}],\"gaugeColorMode\":\"None\",\"gaugeStyle\":\"Full\",\"gaugeType\":\"Metric\",\"invertColors\":false,\"labels\":{\"color\":\"black\",\"show\":true},\"orientation\":\"vertical\",\"percentageMode\":false,\"scale\":{\"color\":\"#333\",\"labels\":false,\"show\":true,\"width\":2},\"style\":{\"bgColor\":false,\"bgFill\":\"#000\",\"fontSize\":\"24\",\"labelColor\":false,\"subText\":\"\"},\"type\":\"simple\",\"useRange\":false,\"verticalSplit\":true,\"extendRange\":false},\"type\":\"gauge\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Actions\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"audit.file.action\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"description": "",
"title": "Auditbeat - File - Action Metrics",
"uiStateJSON": "{\"vis\":{\"defaultColors\":{\"0 - 100\":\"rgb(0,104,55)\"}}}",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"auditbeat-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}

@ -0,0 +1,10 @@
{
"visState": "{\"title\":\"Auditbeat - File - Top updated\",\"type\":\"pie\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"audit.file.path\",\"size\":10,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Path\"}}],\"listeners\":{}}",
"description": "",
"title": "Auditbeat - File - Top updated",
"uiStateJSON": "{}",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"auditbeat-*\",\"query\":{\"query_string\":{\"query\":\"audit.file.action:updated OR audit.file.action:attributes_modified\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}

@ -0,0 +1,10 @@
{
"visState": "{\n \"title\": \"Auditbeat - File - Top owners\",\n \"type\": \"pie\",\n \"params\": {\n \"addTooltip\": true,\n \"addLegend\": true,\n \"legendPosition\": \"right\",\n \"isDonut\": true\n },\n \"aggs\": [\n {\n \"id\": \"1\",\n \"enabled\": true,\n \"type\": \"count\",\n \"schema\": \"metric\",\n \"params\": {}\n },\n {\n \"id\": \"2\",\n \"enabled\": true,\n \"type\": \"terms\",\n \"schema\": \"segment\",\n \"params\": {\n \"field\": \"audit.file.owner\",\n \"size\": 5,\n \"order\": \"desc\",\n \"orderBy\": \"1\",\n \"customLabel\": \"Owner\"\n }\n }\n ],\n \"listeners\": {}\n}",
"description": "",
"title": "Auditbeat - File - Top owners",
"uiStateJSON": "{}",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\n \"index\": \"auditbeat-*\",\n \"query\": {\n \"query_string\": {\n \"query\": \"*\",\n \"analyze_wildcard\": true\n }\n },\n \"filter\": []\n}"
}
}

Some files were not shown because too many files have changed in this diff.