Mirror of https://github.com/Icinga/icingabeat.git (synced 2025-07-28 08:14:02 +02:00)

Set libbeat version 5.1.1

commit f2ba971175
parent ded5e8d238
@@ -1,2 +1,2 @@
-:stack-version: 6.0.0-alpha1
-:doc-branch: master
+:stack-version: 5.1.1
+:doc-branch: 5.1
@@ -7,7 +7,7 @@
 }
 },
 "_meta": {
-"version": "6.0.0-alpha1"
+"version": "5.1.1"
 },
 "dynamic_templates": [
 {
@@ -5,7 +5,7 @@
 "norms": false
 },
 "_meta": {
-"version": "6.0.0-alpha1"
+"version": "5.1.1"
 },
 "dynamic_templates": [
 {
114  vendor/github.com/elastic/beats/CHANGELOG.asciidoc (generated, vendored)
@@ -8,24 +8,17 @@
 // Template, add newest changes here

 === Beats version HEAD
-https://github.com/elastic/beats/compare/v5.0.1...master[Check the HEAD diff]
+https://github.com/elastic/beats/compare/v5.1.1...5.1[Check the HEAD diff]

 ==== Breaking changes

 *Affecting all Beats*

 *Metricbeat*
-- Change data structure of experimental haproxy module. {pull}3003[3003]

 *Packetbeat*

-*Topbeat*
-
 *Filebeat*
-- If a file is falling under ignore_older during startup, offset is now set to end of file instead of 0.
-  With the previous logic the whole file was sent in case a line was added and it was inconsitent with
-  files which were harvested previously. {pull}2907[2907]
-- tail_files is now only applied on the first scan and not for all new files. {pull}2932[2932]

 *Winlogbeat*

@@ -33,55 +26,27 @@ https://github.com/elastic/beats/compare/v5.0.1...master[Check the HEAD diff]
 ==== Bugfixes

 *Affecting all Beats*
-- Fix empty benign errors logged by processor actions. {pull}3046[3046]

 *Metricbeat*

-- Calculate the fsstat values per mounting point, and not filesystem. {pull}2777[2777]
-
 *Packetbeat*

-*Topbeat*
-
 *Filebeat*
-- Fix registry cleanup issue when files falling under ignore_older after restart. {issue}2818[2818]

 *Winlogbeat*

 ==== Added

 *Affecting all Beats*
-- Add add_cloud_metadata processor for collecting cloud provider metadata. {pull}2728[2728]
-- Added decode_json_fields processor for decoding fields containing JSON strings. {pull}2605[2605]

 *Metricbeat*

-- Add experimental filebeat metricset in the beats module. {pull}2297[2297]
-- Add experimental libbeat metricset in the beats module. {pull}2339[2339]
-- Add experimental docker module. Provided by Ingensi and @douaejeouit based on dockbeat.
-- Add username and password config options to the MongoDB module. {pull}2889[2889]
-- Add username and password config options to the PostgreSQL module. {pull}2889[2890]
-- Add system core metricset for Windows. {pull}2883[2883]
-- Add a sample Redis Kibana dashboard. {pull}2916[2916]
-- Add support for MongoDB 3.4 and WiredTiger metrics. {pull}2999[2999]
-- Add experimental kafka module with partition metricset. {pull}2969[2969]
-- Add raw config option for mysql/status metricset. {pull}3001[3001]
-
 *Packetbeat*

-*Topbeat*
-
 *Filebeat*
-- Add command line option -once to run filebeat only once and then close. {pull}2456[2456]
-- Only load matching states into prospector to improve state handling {pull}2840[2840]
-- Reset all states ttl on startup to make sure it is overwritten by new config {pull}2840[2840]
-- Persist all states for files which fall under ignore_older to have consistent behaviour {pull}2859[2859]
-- Improve shutdown behaviour with large number of files. {pull}3035[3035]

 *Winlogbeat*

-- Add `event_logs.batch_read_size` configuration option. {pull}2641[2641]

 ==== Deprecated

 *Affecting all Beats*
@@ -90,14 +55,85 @@ https://github.com/elastic/beats/compare/v5.0.1...master[Check the HEAD diff]

 *Packetbeat*

-*Topbeat*
-
 *Filebeat*

 *Winlogbeat*

 ////////////////////////////////////////////////////////////

+[[release-notes-5.1.1]]
+=== Beats version 5.1.1
+https://github.com/elastic/beats/compare/v5.0.2...v5.1.1[View commits]
+
+==== Breaking changes
+
+*Metricbeat*
+
+- Change data structure of experimental haproxy module. {pull}3003[3003]
+
+*Filebeat*
+
+- If a file is falling under `ignore_older` during startup, offset is now set to end of file instead of 0.
+  With the previous logic the whole file was sent in case a line was added and it was inconsistent with
+  files which were harvested previously. {pull}2907[2907]
+- `tail_files` is now only applied on the first scan and not for all new files. {pull}2932[2932]
+
+==== Bugfixes
+
+*Affecting all Beats*
+
+- Fix empty benign errors logged by processor actions. {pull}3046[3046]
+
+*Metricbeat*
+
+- Calculate the fsstat values per mounting point, and not filesystem. {pull}2777[2777]
+
+==== Added
+
+*Affecting all Beats*
+
+- Add add_cloud_metadata processor for collecting cloud provider metadata. {pull}2728[2728]
+- Added decode_json_fields processor for decoding fields containing JSON strings. {pull}2605[2605]
+
+*Metricbeat*
+
+- Add experimental Docker module. Provided by Ingensi and @douaejeouit based on dockbeat.
+- Add a sample Redis Kibana dashboard. {pull}2916[2916]
+- Add support for MongoDB 3.4 and WiredTiger metrics. {pull}2999[2999]
+- Add experimental kafka module with partition metricset. {pull}2969[2969]
+- Add raw config option for mysql/status metricset. {pull}3001[3001]
+
+*Filebeat*
+
+- Add command line option `-once` to run Filebeat only once and then close. {pull}2456[2456]
+- Only load matching states into prospector to improve state handling {pull}2840[2840]
+- Reset all states ttl on startup to make sure it is overwritten by new config {pull}2840[2840]
+- Persist all states for files which fall under `ignore_older` to have consistent behaviour {pull}2859[2859]
+- Improve shutdown behaviour with large number of files. {pull}3035[3035]
+
+*Winlogbeat*
+
+- Add `event_logs.batch_read_size` configuration option. {pull}2641[2641]
+
+[[release-notes-5.1.0]]
+=== Beats version 5.1.0 (skipped)
+
+Version 5.1.0 doesn't exist because, for a short period of time, the Elastic
+Yum and Apt repositories included unreleased binaries labeled 5.1.0. To avoid
+confusion and upgrade issues for the people that have installed these without
+realizing, we decided to skip the 5.1.0 version and release 5.1.1 instead.
+
+[[release-notes-5.0.2]]
+=== Beats version 5.0.2
+https://github.com/elastic/beats/compare/v5.0.1...v5.0.2[View commits]
+
+==== Bugfixes
+
+*Metricbeat*
+
+- Fix the `password` option in the MongoDB module. {pull}2995[2995]
+
+
 [[release-notes-5.0.1]]
 === Beats version 5.0.1
 https://github.com/elastic/beats/compare/v5.0.0...v5.0.1[View commits]
@@ -125,7 +161,7 @@ https://github.com/elastic/beats/compare/v5.0.0...v5.0.1[View commits]

 *Metricbeat*

-- Add username and password config options to the PostgreSQL module. {pull}2889[2890]
+- Add username and password config options to the PostgreSQL module. {pull}2890[2890]
 - Add username and password config options to the MongoDB module. {pull}2889[2889]
 - Add system core metricset for Windows. {pull}2883[2883]
2  vendor/github.com/elastic/beats/dev-tools/packer/version.yml (generated, vendored)
@@ -1 +1 @@
-version: "6.0.0-alpha1"
+version: "5.1.1"
23  vendor/github.com/elastic/beats/filebeat/docs/load-balancing.asciidoc (generated, vendored)
@@ -72,26 +72,3 @@ output.logstash:
   loadbalance: true
   bulk_max_size: 2048
 -------------------------------------------------------------------------------
-
-* **Send events in parallel and asynchronously:**
-+
-You can configure Filebeat to send events in parallel and asynchronously by
-setting `publish_async: true`. With this setting, Filebeat pushes a batch of
-lines and then prepares a new batch of lines while waiting for the output to
-ACK. This mode can improve load-balancing throughput, but requires the most
-memory and CPU usage.
-+
-Example:
-+
-[source,yaml]
--------------------------------------------------------------------------------
-filebeat.prospectors:
- - input_type: log
-   paths:
-     - /var/log/*.log
-filebeat.publish_async: true
-output.logstash:
-  hosts: ["localhost:5044", "localhost:5045"]
-  loadbalance: true
--------------------------------------------------------------------------------
@@ -481,6 +481,8 @@ See <<load-balancing>> for more information about how this setting affects load
 ===== publish_async

+experimental[]
+
 If enabled, the publisher pipeline in Filebeat operates in async mode preparing
 a new batch of lines while waiting for ACK. This option can improve load-balancing
 throughput at the cost of increased memory usage. The default value is false.
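The new `experimental[]` marker matches the code change further down in filebeat/publisher/publisher.go, where enabling `publish_async` selects the asynchronous pipeline and logs a warning. A minimal sketch of that switch; only `logp.Warn`'s message and the idea of two publisher implementations come from the diff, the other names are hypothetical stand-ins:

[source,go]
----
package main

import "log"

// Hypothetical stand-ins for filebeat's sync and async log publishers.
type logPublisher interface{ Start() }

type syncLogPublisher struct{}  // publishes a batch, then waits for the ACK
type asyncLogPublisher struct{} // prepares the next batch while waiting for the ACK

func (syncLogPublisher) Start()  {}
func (asyncLogPublisher) Start() {}

// newPublisher mirrors the switch added in publisher.go: async mode is
// opt-in and flagged as experimental.
func newPublisher(async bool) logPublisher {
	if async {
		log.Println("Using publish_async is experimental!")
		return asyncLogPublisher{}
	}
	return syncLogPublisher{}
}

func main() {
	newPublisher(true).Start()
}
----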
4  vendor/github.com/elastic/beats/filebeat/docs/version.asciidoc (generated, vendored)
@@ -1,2 +1,2 @@
-:stack-version: 6.0.0-alpha1
-:doc-branch: master
+:stack-version: 5.1.1
+:doc-branch: 5.1
2  vendor/github.com/elastic/beats/filebeat/filebeat.template-es2x.json (generated, vendored)
@@ -7,7 +7,7 @@
 }
 },
 "_meta": {
-"version": "6.0.0-alpha1"
+"version": "5.1.1"
 },
 "dynamic_templates": [
 {

2  vendor/github.com/elastic/beats/filebeat/filebeat.template.json (generated, vendored)
@@ -5,7 +5,7 @@
 "norms": false
 },
 "_meta": {
-"version": "6.0.0-alpha1"
+"version": "5.1.1"
 },
 "dynamic_templates": [
 {
2  vendor/github.com/elastic/beats/filebeat/publisher/publisher.go (generated, vendored)
@@ -6,6 +6,7 @@ import (
 	"github.com/elastic/beats/filebeat/input"
 	"github.com/elastic/beats/libbeat/common"
+	"github.com/elastic/beats/libbeat/logp"
 	"github.com/elastic/beats/libbeat/publisher"
 )

@@ -33,6 +34,7 @@ func New(
 	pub publisher.Publisher,
 ) LogPublisher {
 	if async {
+		logp.Warn("Using publish_async is experimental!")
 		return newAsyncLogPublisher(in, out, pub)
 	}
 	return newSyncLogPublisher(in, out, pub)
2  vendor/github.com/elastic/beats/glide.yaml (generated, vendored)
@@ -62,7 +62,7 @@ import:
 - package: github.com/miekg/dns
   version: 5d001d020961ae1c184f9f8152fdc73810481677
 - package: github.com/Shopify/sarama
-  version: fix/sasl-handshake
+  version: enh/offset-replica-id
   repo: https://github.com/urso/sarama
 - package: github.com/rcrowley/go-metrics
   version: ab2277b1c5d15c3cba104e9cbddbdfc622df5ad8
4  vendor/github.com/elastic/beats/heartbeat/docs/version.asciidoc (generated, vendored)
@@ -1,2 +1,2 @@
-:stack-version: 6.0.0-alpha1
-:doc-branch: master
+:stack-version: 5.1.1
+:doc-branch: 5.1

2  vendor/github.com/elastic/beats/heartbeat/heartbeat.template-es2x.json (generated, vendored)
@@ -7,7 +7,7 @@
 }
 },
 "_meta": {
-"version": "6.0.0-alpha1"
+"version": "5.1.1"
 },
 "dynamic_templates": [
 {

2  vendor/github.com/elastic/beats/heartbeat/heartbeat.template.json (generated, vendored)
@@ -5,7 +5,7 @@
 "norms": false
 },
 "_meta": {
-"version": "6.0.0-alpha1"
+"version": "5.1.1"
 },
 "dynamic_templates": [
 {
2  vendor/github.com/elastic/beats/libbeat/beat/version.go (generated, vendored)
@@ -1,3 +1,3 @@
 package beat

-const defaultBeatVersion = "6.0.0-alpha1"
+const defaultBeatVersion = "5.1.1"
2  vendor/github.com/elastic/beats/libbeat/docker-compose.yml (generated, vendored)
@@ -54,6 +54,8 @@ services:
     expose:
       - 9092
       - 2181
+    environment:
+      - ADVERTISED_HOST=kafka

   # Overloading kibana with a simple image as it is not needed here
   kibana:
82  vendor/github.com/elastic/beats/libbeat/docs/processors-config.asciidoc (generated, vendored)
@@ -227,11 +227,12 @@ not:

 ==== Actions

-The supported filter actions are:
+The supported actions are:

 * <<include-fields,`include_fields`>>
 * <<drop-fields,`drop_fields`>>
 * <<drop-event,`drop_event`>>
+* <<add-cloud-metadata,`add_cloud_metadata`>>

 See <<exported-fields>> for the full list of possible fields.

@@ -291,3 +292,82 @@ processors:
 condition
 ------

+[[add-cloud-metadata]]
+===== add_cloud_metadata
+
+The `add_cloud_metadata` action enriches each event with instance metadata from
+the machine's hosting provider. At startup it will detect the hosting provider
+and cache the instance metadata.
+
+Three cloud providers are supported.
+
+- Amazon Elastic Compute Cloud (EC2)
+- Digital Ocean
+- Google Compute Engine (GCE)
+
+The simple configuration below enables the processor.
+
+[source,yaml]
+--------------------------------------------------------------------------------
+processors:
+- add_cloud_metadata:
+--------------------------------------------------------------------------------
+
+The `add_cloud_metadata` action has one optional configuration setting named
+`timeout` that specifies the maximum amount of time to wait for a successful
+response when detecting the hosting provider. The default timeout value is `3s`.
+If a timeout occurs then no instance metadata will be added to the events. This
+makes it possible to enable this processor for all your deployments (in the
+cloud or on-premise).
+
+The metadata that is added to events varies by hosting provider. Below are
+examples for each of the supported providers.
+
+_EC2_
+
+[source,json]
+--------------------------------------------------------------------------------
+{
+  "meta": {
+    "cloud": {
+      "availability_zone": "us-east-1c",
+      "instance_id": "i-4e123456",
+      "machine_type": "t2.medium",
+      "provider": "ec2",
+      "region": "us-east-1"
+    }
+  }
+}
+--------------------------------------------------------------------------------
+
+_Digital Ocean_
+
+[source,json]
+--------------------------------------------------------------------------------
+{
+  "meta": {
+    "cloud": {
+      "instance_id": "1234567",
+      "provider": "digitalocean",
+      "region": "nyc2"
+    }
+  }
+}
+--------------------------------------------------------------------------------
+
+_GCE_
+
+[source,json]
+--------------------------------------------------------------------------------
+{
+  "meta": {
+    "cloud": {
+      "availability_zone": "projects/1234567890/zones/us-east1-b",
+      "instance_id": "1234556778987654321",
+      "machine_type": "projects/1234567890/machineTypes/f1-micro",
+      "project_id": "my-dev",
+      "provider": "gce"
+    }
+  }
+}
+--------------------------------------------------------------------------------
2  vendor/github.com/elastic/beats/libbeat/docs/release.asciidoc (generated, vendored)
@@ -6,6 +6,8 @@
 --
 This section summarizes the changes in each release.

+* <<release-notes-5.1.1>>
+* <<release-notes-5.1.0>>
 * <<release-notes-5.0.1>>
 * <<release-notes-5.0.0>>
 * <<release-notes-5.0.0-ga>>
4  vendor/github.com/elastic/beats/libbeat/docs/version.asciidoc (generated, vendored)
@@ -1,2 +1,2 @@
-:stack-version: 6.0.0-alpha1
-:doc-branch: master
+:stack-version: 5.1.1
+:doc-branch: 5.1
27  vendor/github.com/elastic/beats/libbeat/outputs/elasticsearch/client.go (generated, vendored)
@@ -104,11 +104,23 @@ func NewClient(
 		pipeline = nil
 	}

+	u, err := url.Parse(s.URL)
+	if err != nil {
+		return nil, fmt.Errorf("failed to parse elasticsearch URL: %v", err)
+	}
+	if u.User != nil {
+		s.Username = u.User.Username()
+		s.Password, _ = u.User.Password()
+		u.User = nil
+
+		// Re-write URL without credentials.
+		s.URL = u.String()
+	}
+
 	logp.Info("Elasticsearch url: %s", s.URL)

 	// TODO: add socks5 proxy support
 	var dialer, tlsDialer transport.Dialer
-	var err error

 	dialer = transport.NetDialer(s.Timeout)
 	tlsDialer, err = transport.TLSDialer(dialer, s.TLS, s.Timeout)
@@ -142,19 +154,6 @@ func NewClient(
 		}
 	}

-	u, err := url.Parse(s.URL)
-	if err != nil {
-		return nil, fmt.Errorf("failed to parse elasticsearch URL: %v", err)
-	}
-	if u.User != nil {
-		s.Username = u.User.Username()
-		s.Password, _ = u.User.Password()
-		u.User = nil
-
-		// Re-write URL without credentials.
-		s.URL = u.String()
-	}
-
 	client := &Client{
 		Connection: Connection{
 			URL: s.URL,
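The relocated block above is plain net/url handling: credentials embedded in the Elasticsearch URL are copied into the client settings and the URL is rewritten without them, now before the dialers are built. A standalone sketch of that handling (the URL is made up):

[source,go]
----
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Hypothetical URL with embedded credentials.
	u, err := url.Parse("https://beats:secret@es.example.com:9200")
	if err != nil {
		panic(err)
	}

	username, password := "", ""
	if u.User != nil {
		username = u.User.Username()
		password, _ = u.User.Password()
		u.User = nil // re-write the URL without credentials
	}

	fmt.Println(u.String(), username, password)
	// prints: https://es.example.com:9200 beats secret
}
----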
6  vendor/github.com/elastic/beats/libbeat/processors/actions/decode_json_fields.go (generated, vendored)
@@ -21,8 +21,8 @@ type decodeJSONFields struct {

 type config struct {
 	Fields       []string `config:"fields"`
-	MaxDepth     int      `config:"maxDepth" validate:"min=1"`
-	ProcessArray bool     `config:"processArray"`
+	MaxDepth     int      `config:"max_depth" validate:"min=1"`
+	ProcessArray bool     `config:"process_array"`
 }

 var (
@@ -38,7 +38,7 @@ func init() {
 	processors.RegisterPlugin("decode_json_fields",
 		configChecked(newDecodeJSONFields,
 			requireFields("fields"),
-			allowedFields("fields", "maxDepth", "processArray")))
+			allowedFields("fields", "max_depth", "process_array")))
 }

 func newDecodeJSONFields(c common.Config) (processors.Processor, error) {
@@ -90,8 +90,8 @@ func TestValidJSONDepthTwo(t *testing.T) {

 	testConfig, _ = common.NewConfigFrom(map[string]interface{}{
 		"fields":        fields,
-		"processArray": false,
-		"maxDepth":     2,
+		"process_array": false,
+		"max_depth":     2,
 	})

 	actual := getActualValue(t, testConfig, input)
20
vendor/github.com/elastic/beats/metricbeat/_meta/beat.full.yml
generated
vendored
20
vendor/github.com/elastic/beats/metricbeat/_meta/beat.full.yml
generated
vendored
@ -94,6 +94,26 @@ metricbeat.modules:
|
|||||||
#period: 10s
|
#period: 10s
|
||||||
#hosts: ["localhost:9092"]
|
#hosts: ["localhost:9092"]
|
||||||
|
|
||||||
|
#client_id: metricbeat
|
||||||
|
#retries: 3
|
||||||
|
#backoff: 250ms
|
||||||
|
|
||||||
|
# List of Topics to query metadata for. If empty, all topics will be queried.
|
||||||
|
#topics: []
|
||||||
|
|
||||||
|
# Optional SSL. By default is off.
|
||||||
|
# List of root certificates for HTTPS server verifications
|
||||||
|
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
|
||||||
|
|
||||||
|
# Certificate for SSL client authentication
|
||||||
|
#ssl.certificate: "/etc/pki/client/cert.pem"
|
||||||
|
|
||||||
|
# Client Certificate Key
|
||||||
|
#ssl.key: "/etc/pki/client/cert.key"
|
||||||
|
|
||||||
|
# SASL authentication
|
||||||
|
#username: ""
|
||||||
|
#password: ""
|
||||||
|
|
||||||
#------------------------------- MongoDB Module ------------------------------
|
#------------------------------- MongoDB Module ------------------------------
|
||||||
#- module: mongodb
|
#- module: mongodb
|
||||||
|
@ -1,10 +1,10 @@
|
|||||||
{
|
{
|
||||||
"visState": "{\"title\":\"Docker CPU usage\",\"type\":\"area\",\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"defaultYExtents\":false,\"interpolate\":\"linear\",\"legendPosition\":\"top\",\"mode\":\"stacked\",\"scale\":\"linear\",\"setYExtents\":false,\"shareYAxis\":true,\"smoothLines\":true,\"times\":[],\"yAxis\":{}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"percentiles\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.cpu.usage.total\",\"percents\":[75],\"customLabel\":\"Total CPU time\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"docker.container.name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1.75\",\"customLabel\":\"Container name\"}}],\"listeners\":{}}",
|
"visState": "{\n \"title\": \"Docker CPU usage\",\n \"type\": \"area\",\n \"params\": {\n \"addLegend\": true,\n \"addTimeMarker\": false,\n \"addTooltip\": true,\n \"defaultYExtents\": false,\n \"interpolate\": \"linear\",\n \"legendPosition\": \"top\",\n \"mode\": \"stacked\",\n \"scale\": \"linear\",\n \"setYExtents\": false,\n \"shareYAxis\": true,\n \"smoothLines\": true,\n \"times\": [],\n \"yAxis\": {}\n },\n \"aggs\": [\n {\n \"id\": \"1\",\n \"enabled\": true,\n \"type\": \"percentiles\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.cpu.total.pct\",\n \"percents\": [\n 75\n ],\n \"customLabel\": \"Total CPU time\"\n }\n },\n {\n \"id\": \"2\",\n \"enabled\": true,\n \"type\": \"date_histogram\",\n \"schema\": \"segment\",\n \"params\": {\n \"field\": \"@timestamp\",\n \"interval\": \"auto\",\n \"customInterval\": \"2h\",\n \"min_doc_count\": 1,\n \"extended_bounds\": {}\n }\n },\n {\n \"id\": \"3\",\n \"enabled\": true,\n \"type\": \"terms\",\n \"schema\": \"group\",\n \"params\": {\n \"field\": \"docker.container.name\",\n \"size\": 5,\n \"order\": \"desc\",\n \"orderBy\": \"1.75\",\n \"customLabel\": \"Container name\"\n }\n }\n ],\n \"listeners\": {}\n}",
|
||||||
"description": "",
|
"description": "",
|
||||||
"title": "Docker CPU usage",
|
"title": "Docker CPU usage",
|
||||||
"uiStateJSON": "{}",
|
"uiStateJSON": "{}",
|
||||||
"version": 1,
|
"version": 1,
|
||||||
"kibanaSavedObjectMeta": {
|
"kibanaSavedObjectMeta": {
|
||||||
"searchSourceJSON": "{\"filter\":[],\"index\":\"metricbeat-*\",\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"require_field_match\":false,\"fragment_size\":2147483647},\"query\":{\"query_string\":{\"query\":\"metricset.module:docker AND metricset.name:cpu\",\"analyze_wildcard\":true}}}"
|
"searchSourceJSON": "{\n \"filter\": [],\n \"index\": \"metricbeat-*\",\n \"highlight\": {\n \"pre_tags\": [\n \"@kibana-highlighted-field@\"\n ],\n \"post_tags\": [\n \"@/kibana-highlighted-field@\"\n ],\n \"fields\": {\n \"*\": {}\n },\n \"require_field_match\": false,\n \"fragment_size\": 2147483647\n },\n \"query\": {\n \"query_string\": {\n \"query\": \"metricset.module:docker AND metricset.name:cpu\",\n \"analyze_wildcard\": true\n }\n }\n}"
|
||||||
}
|
}
|
||||||
}
|
}
|
@ -1,11 +1,11 @@
|
|||||||
{
|
{
|
||||||
"visState": "{\"title\":\"Docker containers\",\"type\":\"table\",\"params\":{\"perPage\":8,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":true,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"docker.container.name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Name\"}},{\"id\":\"3\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.cpu.usage.total\",\"customLabel\":\"CPU usage\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.diskio.total\",\"customLabel\":\"DiskIO\"}},{\"id\":\"5\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.memory.usage.pct\",\"customLabel\":\"Mem (%)\"}},{\"id\":\"6\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.memory.rss.total\",\"customLabel\":\"Mem RSS\"}},{\"id\":\"1\",\"enabled\":true,\"type\":\"cardinality\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.container.id\",\"customLabel\":\"Number of Containers\"}}],\"listeners\":{}}",
|
"visState": "{\n \"title\": \"Docker containers\",\n \"type\": \"table\",\n \"params\": {\n \"perPage\": 8,\n \"showMeticsAtAllLevels\": false,\n \"showPartialRows\": false,\n \"showTotal\": true,\n \"sort\": {\n \"columnIndex\": null,\n \"direction\": null\n },\n \"totalFunc\": \"sum\"\n },\n \"aggs\": [\n {\n \"id\": \"2\",\n \"enabled\": true,\n \"type\": \"terms\",\n \"schema\": \"bucket\",\n \"params\": {\n \"field\": \"docker.container.name\",\n \"size\": 5,\n \"order\": \"desc\",\n \"orderBy\": \"1\",\n \"customLabel\": \"Name\"\n }\n },\n {\n \"id\": \"3\",\n \"enabled\": true,\n \"type\": \"max\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.cpu.total.pct\",\n \"customLabel\": \"CPU usage (%)\"\n }\n },\n {\n \"id\": \"4\",\n \"enabled\": true,\n \"type\": \"max\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.diskio.total\",\n \"customLabel\": \"DiskIO\"\n }\n },\n {\n \"id\": \"5\",\n \"enabled\": true,\n \"type\": \"max\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.memory.usage.pct\",\n \"customLabel\": \"Mem (%)\"\n }\n },\n {\n \"id\": \"6\",\n \"enabled\": true,\n \"type\": \"max\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.memory.rss.total\",\n \"customLabel\": \"Mem RSS\"\n }\n },\n {\n \"id\": \"1\",\n \"enabled\": true,\n \"type\": \"cardinality\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.container.id\",\n \"customLabel\": \"Number of Containers\"\n }\n }\n ],\n \"listeners\": {}\n}",
|
||||||
"description": "",
|
"description": "",
|
||||||
"title": "Docker containers",
|
"title": "Docker containers",
|
||||||
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":1,\"direction\":\"asc\"}}}}",
|
"uiStateJSON": "{\n \"vis\": {\n \"params\": {\n \"sort\": {\n \"columnIndex\": 1,\n \"direction\": \"asc\"\n }\n }\n }\n}",
|
||||||
"version": 1,
|
"version": 1,
|
||||||
"savedSearchId": "Metricbeat-Docker",
|
"savedSearchId": "Metricbeat-Docker",
|
||||||
"kibanaSavedObjectMeta": {
|
"kibanaSavedObjectMeta": {
|
||||||
"searchSourceJSON": "{\"filter\":[]}"
|
"searchSourceJSON": "{\n \"filter\": []\n}"
|
||||||
}
|
}
|
||||||
}
|
}
|
59
vendor/github.com/elastic/beats/metricbeat/docs/fields.asciidoc
generated
vendored
59
vendor/github.com/elastic/beats/metricbeat/docs/fields.asciidoc
generated
vendored
@ -1959,7 +1959,14 @@ Oldest offset of the partition.
|
|||||||
|
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
=== kafka.partition.partition
|
== partition Fields
|
||||||
|
|
||||||
|
Partition data.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
[float]
|
||||||
|
=== kafka.partition.partition.id
|
||||||
|
|
||||||
type: long
|
type: long
|
||||||
|
|
||||||
@ -1967,7 +1974,55 @@ Partition id.
|
|||||||
|
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
=== kafka.partition.topic
|
=== kafka.partition.partition.leader
|
||||||
|
|
||||||
|
type: long
|
||||||
|
|
||||||
|
Leader id (broker).
|
||||||
|
|
||||||
|
|
||||||
|
[float]
|
||||||
|
=== kafka.partition.partition.isr
|
||||||
|
|
||||||
|
type: list
|
||||||
|
|
||||||
|
List of isr ids.
|
||||||
|
|
||||||
|
|
||||||
|
[float]
|
||||||
|
=== kafka.partition.partition.replica
|
||||||
|
|
||||||
|
type: long
|
||||||
|
|
||||||
|
Replica id (broker).
|
||||||
|
|
||||||
|
|
||||||
|
[float]
|
||||||
|
=== kafka.partition.partition.insync_replica
|
||||||
|
|
||||||
|
type: boolean
|
||||||
|
|
||||||
|
Indicates if replica is included in the in-sync replicate set (ISR).
|
||||||
|
|
||||||
|
|
||||||
|
[float]
|
||||||
|
=== kafka.partition.partition.error.code
|
||||||
|
|
||||||
|
type: long
|
||||||
|
|
||||||
|
Error code from fetching partition.
|
||||||
|
|
||||||
|
|
||||||
|
[float]
|
||||||
|
=== kafka.partition.topic.error.code
|
||||||
|
|
||||||
|
type: long
|
||||||
|
|
||||||
|
topic error code.
|
||||||
|
|
||||||
|
|
||||||
|
[float]
|
||||||
|
=== kafka.partition.topic.name
|
||||||
|
|
||||||
type: keyword
|
type: keyword
|
||||||
|
|
||||||
|
20
vendor/github.com/elastic/beats/metricbeat/docs/modules/kafka.asciidoc
generated
vendored
20
vendor/github.com/elastic/beats/metricbeat/docs/modules/kafka.asciidoc
generated
vendored
@ -24,6 +24,26 @@ metricbeat.modules:
|
|||||||
#period: 10s
|
#period: 10s
|
||||||
#hosts: ["localhost:9092"]
|
#hosts: ["localhost:9092"]
|
||||||
|
|
||||||
|
#client_id: metricbeat
|
||||||
|
#retries: 3
|
||||||
|
#backoff: 250ms
|
||||||
|
|
||||||
|
# List of Topics to query metadata for. If empty, all topics will be queried.
|
||||||
|
#topics: []
|
||||||
|
|
||||||
|
# Optional SSL. By default is off.
|
||||||
|
# List of root certificates for HTTPS server verifications
|
||||||
|
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
|
||||||
|
|
||||||
|
# Certificate for SSL client authentication
|
||||||
|
#ssl.certificate: "/etc/pki/client/cert.pem"
|
||||||
|
|
||||||
|
# Client Certificate Key
|
||||||
|
#ssl.key: "/etc/pki/client/cert.key"
|
||||||
|
|
||||||
|
# SASL authentication
|
||||||
|
#username: ""
|
||||||
|
#password: ""
|
||||||
----
|
----
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
|
4
vendor/github.com/elastic/beats/metricbeat/docs/version.asciidoc
generated
vendored
4
vendor/github.com/elastic/beats/metricbeat/docs/version.asciidoc
generated
vendored
@ -1,2 +1,2 @@
|
|||||||
:stack-version: 6.0.0-alpha1
|
:stack-version: 5.1.1
|
||||||
:doc-branch: master
|
:doc-branch: 5.1
|
||||||
|
20
vendor/github.com/elastic/beats/metricbeat/metricbeat.full.yml
generated
vendored
20
vendor/github.com/elastic/beats/metricbeat/metricbeat.full.yml
generated
vendored
@ -94,6 +94,26 @@ metricbeat.modules:
|
|||||||
#period: 10s
|
#period: 10s
|
||||||
#hosts: ["localhost:9092"]
|
#hosts: ["localhost:9092"]
|
||||||
|
|
||||||
|
#client_id: metricbeat
|
||||||
|
#retries: 3
|
||||||
|
#backoff: 250ms
|
||||||
|
|
||||||
|
# List of Topics to query metadata for. If empty, all topics will be queried.
|
||||||
|
#topics: []
|
||||||
|
|
||||||
|
# Optional SSL. By default is off.
|
||||||
|
# List of root certificates for HTTPS server verifications
|
||||||
|
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
|
||||||
|
|
||||||
|
# Certificate for SSL client authentication
|
||||||
|
#ssl.certificate: "/etc/pki/client/cert.pem"
|
||||||
|
|
||||||
|
# Client Certificate Key
|
||||||
|
#ssl.key: "/etc/pki/client/cert.key"
|
||||||
|
|
||||||
|
# SASL authentication
|
||||||
|
#username: ""
|
||||||
|
#password: ""
|
||||||
|
|
||||||
#------------------------------- MongoDB Module ------------------------------
|
#------------------------------- MongoDB Module ------------------------------
|
||||||
#- module: mongodb
|
#- module: mongodb
|
||||||
|
33
vendor/github.com/elastic/beats/metricbeat/metricbeat.template-es2x.json
generated
vendored
33
vendor/github.com/elastic/beats/metricbeat/metricbeat.template-es2x.json
generated
vendored
@ -7,7 +7,7 @@
|
|||||||
}
|
}
|
||||||
},
|
},
|
||||||
"_meta": {
|
"_meta": {
|
||||||
"version": "6.0.0-alpha1"
|
"version": "5.1.1"
|
||||||
},
|
},
|
||||||
"dynamic_templates": [
|
"dynamic_templates": [
|
||||||
{
|
{
|
||||||
@ -950,9 +950,38 @@
|
|||||||
}
|
}
|
||||||
},
|
},
|
||||||
"partition": {
|
"partition": {
|
||||||
|
"properties": {
|
||||||
|
"error": {
|
||||||
|
"properties": {
|
||||||
|
"code": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"id": {
|
||||||
"type": "long"
|
"type": "long"
|
||||||
},
|
},
|
||||||
|
"insync_replica": {
|
||||||
|
"type": "boolean"
|
||||||
|
},
|
||||||
|
"leader": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"replica": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
"topic": {
|
"topic": {
|
||||||
|
"properties": {
|
||||||
|
"error": {
|
||||||
|
"properties": {
|
||||||
|
"code": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"name": {
|
||||||
"ignore_above": 1024,
|
"ignore_above": 1024,
|
||||||
"index": "not_analyzed",
|
"index": "not_analyzed",
|
||||||
"type": "string"
|
"type": "string"
|
||||||
@ -960,6 +989,8 @@
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
},
|
},
|
||||||
"meta": {
|
"meta": {
|
||||||
"properties": {
|
"properties": {
|
||||||
|
33
vendor/github.com/elastic/beats/metricbeat/metricbeat.template.json
generated
vendored
33
vendor/github.com/elastic/beats/metricbeat/metricbeat.template.json
generated
vendored
@ -5,7 +5,7 @@
|
|||||||
"norms": false
|
"norms": false
|
||||||
},
|
},
|
||||||
"_meta": {
|
"_meta": {
|
||||||
"version": "6.0.0-alpha1"
|
"version": "5.1.1"
|
||||||
},
|
},
|
||||||
"dynamic_templates": [
|
"dynamic_templates": [
|
||||||
{
|
{
|
||||||
@ -957,15 +957,46 @@
|
|||||||
}
|
}
|
||||||
},
|
},
|
||||||
"partition": {
|
"partition": {
|
||||||
|
"properties": {
|
||||||
|
"error": {
|
||||||
|
"properties": {
|
||||||
|
"code": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"id": {
|
||||||
"type": "long"
|
"type": "long"
|
||||||
},
|
},
|
||||||
|
"insync_replica": {
|
||||||
|
"type": "boolean"
|
||||||
|
},
|
||||||
|
"leader": {
|
||||||
|
"type": "long"
|
||||||
|
},
|
||||||
|
"replica": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
"topic": {
|
"topic": {
|
||||||
|
"properties": {
|
||||||
|
"error": {
|
||||||
|
"properties": {
|
||||||
|
"code": {
|
||||||
|
"type": "long"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"name": {
|
||||||
"ignore_above": 1024,
|
"ignore_above": 1024,
|
||||||
"type": "keyword"
|
"type": "keyword"
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
},
|
},
|
||||||
"meta": {
|
"meta": {
|
||||||
"properties": {
|
"properties": {
|
||||||
|
@ -1,10 +1,10 @@
|
|||||||
{
|
{
|
||||||
"visState": "{\"title\":\"Docker CPU usage\",\"type\":\"area\",\"params\":{\"addLegend\":true,\"addTimeMarker\":false,\"addTooltip\":true,\"defaultYExtents\":false,\"interpolate\":\"linear\",\"legendPosition\":\"top\",\"mode\":\"stacked\",\"scale\":\"linear\",\"setYExtents\":false,\"shareYAxis\":true,\"smoothLines\":true,\"times\":[],\"yAxis\":{}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"percentiles\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.cpu.usage.total\",\"percents\":[75],\"customLabel\":\"Total CPU time\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"docker.container.name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1.75\",\"customLabel\":\"Container name\"}}],\"listeners\":{}}",
|
"visState": "{\n \"title\": \"Docker CPU usage\",\n \"type\": \"area\",\n \"params\": {\n \"addLegend\": true,\n \"addTimeMarker\": false,\n \"addTooltip\": true,\n \"defaultYExtents\": false,\n \"interpolate\": \"linear\",\n \"legendPosition\": \"top\",\n \"mode\": \"stacked\",\n \"scale\": \"linear\",\n \"setYExtents\": false,\n \"shareYAxis\": true,\n \"smoothLines\": true,\n \"times\": [],\n \"yAxis\": {}\n },\n \"aggs\": [\n {\n \"id\": \"1\",\n \"enabled\": true,\n \"type\": \"percentiles\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.cpu.total.pct\",\n \"percents\": [\n 75\n ],\n \"customLabel\": \"Total CPU time\"\n }\n },\n {\n \"id\": \"2\",\n \"enabled\": true,\n \"type\": \"date_histogram\",\n \"schema\": \"segment\",\n \"params\": {\n \"field\": \"@timestamp\",\n \"interval\": \"auto\",\n \"customInterval\": \"2h\",\n \"min_doc_count\": 1,\n \"extended_bounds\": {}\n }\n },\n {\n \"id\": \"3\",\n \"enabled\": true,\n \"type\": \"terms\",\n \"schema\": \"group\",\n \"params\": {\n \"field\": \"docker.container.name\",\n \"size\": 5,\n \"order\": \"desc\",\n \"orderBy\": \"1.75\",\n \"customLabel\": \"Container name\"\n }\n }\n ],\n \"listeners\": {}\n}",
|
||||||
"description": "",
|
"description": "",
|
||||||
"title": "Docker CPU usage",
|
"title": "Docker CPU usage",
|
||||||
"uiStateJSON": "{}",
|
"uiStateJSON": "{}",
|
||||||
"version": 1,
|
"version": 1,
|
||||||
"kibanaSavedObjectMeta": {
|
"kibanaSavedObjectMeta": {
|
||||||
"searchSourceJSON": "{\"filter\":[],\"index\":\"metricbeat-*\",\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"require_field_match\":false,\"fragment_size\":2147483647},\"query\":{\"query_string\":{\"query\":\"metricset.module:docker AND metricset.name:cpu\",\"analyze_wildcard\":true}}}"
|
"searchSourceJSON": "{\n \"filter\": [],\n \"index\": \"metricbeat-*\",\n \"highlight\": {\n \"pre_tags\": [\n \"@kibana-highlighted-field@\"\n ],\n \"post_tags\": [\n \"@/kibana-highlighted-field@\"\n ],\n \"fields\": {\n \"*\": {}\n },\n \"require_field_match\": false,\n \"fragment_size\": 2147483647\n },\n \"query\": {\n \"query_string\": {\n \"query\": \"metricset.module:docker AND metricset.name:cpu\",\n \"analyze_wildcard\": true\n }\n }\n}"
|
||||||
}
|
}
|
||||||
}
|
}
|
@ -1,11 +1,11 @@
|
|||||||
{
|
{
|
||||||
"visState": "{\"title\":\"Docker containers\",\"type\":\"table\",\"params\":{\"perPage\":8,\"showMeticsAtAllLevels\":false,\"showPartialRows\":false,\"showTotal\":true,\"sort\":{\"columnIndex\":null,\"direction\":null},\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"docker.container.name\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"customLabel\":\"Name\"}},{\"id\":\"3\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.cpu.usage.total\",\"customLabel\":\"CPU usage\"}},{\"id\":\"4\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.diskio.total\",\"customLabel\":\"DiskIO\"}},{\"id\":\"5\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.memory.usage.pct\",\"customLabel\":\"Mem (%)\"}},{\"id\":\"6\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.memory.rss.total\",\"customLabel\":\"Mem RSS\"}},{\"id\":\"1\",\"enabled\":true,\"type\":\"cardinality\",\"schema\":\"metric\",\"params\":{\"field\":\"docker.container.id\",\"customLabel\":\"Number of Containers\"}}],\"listeners\":{}}",
|
"visState": "{\n \"title\": \"Docker containers\",\n \"type\": \"table\",\n \"params\": {\n \"perPage\": 8,\n \"showMeticsAtAllLevels\": false,\n \"showPartialRows\": false,\n \"showTotal\": true,\n \"sort\": {\n \"columnIndex\": null,\n \"direction\": null\n },\n \"totalFunc\": \"sum\"\n },\n \"aggs\": [\n {\n \"id\": \"2\",\n \"enabled\": true,\n \"type\": \"terms\",\n \"schema\": \"bucket\",\n \"params\": {\n \"field\": \"docker.container.name\",\n \"size\": 5,\n \"order\": \"desc\",\n \"orderBy\": \"1\",\n \"customLabel\": \"Name\"\n }\n },\n {\n \"id\": \"3\",\n \"enabled\": true,\n \"type\": \"max\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.cpu.total.pct\",\n \"customLabel\": \"CPU usage (%)\"\n }\n },\n {\n \"id\": \"4\",\n \"enabled\": true,\n \"type\": \"max\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.diskio.total\",\n \"customLabel\": \"DiskIO\"\n }\n },\n {\n \"id\": \"5\",\n \"enabled\": true,\n \"type\": \"max\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.memory.usage.pct\",\n \"customLabel\": \"Mem (%)\"\n }\n },\n {\n \"id\": \"6\",\n \"enabled\": true,\n \"type\": \"max\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.memory.rss.total\",\n \"customLabel\": \"Mem RSS\"\n }\n },\n {\n \"id\": \"1\",\n \"enabled\": true,\n \"type\": \"cardinality\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"docker.container.id\",\n \"customLabel\": \"Number of Containers\"\n }\n }\n ],\n \"listeners\": {}\n}",
|
||||||
"description": "",
|
"description": "",
|
||||||
"title": "Docker containers",
|
"title": "Docker containers",
|
||||||
"uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":1,\"direction\":\"asc\"}}}}",
|
"uiStateJSON": "{\n \"vis\": {\n \"params\": {\n \"sort\": {\n \"columnIndex\": 1,\n \"direction\": \"asc\"\n }\n }\n }\n}",
|
||||||
"version": 1,
|
"version": 1,
|
||||||
"savedSearchId": "Metricbeat-Docker",
|
"savedSearchId": "Metricbeat-Docker",
|
||||||
"kibanaSavedObjectMeta": {
|
"kibanaSavedObjectMeta": {
|
||||||
"searchSourceJSON": "{\"filter\":[]}"
|
"searchSourceJSON": "{\n \"filter\": []\n}"
|
||||||
}
|
}
|
||||||
}
|
}
|
20  vendor/github.com/elastic/beats/metricbeat/module/kafka/_meta/config.yml (generated, vendored)
@@ -4,3 +4,23 @@
   #period: 10s
   #hosts: ["localhost:9092"]

+  #client_id: metricbeat
+  #retries: 3
+  #backoff: 250ms
+
+  # List of Topics to query metadata for. If empty, all topics will be queried.
+  #topics: []
+
+  # Optional SSL. By default is off.
+  # List of root certificates for HTTPS server verifications
+  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+  # Certificate for SSL client authentication
+  #ssl.certificate: "/etc/pki/client/cert.pem"
+
+  # Client Certificate Key
+  #ssl.key: "/etc/pki/client/cert.key"
+
+  # SASL authentication
+  #username: ""
+  #password: ""
16  vendor/github.com/elastic/beats/metricbeat/module/kafka/partition/_meta/data.json (generated, vendored)
@@ -11,14 +11,18 @@
       "id": 0
     },
     "offset": {
-      "newest": 13,
+      "newest": 11,
       "oldest": 0
     },
-    "partition": 0,
-    "replicas": [
-      0
-    ],
-    "topic": "testtopic"
+    "partition": {
+      "id": 0,
+      "insync_replica": true,
+      "leader": 0,
+      "replica": 0
+    },
+    "topic": {
+      "name": "test-metricbeat-70692474374989458"
+    }
   }
  },
  "metricset": {
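For orientation, the nested sample document above corresponds to a Go structure along these lines; plain maps stand in for libbeat's common.MapStr, and the values are copied from the sample:

[source,go]
----
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	event := map[string]interface{}{
		"offset": map[string]interface{}{"newest": 11, "oldest": 0},
		"partition": map[string]interface{}{
			"id":             0,
			"insync_replica": true,
			"leader":         0,
			"replica":        0,
		},
		"topic": map[string]interface{}{"name": "test-metricbeat-70692474374989458"},
	}

	out, _ := json.MarshalIndent(event, "", "  ")
	fmt.Println(string(out))
}
----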
|
39
vendor/github.com/elastic/beats/metricbeat/module/kafka/partition/_meta/fields.yml
generated
vendored
39
vendor/github.com/elastic/beats/metricbeat/module/kafka/partition/_meta/fields.yml
generated
vendored
@ -16,14 +16,49 @@
|
|||||||
type: long
|
type: long
|
||||||
description: >
|
description: >
|
||||||
Oldest offset of the partition.
|
Oldest offset of the partition.
|
||||||
|
|
||||||
- name: partition
|
- name: partition
|
||||||
|
type: group
|
||||||
|
description: >
|
||||||
|
Partition data.
|
||||||
|
fields:
|
||||||
|
- name: id
|
||||||
type: long
|
type: long
|
||||||
description: >
|
description: >
|
||||||
Partition id.
|
Partition id.
|
||||||
- name: topic
|
|
||||||
|
- name: leader
|
||||||
|
type: long
|
||||||
|
description: >
|
||||||
|
Leader id (broker).
|
||||||
|
- name: isr
|
||||||
|
type: list
|
||||||
|
description: >
|
||||||
|
List of isr ids.
|
||||||
|
- name: replica
|
||||||
|
type: long
|
||||||
|
description: >
|
||||||
|
Replica id (broker).
|
||||||
|
|
||||||
|
- name: insync_replica
|
||||||
|
type: boolean
|
||||||
|
description: >
|
||||||
|
Indicates if replica is included in the in-sync replicate set (ISR).
|
||||||
|
|
||||||
|
- name: error.code
|
||||||
|
type: long
|
||||||
|
description: >
|
||||||
|
Error code from fetching partition.
|
||||||
|
|
||||||
|
- name: topic.error.code
|
||||||
|
type: long
|
||||||
|
description: >
|
||||||
|
topic error code.
|
||||||
|
- name: topic.name
|
||||||
type: keyword
|
type: keyword
|
||||||
description: >
|
description: >
|
||||||
Topic name
|
Topic name
|
||||||
|
|
||||||
- name: broker.id
|
- name: broker.id
|
||||||
type: long
|
type: long
|
||||||
description: >
|
description: >
|
||||||
@ -32,3 +67,5 @@
|
|||||||
type: keyword
|
type: keyword
|
||||||
description: >
|
description: >
|
||||||
Broker address
|
Broker address
|
||||||
|
|
||||||
|
|
||||||
|
38  vendor/github.com/elastic/beats/metricbeat/module/kafka/partition/config.go (generated, vendored, new file)
@@ -0,0 +1,38 @@
+package partition
+
+import (
+	"fmt"
+	"time"
+
+	"github.com/elastic/beats/libbeat/outputs"
+)
+
+type connConfig struct {
+	Retries  int                `config:"retries" validate:"min=0"`
+	Backoff  time.Duration      `config:"backoff" validate:"min=0"`
+	TLS      *outputs.TLSConfig `config:"ssl"`
+	Username string             `config:"username"`
+	Password string             `config:"password"`
+	ClientID string             `config:"client_id"`
+	Topics   []string           `config:"topics"`
+}
+
+type metaConfig struct {
+}
+
+var defaultConfig = connConfig{
+	Retries:  3,
+	Backoff:  250 * time.Millisecond,
+	TLS:      nil,
+	Username: "",
+	Password: "",
+	ClientID: "metricbeat",
+}
+
+func (c *connConfig) Validate() error {
+	if c.Username != "" && c.Password == "" {
+		return fmt.Errorf("password must be set when username is configured")
+	}
+
+	return nil
+}
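A small illustration of the Validate rule in the new config.go, written as a sketch of a test in the same package; it is not part of the commit, and the username value is made up:

[source,go]
----
package partition

import "testing"

// Sketch: a config with a username but no password should be rejected by Validate.
func TestValidateRequiresPassword(t *testing.T) {
	cfg := defaultConfig
	cfg.Username = "metricbeat" // password intentionally left empty

	if err := cfg.Validate(); err == nil {
		t.Fatal("expected an error when username is set without a password")
	}
}
----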
305
vendor/github.com/elastic/beats/metricbeat/module/kafka/partition/partition.go
generated
vendored
305
vendor/github.com/elastic/beats/metricbeat/module/kafka/partition/partition.go
generated
vendored
@ -1,8 +1,14 @@
|
|||||||
package partition
|
package partition
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"errors"
|
||||||
|
"fmt"
|
||||||
|
"io"
|
||||||
|
"time"
|
||||||
|
|
||||||
"github.com/elastic/beats/libbeat/common"
|
"github.com/elastic/beats/libbeat/common"
|
||||||
"github.com/elastic/beats/libbeat/logp"
|
"github.com/elastic/beats/libbeat/logp"
|
||||||
|
"github.com/elastic/beats/libbeat/outputs"
|
||||||
"github.com/elastic/beats/metricbeat/mb"
|
"github.com/elastic/beats/metricbeat/mb"
|
||||||
"github.com/elastic/beats/metricbeat/mb/parse"
|
"github.com/elastic/beats/metricbeat/mb/parse"
|
||||||
|
|
||||||
@ -19,75 +25,300 @@ func init() {
|
|||||||
// MetricSet type defines all fields of the partition MetricSet
|
// MetricSet type defines all fields of the partition MetricSet
|
||||||
type MetricSet struct {
|
type MetricSet struct {
|
||||||
mb.BaseMetricSet
|
mb.BaseMetricSet
|
||||||
client sarama.Client
|
|
||||||
|
broker *sarama.Broker
|
||||||
|
cfg *sarama.Config
|
||||||
|
id int32
|
||||||
|
topics []string
|
||||||
}
|
}
|
||||||
|
|
||||||
// New creates a new instance of the partition MetricSet
|
var noID int32 = -1
|
||||||
func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
|
|
||||||
logp.Warn("EXPERIMENTAL: The %v %v metricset is experimental", base.Module().Name(), base.Name())
|
|
||||||
|
|
||||||
return &MetricSet{BaseMetricSet: base}, nil
|
var errFailQueryOffset = errors.New("operation failed")
|
||||||
|
|
||||||
|
// New create a new instance of the partition MetricSet
|
||||||
|
func New(base mb.BaseMetricSet) (mb.MetricSet, error) {
|
||||||
|
config := defaultConfig
|
||||||
|
if err := base.Module().UnpackConfig(&config); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
tls, err := outputs.LoadTLSConfig(config.TLS)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
cfg := sarama.NewConfig()
|
||||||
|
cfg.Net.DialTimeout = base.Module().Config().Timeout
|
||||||
|
cfg.Net.ReadTimeout = base.Module().Config().Timeout
|
||||||
|
cfg.ClientID = config.ClientID
|
||||||
|
cfg.Metadata.Retry.Max = config.Retries
|
||||||
|
cfg.Metadata.Retry.Backoff = config.Backoff
|
||||||
|
if tls != nil {
|
||||||
|
cfg.Net.TLS.Enable = true
|
||||||
|
cfg.Net.TLS.Config = tls.BuildModuleConfig("")
|
||||||
|
}
|
||||||
|
if config.Username != "" {
|
||||||
|
cfg.Net.SASL.Enable = true
|
||||||
|
cfg.Net.SASL.User = config.Username
|
||||||
|
cfg.Net.SASL.Password = config.Password
|
||||||
|
}
|
||||||
|
|
||||||
|
broker := sarama.NewBroker(base.Host())
|
||||||
|
return &MetricSet{
|
||||||
|
BaseMetricSet: base,
|
||||||
|
broker: broker,
|
||||||
|
cfg: cfg,
|
||||||
|
id: noID,
|
||||||
|
topics: config.Topics,
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
|
|
||||||
+func (m *MetricSet) connect() (*sarama.Broker, error) {
+	b := m.broker
+	if err := b.Open(m.cfg); err != nil {
+		return nil, err
+	}
+
+	if m.id != noID {
+		return b, nil
+	}
+
+	// current broker is bootstrap only. Get metadata to find id:
+	meta, err := queryMetadataWithRetry(b, m.cfg, m.topics)
+	if err != nil {
+		closeBroker(b)
+		return nil, err
+	}
+
+	addr := b.Addr()
+	for _, other := range meta.Brokers {
+		if other.Addr() == addr {
+			m.id = other.ID()
+			break
+		}
+	}
+
+	if m.id == noID {
+		closeBroker(b)
+		err = fmt.Errorf("No advertised broker with address %v found", addr)
+		return nil, err
+	}
+
+	return b, nil
 }
 
 // Fetch partition stats list from kafka
 func (m *MetricSet) Fetch() ([]common.MapStr, error) {
-	if m.client == nil {
-		config := sarama.NewConfig()
-		config.Net.DialTimeout = m.Module().Config().Timeout
-		config.Net.ReadTimeout = m.Module().Config().Timeout
-		config.ClientID = "metricbeat"
-
-		client, err := sarama.NewClient([]string{m.Host()}, config)
-		if err != nil {
-			return nil, err
-		}
-		m.client = client
-	}
-
-	topics, err := m.client.Topics()
+	b, err := m.connect()
 	if err != nil {
 		return nil, err
 	}
 
+	defer closeBroker(b)
+	response, err := queryMetadataWithRetry(b, m.cfg, m.topics)
+	if err != nil {
+		return nil, err
+	}
+
 	events := []common.MapStr{}
-	for _, topic := range topics {
-		partitions, err := m.client.Partitions(topic)
-		if err != nil {
-			logp.Err("Fetch partition info for topic %s: %s", topic, err)
-		}
-
-		for _, partition := range partitions {
-			newestOffset, err := m.client.GetOffset(topic, partition, sarama.OffsetNewest)
-			if err != nil {
-				logp.Err("Fetching newest offset information for partition %s in topic %s: %s", partition, topic, err)
-			}
-
-			oldestOffset, err := m.client.GetOffset(topic, partition, sarama.OffsetOldest)
-			if err != nil {
-				logp.Err("Fetching oldest offset information for partition %s in topic %s: %s", partition, topic, err)
-			}
-
-			broker, err := m.client.Leader(topic, partition)
-			if err != nil {
-				logp.Err("Fetching brocker for partition %s in topic %s: %s", partition, topic, err)
-			}
-
-			event := common.MapStr{
-				"topic":     topic,
-				"partition": partition,
-				"offset": common.MapStr{
-					"oldest": oldestOffset,
-					"newest": newestOffset,
-				},
-				"broker": common.MapStr{
-					"id":      broker.ID(),
-					"address": broker.Addr(),
-				},
-			}
-
-			events = append(events, event)
-		}
-	}
+	evtBroker := common.MapStr{
+		"id":      m.id,
+		"address": b.Addr(),
+	}
+
+	for _, topic := range response.Topics {
+		evtTopic := common.MapStr{
+			"name": topic.Name,
+		}
+
+		if topic.Err != 0 {
+			evtTopic["error"] = common.MapStr{
+				"code": topic.Err,
+			}
+		}
+
+		for _, partition := range topic.Partitions {
+			// partition offsets can be queried from leader only
+			if m.id != partition.Leader {
+				continue
+			}
+
+			// collect offsets for all replicas
+			for _, id := range partition.Replicas {
+				// Get oldest and newest available offsets
+				offOldest, offNewest, offOK, err := queryOffsetRange(b, id, topic.Name, partition.ID)
+
+				if !offOK {
+					if err == nil {
+						err = errFailQueryOffset
+					}
+
+					logp.Err("Failed to query kafka partition (%v:%v) offsets: %v",
+						topic.Name, partition.ID, err)
+					continue
+				}
+
+				partitionEvent := common.MapStr{
+					"id":             partition.ID,
+					"leader":         partition.Leader,
+					"replica":        id,
+					"insync_replica": hasID(id, partition.Isr),
+				}
+
+				if partition.Err != 0 {
+					partitionEvent["error"] = common.MapStr{
+						"code": partition.Err,
+					}
+				}
+
+				// create event
+				event := common.MapStr{
+					"topic":     evtTopic,
+					"broker":    evtBroker,
+					"partition": partitionEvent,
+					"offset": common.MapStr{
+						"newest": offNewest,
+						"oldest": offOldest,
+					},
+				}
+
+				events = append(events, event)
+			}
+		}
+	}
 
 	return events, nil
 }
 
+func hasID(id int32, lst []int32) bool {
+	for _, other := range lst {
+		if id == other {
+			return true
+		}
+	}
+	return false
+}
+
+// queryOffsetRange queries the broker for the oldest and the newest offsets in
+// a kafka topics partition for a given replica.
+func queryOffsetRange(
+	b *sarama.Broker,
+	replicaID int32,
+	topic string,
+	partition int32,
+) (int64, int64, bool, error) {
+	oldest, okOld, err := queryOffset(b, replicaID, topic, partition, sarama.OffsetOldest)
+	if err != nil {
+		return -1, -1, false, err
+	}
+
+	newest, okNew, err := queryOffset(b, replicaID, topic, partition, sarama.OffsetNewest)
+	if err != nil {
+		return -1, -1, false, err
+	}
+
+	return oldest, newest, okOld && okNew, nil
+}
+
+func queryOffset(
+	b *sarama.Broker,
+	replicaID int32,
+	topic string,
+	partition int32,
+	time int64,
+) (int64, bool, error) {
+	req := &sarama.OffsetRequest{}
+	if replicaID != noID {
+		req.SetReplicaID(replicaID)
+	}
+	req.AddBlock(topic, partition, time, 1)
+	resp, err := b.GetAvailableOffsets(req)
+	if err != nil {
+		return -1, false, err
+	}
+
+	block := resp.GetBlock(topic, partition)
+	if len(block.Offsets) == 0 {
+		return -1, false, nil
+	}
+
+	return block.Offsets[0], true, nil
+}
+
+func closeBroker(b *sarama.Broker) {
+	if ok, _ := b.Connected(); ok {
+		b.Close()
+	}
+}
+
+func queryMetadataWithRetry(
+	b *sarama.Broker,
+	cfg *sarama.Config,
+	topics []string,
+) (r *sarama.MetadataResponse, err error) {
+	err = withRetry(b, cfg, func() (e error) {
+		r, e = b.GetMetadata(&sarama.MetadataRequest{topics})
+		return
+	})
+	return
+}
+
+func withRetry(
+	b *sarama.Broker,
+	cfg *sarama.Config,
+	f func() error,
+) error {
+	var err error
+	for max := 0; max < cfg.Metadata.Retry.Max; max++ {
+		if ok, _ := b.Connected(); !ok {
+			if err = b.Open(cfg); err == nil {
+				err = f()
+			}
+		} else {
+			err = f()
+		}
+
+		if err == nil {
+			return nil
+		}
+
+		retry, reconnect := checkRetryQuery(err)
+		if !retry {
+			return err
+		}
+
+		time.Sleep(cfg.Metadata.Retry.Backoff)
+		if reconnect {
+			closeBroker(b)
+		}
+	}
+	return err
+}
+
+func checkRetryQuery(err error) (retry, reconnect bool) {
+	if err == nil {
+		return false, false
+	}
+
+	if err == io.EOF {
+		return true, true
+	}
+
+	k, ok := err.(sarama.KError)
+	if !ok {
+		return false, false
+	}
+
+	switch k {
+	case sarama.ErrLeaderNotAvailable, sarama.ErrReplicaNotAvailable,
+		sarama.ErrOffsetsLoadInProgress, sarama.ErrRebalanceInProgress:
+		return true, false
+	case sarama.ErrRequestTimedOut, sarama.ErrBrokerNotAvailable,
+		sarama.ErrNetworkException:
+		return true, true
+	}
+
+	return false, false
+}
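The rewritten New above reads a defaultConfig value and fields such as config.ClientID, config.Retries, config.Backoff, config.TLS, config.Username, config.Password and config.Topics from the metricset's config file, which is not part of this diff. A minimal sketch of what that struct presumably looks like, inferred only from the fields used in the hunk above (struct name, config keys and default values are assumptions):

// Hypothetical sketch only -- not part of this commit. Field names follow
// their uses in New() above; config keys and defaults are guesses.
package partition

import (
	"time"

	"github.com/elastic/beats/libbeat/outputs"
)

type metricsetConfig struct {
	Retries  int                `config:"retries"`
	Backoff  time.Duration      `config:"backoff"`
	TLS      *outputs.TLSConfig `config:"ssl"`
	Username string             `config:"username"`
	Password string             `config:"password"`
	ClientID string             `config:"client_id"`
	Topics   []string           `config:"topics"`
}

var defaultConfig = metricsetConfig{
	ClientID: "metricbeat",
	Retries:  3,
	Backoff:  250 * time.Millisecond,
}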
@ -64,13 +64,13 @@ func TestTopic(t *testing.T) {
 
 	// Its possible that other topics exists -> select the right data
 	for _, data := range dataBefore {
-		if data["topic"] == testTopic {
+		if data["topic"].(common.MapStr)["name"] == testTopic {
 			offsetBefore = data["offset"].(common.MapStr)["newest"].(int64)
 		}
 	}
 
 	for _, data := range dataAfter {
-		if data["topic"] == testTopic {
+		if data["topic"].(common.MapStr)["name"] == testTopic {
 			offsetAfter = data["offset"].(common.MapStr)["newest"].(int64)
 		}
 	}
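The test change above follows from the new event layout: topic is now a nested map with a name key instead of a plain string. For illustration, a single event as the rewritten Fetch assembles it could look like the sketch below (all values are placeholders, not taken from the commit):

package main

import (
	"fmt"

	"github.com/elastic/beats/libbeat/common"
)

func main() {
	// Placeholder values; the shape mirrors the event construction in Fetch() above.
	event := common.MapStr{
		"topic":  common.MapStr{"name": "test-topic"},
		"broker": common.MapStr{"id": int32(0), "address": "localhost:9092"},
		"partition": common.MapStr{
			"id":             int32(0),
			"leader":         int32(0),
			"replica":        int32(0),
			"insync_replica": true,
		},
		"offset": common.MapStr{
			"newest": int64(10),
			"oldest": int64(0),
		},
	}
	fmt.Println(event)
}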
4
vendor/github.com/elastic/beats/packetbeat/docs/version.asciidoc
generated
vendored
@ -1,2 +1,2 @@
-:stack-version: 6.0.0-alpha1
-:doc-branch: master
+:stack-version: 5.1.1
+:doc-branch: 5.1
2
vendor/github.com/elastic/beats/packetbeat/packetbeat.template-es2x.json
generated
vendored
@ -7,7 +7,7 @@
     }
   },
   "_meta": {
-    "version": "6.0.0-alpha1"
+    "version": "5.1.1"
   },
   "dynamic_templates": [
     {
2
vendor/github.com/elastic/beats/packetbeat/packetbeat.template.json
generated
vendored
@ -5,7 +5,7 @@
     "norms": false
   },
   "_meta": {
-    "version": "6.0.0-alpha1"
+    "version": "5.1.1"
   },
   "dynamic_templates": [
     {
@ -2,7 +2,7 @@
 FROM docker.elastic.co/kibana/kibana-ubuntu-base:latest
 MAINTAINER Elastic Docker Team <docker@elastic.co>
 
-ARG KIBANA_DOWNLOAD_URL=https://snapshots.elastic.co/downloads/kibana/kibana-6.0.0-alpha1-SNAPSHOT-linux-x86_64.tar.gz
+ARG KIBANA_DOWNLOAD_URL=https://staging.elastic.co/5.1.1-acecfcb6/downloads/kibana/kibana-5.1.0-linux-x86_64.tar.gz
 ARG X_PACK_URL
 
 EXPOSE 5601
@ -2,8 +2,8 @@ FROM java:8-jre
 
 ENV LS_VERSION 5
 
-ENV VERSION 6.0.0-alpha1-SNAPSHOT
-ENV URL https://snapshots.elastic.co/downloads/logstash/logstash-${VERSION}.tar.gz
+ENV VERSION 5.1.1
+ENV URL https://staging.elastic.co/5.1.1-acecfcb6/downloads/logstash/logstash-${VERSION}.tar.gz
 ENV PATH $PATH:/opt/logstash-$VERSION/bin
 
 # Cache variable can be set during building to invalidate the build cache with `--build-arg CACHE=$(date +%s) .`
6
vendor/github.com/elastic/beats/testing/environments/snapshot.yml
generated
vendored
@ -7,8 +7,8 @@ services:
       context: ./docker/elasticsearch
       dockerfile: Dockerfile-snapshot
       args:
-        ELASTIC_VERSION: 6.0.0-alpha1-SNAPSHOT
-        ES_DOWNLOAD_URL: https://snapshots.elastic.co/downloads/elasticsearch
+        ELASTIC_VERSION: 5.1.1
+        ES_DOWNLOAD_URL: 'https://staging.elastic.co/5.1.1-acecfcb6/downloads/elasticsearch'
         #XPACK: http://snapshots.elastic.co/downloads/packs/x-pack/x-pack-6.0.0-alpha1-SNAPSHOT.zip
     environment:
       - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
@ -26,5 +26,5 @@ services:
       context: ./docker/kibana
       dockerfile: Dockerfile-snapshot
       args: [ ]
-        #KIBANA_DOWNLOAD_URL: https://snapshots.elastic.co/downloads/kibana/kibana-6.0.0-alpha1-SNAPSHOT-linux-x86_64.tar.gz
+        #KIBANA_DOWNLOAD_URL: 'https://staging.elastic.co/5.1.0-1106bba6/downloads/kibana/kibana-5.1.0-linux-x86_64.tar.gz'
         #X_PACK_URL: http://snapshots.elastic.co/downloads/kibana-plugins/x-pack/x-pack-6.0.0-alpha1-SNAPSHOT.zip
16
vendor/github.com/elastic/beats/vendor/github.com/Shopify/sarama/offset_request.go
generated
vendored
@ -22,11 +22,20 @@ func (b *offsetRequestBlock) decode(pd packetDecoder) (err error) {
 }
 
 type OffsetRequest struct {
+	replicaID *int32
 	blocks map[string]map[int32]*offsetRequestBlock
+
+	storeReplicaID int32
 }
 
 func (r *OffsetRequest) encode(pe packetEncoder) error {
-	pe.putInt32(-1) // replica ID is always -1 for clients
+	if r.replicaID == nil {
+		// default replica ID is always -1 for clients
+		pe.putInt32(-1)
+	} else {
+		pe.putInt32(*r.replicaID)
+	}
+
 	err := pe.putArrayLength(len(r.blocks))
 	if err != nil {
 		return err
@ -100,6 +109,11 @@ func (r *OffsetRequest) requiredVersion() KafkaVersion {
 	return minVersion
 }
 
+func (r *OffsetRequest) SetReplicaID(id int32) {
+	r.storeReplicaID = id
+	r.replicaID = &r.storeReplicaID
+}
+
 func (r *OffsetRequest) AddBlock(topic string, partitionID int32, time int64, maxOffsets int32) {
 	if r.blocks == nil {
 		r.blocks = make(map[string]map[int32]*offsetRequestBlock)
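The SetReplicaID accessor added here is what lets the kafka partition metricset above request offsets on behalf of a specific replica instead of the default client replica ID of -1. A small sketch of how it combines with AddBlock and GetAvailableOffsets (the helper name, broker, topic and partition values are illustrative, not part of the commit):

package kafkautil

import "github.com/Shopify/sarama"

// newestOffsetForReplica is an illustrative helper, not part of this commit:
// it asks broker b for the newest offset of topic/partition as seen by the
// given replica, using the SetReplicaID accessor added above.
func newestOffsetForReplica(b *sarama.Broker, replica int32, topic string, partition int32) (int64, error) {
	req := &sarama.OffsetRequest{}
	req.SetReplicaID(replica) // without this call the request encodes replica ID -1 (plain client)
	req.AddBlock(topic, partition, sarama.OffsetNewest, 1)

	resp, err := b.GetAvailableOffsets(req)
	if err != nil {
		return -1, err
	}
	block := resp.GetBlock(topic, partition)
	if block == nil || len(block.Offsets) == 0 {
		return -1, nil
	}
	return block.Offsets[0], nil
}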
@ -55,8 +55,11 @@ winlogbeat.event_logs:
 
 ===== event_logs.batch_read_size
 
+experimental[]
+
 The maximum number of event log records to read from the Windows API in a single
-batch. The default batch size is 100. *{vista_and_newer}*
+batch. The default batch size is 100. Most Windows versions return an error if
+the value is larger than 1024. *{vista_and_newer}*
 
 Winlogbeat starts a goroutine (a lightweight thread) to read from each
 individual event log. The goroutine reads a batch of event log records using the
4
vendor/github.com/elastic/beats/winlogbeat/docs/version.asciidoc
generated
vendored
@ -1,2 +1,2 @@
-:stack-version: 6.0.0-alpha1
-:doc-branch: master
+:stack-version: 5.1.1
+:doc-branch: 5.1
2
vendor/github.com/elastic/beats/winlogbeat/winlogbeat.template-es2x.json
generated
vendored
@ -7,7 +7,7 @@
     }
   },
   "_meta": {
-    "version": "6.0.0-alpha1"
+    "version": "5.1.1"
   },
   "dynamic_templates": [
     {
2
vendor/github.com/elastic/beats/winlogbeat/winlogbeat.template.json
generated
vendored
@ -5,7 +5,7 @@
     "norms": false
   },
   "_meta": {
-    "version": "6.0.0-alpha1"
+    "version": "5.1.1"
  },
   "dynamic_templates": [
     {