Update to libbeat 7.4.2

Blerim Sheqa 2019-11-07 09:44:14 +01:00
parent 9b37630ebc
commit ced805d846
3486 changed files with 664477 additions and 245248 deletions

View File

@@ -32,3 +32,7 @@ indent_style = tab
[Vagrantfile]
indent_size = 2
indent_style = space
[*.rl]
indent_size = 4
indent_style = space

View File

@@ -4,21 +4,21 @@
# * @elastic/beats
# libbeat
/libbeat/ @elastic/beats
/auditbeat/ @elastic/beats
/packetbeat/ @elastic/beats
/filebeat/ @elastic/beats
/metricbeat/ @elastic/beats
/journalbeat/ @elastic/beats
/winlogbeat/ @elastic/beats
# /libbeat/ @elastic/beats
# /auditbeat/ @elastic/beats
# /packetbeat/ @elastic/beats
# /filebeat/ @elastic/beats
# /metricbeat/ @elastic/beats
# /journalbeat/ @elastic/beats
# /winlogbeat/ @elastic/beats
# Auditbeat
/auditbeat/module/ @elastic/secops
/x-pack/auditbeat/ @elastic/secops
/auditbeat/module/ @elastic/siem
/x-pack/auditbeat/ @elastic/siem
# Packetbeat
/packetbeat/protos/ @elastic/secops
/x-pack/packetbeat/ @elastic/secops
/packetbeat/protos/ @elastic/siem
/x-pack/packetbeat/ @elastic/siem
# Filebeat
/filebeat/module/ @elastic/infrastructure
@@ -33,7 +33,11 @@
/metricbeat/module/elasticsearch/ @elastic/stack-monitoring
/metricbeat/module/kibana/ @elastic/stack-monitoring
/metricbeat/module/logstash/ @elastic/stack-monitoring
/metricbeat/module/beat/ @elastic/stack-monitoring
/x-pack/metricbeat/module/ @elastic/infrastructure
# Heartbeat
/heartbeat/ @elastic/uptime
# Winlogbeat
/x-pack/winlogbeat/ @elastic/siem

View File

@@ -10,6 +10,7 @@
*beat/build
*beat/logs
*beat/data
x-pack/functionbeat/pkg
# Files
.DS_Store
@@ -21,6 +22,8 @@ coverage.out
beat.db
*.keystore
mage_output_file.go
x-pack/functionbeat/*/fields.yml
x-pack/functionbeat/provider/*/functionbeat-*
# Editor swap files
*.swp

View File

@@ -1 +1 @@
1.12.4
1.12.9

View File

@@ -150,6 +150,15 @@ jobs:
go: $TRAVIS_GO_VERSION
stage: test
- os: osx
env: TARGETS="-C generator/metricbeat test"
go: $TRAVIS_GO_VERSION
stage: test
- os: osx
env: TARGETS="-C generator/beat test"
go: $TRAVIS_GO_VERSION
stage: test
# Docs
- os: linux
env: TARGETS="docs"

View File

@@ -12,6 +12,65 @@ other Beats should be migrated.
Note: This changelog was only started after the 6.3 release.
=== Beats version 7.4.1
https://github.com/elastic/beats/compare/v7.4.0..v7.4.1[Check the HEAD diff]
=== Beats version 7.4.0
https://github.com/elastic/beats/compare/v7.3.1..v7.4.0[Check the HEAD diff]
==== Breaking changes
- For "metricbeat style" generated custom beats, the mage target `GoTestIntegration` has changed to `GoIntegTest` and `GoTestUnit` has changed to `GoUnitTest`. {pull}13341[13341]
==== Added
- Add ClientFactory to TCP input source to add SplitFunc/NetworkFuncs per client. {pull}8543[8543]
- Introduce beat.OutputChooses publisher mode. {pull}12996[12996]
- Ensure that beat.Processor, beat.ProcessorList, and processors.ProcessorList are compatible and can be composed more easily. {pull}12996[12996]
- Add support to close beat.Client via beat.CloseRef (a subset of context.Context). {pull}13031[13031]
- Add checks for types and formats used in fields definitions in `fields.yml` files. {pull}13188[13188]
- Makefile included in generator copies files from beats repository using `git archive` instead of cp. {pull}13193[13193]
=== Beats version 7.3.2
https://github.com/elastic/beats/compare/v7.3.1..v7.3.2[Check the HEAD diff]
=== Beats version 7.3.1
https://github.com/elastic/beats/compare/v7.3.0..v7.3.1[Check the HEAD diff]
=== Beats version 7.3.0
https://github.com/elastic/beats/compare/v7.2.1..v7.3.0[Check the HEAD diff]
==== Added
- Add new option `IgnoreAllErrors` to `libbeat.common.schema` for skipping fields that failed while converting. {pull}12089[12089]
=== Beats version 7.2.1
https://github.com/elastic/beats/compare/v7.2.0..v7.2.1[Check the HEAD diff]
=== Beats version 7.2.0
https://github.com/elastic/beats/compare/v7.1.1..v7.2.0[Check the HEAD diff]
==== Breaking changes
- Move Fields from package libbeat/common to libbeat/mapping. {pull}11198[11198]
==== Added
- Metricset generator generates beta modules by default now. {pull}10657[10657]
- The `beat.Event` accessor methods now support `@metadata` keys. {pull}10761[10761]
- Assertion for documented fields in tests fails if any of the fields in the tested event is documented as an alias. {pull}10921[10921]
- Support for Logger in the Metricset base instance. {pull}11106[11106]
- Filebeat modules can now use ingest pipelines in YAML format. {pull}11209[11209]
- Prometheus helper for metricbeat now contains a `Namespace` field for `prometheus.MetricsMappings`. {pull}11424[11424]
- Update Jinja2 version to 2.10.1. {pull}11817[11817]
- Reduce idxmgmt.Supporter interface and rework export commands to reuse logic. {pull}11777[11777],{pull}12065[12065],{pull}12067[12067],{pull}12160[12160]
- Update urllib3 version to 1.24.2 {pull}11930[11930]
- Add libbeat/common/cleanup package. {pull}12134[12134]
- Only load minimal template if no fields are provided. {pull}12103[12103]
- Add new option `IgnoreAllErrors` to `libbeat.common.schema` for skipping fields that failed while converting. {pull}12089[12089]
- Deprecate setup cmds for `template` and `ilm-policy`. Add new setup cmd for `index-management`. {pull}12132[12132]
=== Beats version 7.1.1
https://github.com/elastic/beats/compare/v7.1.0..v7.1.1[Check the HEAD diff]

View File

@@ -24,6 +24,8 @@ The list below covers the major changes between 7.0.0-rc2 and master only.
==== Bugfixes
- Stop using `mage:import` in community beats. This was ignoring the vendored beats directory for some mage targets and using the code available in the GOPATH instead, which causes inconsistencies and compilation problems if the version of the code in the GOPATH differs from the vendored one. Use of `mage:import` will continue to be unsupported in custom beats until beats is migrated to go modules, or mage supports vendored dependencies. {issue}13998[13998] {pull}[]
==== Added
- Metricset generator generates beta modules by default now. {pull}10657[10657]
@@ -43,3 +45,4 @@ The list below covers the major changes between 7.0.0-rc2 and master only.
- Use the go-lookslike library for testing in heartbeat. Eventually the mapval package will be replaced with it. {pull}12540[12540]
- New ReporterV2 interfaces that can receive a context on `Fetch(ctx, reporter)`, or `Run(ctx, reporter)`. {pull}11981[11981]
- Generate configuration from `mage` for all Beats. {pull}12618[12618]
- Strip debug symbols from binaries to reduce binary sizes. {issue}12768[12768]

View File

@@ -3,6 +3,631 @@
:issue: https://github.com/elastic/beats/issues/
:pull: https://github.com/elastic/beats/pull/
[[release-notes-7.4.1]]
=== Beats version 7.4.1
https://github.com/elastic/beats/compare/v7.4.0...v7.4.1[View commits]
==== Bugfixes
*Affecting all Beats*
- Recover from panics in the javascript process and log details about the failure to aid in future debugging. {pull}13690[13690]
- Make the script processor concurrency-safe. {issue}13690[13690] {pull}13857[13857]
*Auditbeat*
- Socket dataset: Fix start errors when IPv6 is disabled on the kernel. {issue}13953[13953] {pull}13966[13966]
*Filebeat*
- Fixed early expiration of templates (Netflow v9 and IPFIX). {pull}13821[13821]
- Fixed bad handling of sequence numbers when multiple observation domains were exported by a single device (Netflow V9 and IPFIX). {pull}13821[13821]
- cisco asa and ftd filesets: Fix parsing of message 106001. {issue}13891[13891] {pull}13903[13903]
- Fix merging of fields specified in global scope with fields specified under an input's scope. {issue}3628[3628] {pull}13909[13909]
- Fix delay in enforcing close_renamed and close_removed options. {issue}13488[13488] {pull}13907[13907]
- Fix missing netflow fields in index template. {issue}13768[13768] {pull}13914[13914]
- Fix cisco module's asa and ftd filesets parsing of domain names where an IP address is expected. {issue}14034[14034]
- Fixed increased memory usage with large files when multiline pattern does not match. {issue}14068[14068]
*Metricbeat*
- Mark Kibana usage stats as collected only if API call succeeds. {pull}13881[13881]
[[release-notes-7.4.0]]
=== Beats version 7.4.0
https://github.com/elastic/beats/compare/v7.3.1...v7.4.0[View commits]
==== Breaking changes
*Affecting all Beats*
- Update to Golang 1.12.7. {pull}12931[12931]
- Remove `in_cluster` configuration parameter for Kubernetes; in-cluster configuration is now used only if no other kubeconfig is specified. {pull}13051[13051]
*Auditbeat*
- Socket dataset: New implementation using Kprobes for finer-grained monitoring and UDP support. {pull}13058[13058]
*Filebeat*
- Fix a race condition in the TCP input when closing the client socket. {pull}13038[13038]
- cisco/asa fileset: Renamed log.original to event.original and cisco.asa.list_id to cisco.asa.rule_name. {pull}13286[13286]
- cisco/asa fileset: Fix parsing of 302021 message code. {pull}13476[13476]
*Metricbeat*
- Add new Dashboard for PostgreSQL database stats {pull}13187[13187]
- Add new dashboard for CouchDB database {pull}13198[13198]
- Add new dashboard for Ceph cluster stats {pull}13216[13216]
- Add new dashboard for Aerospike database stats {pull}13217[13217]
- Add new dashboard for Couchbase cluster stats {pull}13212[13212]
- Add new dashboard for Prometheus server stats {pull}13126[13126]
- Add statistic option into cloudwatch metricset. If no statistic method is specified, the default is to collect Average, Sum, Maximum, Minimum and SampleCount (see the sketch after this list). {issue}12370[12370] {pull}12840[12840]
- Fix rds metricset dashboard. {pull}13721[13721]
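A minimal illustrative configuration for the `statistic` option; the namespace and metric name below are placeholders, not part of this commit:

[source,yaml]
----
- module: aws
  period: 300s
  metricsets:
    - cloudwatch
  metrics:
    # Placeholder namespace/metric; any CloudWatch namespace works here.
    - namespace: AWS/EC2
      name: ["CPUUtilization"]
      # Omit `statistic` to collect Average, Sum, Maximum, Minimum
      # and SampleCount by default.
      statistic: ["Average", "Maximum"]
----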
*Functionbeat*
- Separate management and functions in Functionbeat. {pull}12939[12939]
==== Bugfixes
*Affecting all Beats*
- ILM: Use GET instead of HEAD when checking for alias to expose detailed error message. {pull}12886[12886]
- Fix unexpected stops on docker autodiscover when a container is restarted before `cleanup_timeout`. {issue}12962[12962] {pull}13127[13127]
- Fix some incorrect types and formats in field.yml files. {pull}13188[13188]
- Load DLLs only from Windows system directory. {pull}13234[13234] {pull}13384[13384]
- Fix mapping for kubernetes.labels and kubernetes.annotations in add_kubernetes_metadata. {issue}12638[12638] {pull}13226[13226]
- Fix case insensitive regular expressions not working correctly. {pull}13250[13250]
*Auditbeat*
- Host dataset: Export Host fields to gob encoder. {pull}12940[12940]
*Filebeat*
- Fix filebeat autodiscover fileset hint for container input. {pull}13296[13296]
- Fix incorrect references to index patterns in AWS and CoreDNS dashboards. {pull}13303[13303]
- Fix timezone parsing of system module ingest pipelines. {pull}13308[13308]
- Fix timezone parsing of elasticsearch module ingest pipelines. {pull}13367[13367]
- Change iis url path grok pattern from URIPATH to NOTSPACE. {issue}12710[12710] {pull}13225[13225] {issue}7951[7951] {pull}13378[13378]
- Add timezone information to apache error fileset. {issue}12772[12772] {pull}13304[13304]
- Fix timezone parsing of nginx module ingest pipelines. {pull}13369[13369]
- Allow path variables to be used in files loaded from modules.d. {issue}13184[13184]
- Fix incorrect field references in envoyproxy dashboard {issue}13420[13420] {pull}13421[13421]
*Heartbeat*
- Fix integer comparison on JSON responses. {pull}13348[13348]
*Metricbeat*
- Ramdisk is not filtered out when collecting disk performance counters in diskio metricset {issue}12814[12814] {pull}12829[12829]
- Fix redis key metricset dashboard references to index pattern. {pull}13303[13303]
- Check if fields in DBInstance is nil in rds metricset. {pull}13294[13294] {issue}13037[13037]
- Fix silent failures in kafka and prometheus module. {pull}13353[13353] {issue}13252[13252]
- Fix module-level fields in Kubernetes metricsets. {pull}13433[13433] {pull}13544[13544]
- Fix panic in Redis Key metricset when collecting information from a removed key. {pull}13426[13426]
- In the elasticsearch/node_stats metricset, if xpack is enabled, make parsing of ES node load average optional as ES on Windows doesn't report load average. {pull}12866[12866]
- Print errors that were being omitted in vSphere metricsets. {pull}12816[12816]
- Fix issue with aws cloudwatch module where dimensions and/or namespaces that contain spaces are not being parsed correctly. {pull}13389[13389]
- Fix reporting empty events in cloudwatch metricset. {pull}13458[13458]
- Fix data race affecting config validation at startup. {issue}13005[13005]
*Packetbeat*
- Fix parsing the extended RCODE in the DNS parser. {pull}12805[12805]
*Functionbeat*
- Fix Cloudwatch logs timestamp to use timestamp of the log record instead of when the record was processed {pull}13291[13291]
- Look for the keystore under the correct path. {pull}13332[13332]
==== Added
*Affecting all Beats*
- Add support for reading the `network.iana_number` field by default to the community_id processor. {pull}12701[12701]
- Add a check so alias creation explicitly fails if there is an index with the same name. {pull}13070[13070]
- Update kubernetes watcher to use official client-go libraries. {pull}13051[13051]
- Add support for unix epoch time values in the `timestamp` processor (see the sketch after this list). {pull}13319[13319]
- add_host_metadata is now GA. {pull}13148[13148]
- Add an `ignore_missing` configuration option to the `drop_fields` processor. {pull}13318[13318]
- Add `registered_domain` processor for deriving the registered domain from a given FQDN. {pull}13326[13326]
- Add support for RFC3339 time zone offsets in JSON output. {pull}13227[13227]
- Added `monitoring.cluster_uuid` setting to associate Beat data with specified ES cluster in Stack Monitoring UI. {pull}13182[13182]
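A minimal sketch of the `timestamp` processor consuming a unix epoch value; the `start_time` field and the test value are placeholders:

[source,yaml]
----
processors:
  - timestamp:
      field: start_time    # placeholder source field
      layouts:
        - UNIX             # interpret the value as a unix epoch
      test:
        - '1568024476'     # optional startup self-check value
----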
*Filebeat*
- Add netflow dashboards based on Logstash netflow. {pull}12857[12857]
- Parse more fields from Elasticsearch slowlogs. {pull}11939[11939]
- Update module pipelines to enrich events with autonomous system fields. {pull}13036[13036]
- Add module for ingesting IBM MQ logs. {pull}8782[8782]
- Add S3 input to retrieve logs from AWS S3 buckets. {pull}12640[12640] {issue}12582[12582]
- Add aws module s3access metricset. {pull}13170[13170] {issue}12880[12880]
- Update Suricata module to populate ECS DNS fields and handle EVE DNS version 2. {issue}13320[13320] {pull}13329[13329]
- Update PAN-OS fileset to use the ECS NAT fields. {issue}13320[13320] {pull}13330[13330]
- Add fields to the Zeek DNS fileset for ECS DNS. {issue}13320[13320] {pull}13324[13324]
- Add container image in Kubernetes metadata {pull}13356[13356] {issue}12688[12688]
- Add module for ingesting Cisco FTD logs over syslog. {pull}13286[13286]
*Heartbeat*
- Record HTTP body metadata and optionally contents in `http.response.body.*` fields. {pull}13022[13022]
*Metricbeat*
- Add Kubernetes proxy dashboard to Kubernetes module {pull}12734[12734]
- Add Kubernetes controller manager dashboard to Kubernetes module {pull}12744[12744]
- Add metrics to kubernetes apiserver metricset. {pull}12922[12922]
- Add Kubernetes scheduler dashboard to Kubernetes module {pull}12749[12749]
- Collect client provided name for rabbitmq connection. {issue}12851[12851] {pull}12852[12852]
- Add support to load default aws config file to get credentials. {pull}12727[12727] {issue}12708[12708]
- Add statistic option into cloudwatch metricset. {issue}12370[12370] {pull}12840[12840]
- Add support for kubernetes cronjobs {pull}13001[13001]
- Add cgroup memory stats to docker/memory metricset {pull}12916[12916]
- Add AWS elb metricset. {pull}12952[12952] {issue}11701[11701]
- Add AWS ebs metricset. {pull}13167[13167] {issue}11699[11699]
- Add `metricset.period` field with the configured fetching period. {pull}13242[13242] {issue}12616[12616]
- Add rate metrics for ec2 metricset. {pull}13203[13203]
- Add Performance metricset to Oracle module {pull}12547[12547]
- Use DefaultMetaGeneratorConfig in MetadataEnrichers to initialize configurations {pull}13414[13414]
- Add module for statsd. {pull}13109[13109]
*Packetbeat*
- Update DNS protocol plugin to produce events with ECS fields for DNS. {issue}13320[13320] {pull}13354[13354]
*Functionbeat*
- Add timeout option to reference configuration. {pull}13351[13351]
- Configurable tags for Lambda functions. {pull}13352[13352]
- Add input for Cloudwatch logs through Kinesis. {pull}13317[13317]
- Enable Logstash output. {pull}13345[13345]
*Winlogbeat*
- Add support for event ID 4634 and 4647 to the Security module. {pull}12906[12906]
- Add `network.community_id` to Sysmon network events (event ID 3). {pull}13034[13034]
- Add `event.module` to Winlogbeat modules. {pull}13047[13047]
- Add `event.category: process` and `event.type: process_start/process_end` to Sysmon process events (event ID 1 and 5). {pull}13047[13047]
- Add support for event ID 4672 to the Security module. {pull}12975[12975]
- Add support for event ID 22 (DNS query) to the Sysmon module. {pull}12960[12960]
- Add certain winlog.event_data.* fields to the index template. {issue}13700[13700] {pull}13704[13704]
[[release-notes-7.3.2]]
=== Beats version 7.3.2
https://github.com/elastic/beats/compare/v7.3.1...v7.3.2[View commits]
==== Bugfixes
*Filebeat*
- Fix filebeat autodiscover fileset hint for container input. {pull}13296[13296]
- Fix timezone parsing of system module ingest pipelines. {pull}13308[13308]
- Fix timezone parsing of elasticsearch module ingest pipelines. {pull}13367[13367]
- Fix timezone parsing of nginx module ingest pipelines. {pull}13369[13369]
*Metricbeat*
- Fix module-level fields in Kubernetes metricsets. {pull}13433[13433] {pull}13544[13544]
- Fix panic in Redis Key metricset when collecting information from a removed key. {pull}13426[13426]
[[release-notes-7.3.1]]
=== Beats version 7.3.1
https://github.com/elastic/beats/compare/v7.3.0...v7.3.1[View commits]
==== Bugfixes
*Affecting all Beats*
- Fix install-service.ps1's ability to set Windows service's delay start configuration. {pull}13173[13173]
- Fix `decode_base64_field` processor. {pull}13092[13092], {pull}13144[13144]
*Filebeat*
- Fix multiline pattern in Postgres which was too permissive. {issue}12078[12078] {pull}13069[13069]
*Metricbeat*
- Fix `logstash/node_stats` metricset to also collect `logstash_stats.events.duration_in_millis` field when `xpack.enabled: true` is set. {pull}13082[13082]
- Fix `logstash/node` metricset to also collect `logstash_state.pipeline.representation.{type,version,hash}` fields when `xpack.enabled: true` is set. {pull}13133[13133]
==== Added
*Metricbeat*
- Make the `beat` module defensive about determining ES cluster UUID when `xpack.enabled: true` is set. {pull}13020[13020]
[[release-notes-7.3.0]]
=== Beats version 7.3.0
https://github.com/elastic/beats/compare/v7.2.0...v7.3.0[View commits]
==== Breaking changes
*Affecting all Beats*
- Update to ECS 1.0.1. {pull}12284[12284] {pull}12317[12317]
- The default of output.kafka.metadata.full is now set to false. This reduces the amount of metadata queried from a kafka cluster. {pull}12738[12738]
*Filebeat*
- `convert_timezone` option is removed and locale is always added to the event so the timezone is used when parsing the timestamp; this behaviour can be overridden with processors. {pull}12410[12410]
==== Bugfixes
*Affecting all Beats*
- Fix typo in TLS renegotiation configuration and setting the option correctly {issue}10871[10871], {pull}12354[12354]
- Add configurable bulk_flush_frequency in kafka output. {pull}12254[12254]
- Fixed setting bulk max size in kafka output. {pull}12254[12254]
- Add additional nil pointer checks to Docker client code to deal with vSphere Integrated Containers {pull}12628[12628]
- Fix seccomp policy preventing some features from functioning properly on 32-bit Linux systems. {issue}12990[12990] {pull}13008[13008]
*Auditbeat*
- Package dataset: Close librpm handle. {pull}12215[12215]
- Package dataset: Improve dpkg parsing. {pull}12325[12325]
- Host dataset: Fix reboot detection logic. {pull}12591[12591]
- Add syscalls used by librpm for the system/package dataset to the default Auditbeat seccomp policy. {issue}12578[12578] {pull}12617[12617]
- Host dataset: Export Host fields to gob encoder. {pull}12940[12940]
*Filebeat*
- Parse timezone in PostgreSQL logs as part of the timestamp {pull}12338[12338]
- When TLS is configured for the TCP input and a `certificate_authorities` is configured, we now default to `required` for the `client_authentication`. {pull}12584[12584]
- Syslog input will now omit the `process` object from events if it is empty. {pull}12700[12700]
- Apply `max_message_size` to incoming message buffer. {pull}11966[11966]
*Heartbeat*
*Journalbeat*
- Iterate over journal correctly, so no duplicate entries are sent. {pull}12716[12716]
- Preserve host name when reading from remote journal. {pull}12714[12714]
*Metricbeat*
- Refactored Windows perfmon metricset: replaced method to retrieve counter paths with PdhExpandWildCardPathW, separated code by responsibility, removed unused functions {pull}12212[12212]
- Validate that kibana/status metricset cannot be used when xpack is enabled. {pull}12264[12264]
- In the kibana/stats metricset, only log error (don't also index it) if xpack is enabled. {pull}12265[12265]
- Fix an issue listing all processes when run under Windows as a non-privileged user. {issue}12301[12301] {pull}12475[12475]
- When TLS is configured for the http metricset and a `certificate_authorities` is configured, we now default to `required` for the `client_authentication`. {pull}12584[12584]
- Reuse connections in PostgreSQL metricsets. {issue}12504[12504] {pull}12603[12603]
- PdhExpandWildCardPathW will not expand counter paths on 32-bit Windows systems; the workaround uses a different function. {issue}12590[12590] {pull}12622[12622]
- Print errors that were being omitted in vSphere metricsets {pull}12816[12816]
- In the elasticsearch/node_stats metricset, if xpack is enabled, make parsing of ES node load average optional as ES on Windows doesn't report load average. {pull}12866[12866]
- Fix incoherent behaviour in redis key metricset when keyspace is specified both in host URL and key pattern {pull}12913[12913]
- Fix connections leak in redis module {pull}12914[12914] {pull}12950[12950]
*Packetbeat*
==== Added
*Affecting all Beats*
- Add `proxy_disable` output flag to explicitly ignore proxy environment variables. {issue}11713[11713] {pull}12243[12243]
- Processor `add_cloud_metadata` adds fields `cloud.account.id` and `cloud.image.id` for AWS EC2. {pull}12307[12307]
- Add `decode_base64_field` processor for decoding base64 field. {pull}11914[11914]
- Add aws overview dashboard. {issue}11007[11007] {pull}12175[12175]
- Add `decompress_gzip_field` processor. {pull}12733[12733]
- Add `timestamp` processor for parsing time fields. {pull}12699[12699]
- Add Oracle Tablespaces Dashboard {pull}12736[12736]
*Auditbeat*
*Filebeat*
- Add timeouts on communication with docker daemon. {pull}12310[12310]
- Add specific date processor to convert timezones so same pipeline can be used when convert_timezone is enabled or disabled. {pull}12253[12253]
- Add MSSQL module {pull}12079[12079]
- Add ISO8601 date parsing support for system module. {pull}12568[12568] {pull}12579[12579]
- Update Kubernetes deployment manifest to use `container` input. {pull}12632[12632]
- Add `google-pubsub` input type for consuming messages from a Google Cloud Pub/Sub topic subscription. {pull}12746[12746]
- Add module for ingesting Cisco IOS logs over syslog. {pull}12748[12748]
- Add module for ingesting Google Cloud VPC flow logs. {pull}12747[12747]
- Report host metadata for Filebeat logs in Kubernetes. {pull}12790[12790]
*Metricbeat*
- Add overview dashboard to Consul module {pull}10665[10665]
- New fields were added in the mysql/status metricset. {pull}12227[12227]
- Add Kubernetes metricset `proxy`. {pull}12312[12312]
- Always report Pod UID in the `pod` metricset. {pull}12345[12345]
- Add Vsphere Virtual Machine operating system to `os` field in Vsphere virtualmachine module. {pull}12391[12391]
- Add CockroachDB module. {pull}12467[12467]
- Add support for metricbeat modules based on existing modules (a.k.a. light modules) {issue}12270[12270] {pull}12465[12465]
- Add a system/entropy metricset {pull}12450[12450]
- Add kubernetes metricset `controllermanager` {pull}12409[12409]
- Allow redis URL format in redis hosts config. {pull}12408[12408]
- Add tags into ec2 metricset. {issue}12263[12263] {pull}12372[12372]
- Add kubernetes metricset `scheduler` {pull}12521[12521]
- Add Kubernetes scheduler dashboard to Kubernetes module {pull}12749[12749]
- Add `beat` module. {pull}12181[12181] {pull}12615[12615]
- Collect tags for cloudwatch metricset in aws module. {issue}12263[12263] {pull}12480[12480]
- Add AWS RDS metricset. {pull}11620[11620] {issue}10054[10054]
- Add Oracle Module {pull}11890[11890]
- Add Kubernetes proxy dashboard to Kubernetes module {pull}12734[12734]
- Add Kubernetes controller manager dashboard to Kubernetes module {pull}12744[12744]
*Functionbeat*
- Export automation templates used to create functions. {pull}11923[11923]
- Configurable Amazon endpoint. {pull}12369[12369]
==== Deprecated
*Filebeat*
- `postgresql.log.timestamp` field is deprecated in favour of `@timestamp`. {pull}12338[12338]
[[release-notes-7.2.1]]
=== Beats version 7.2.1
https://github.com/elastic/beats/compare/v7.2.0...v7.2.1[View commits]
==== Bugfixes
*Affecting all Beats*
- Fix Central Management enroll under Windows {issue}12797[12797] {pull}12799[12799]
- Fixed a crash under Windows when fetching processes information. {pull}12833[12833]
*Filebeat*
- Add support for client addresses with port in Apache error logs {pull}12695[12695]
- Load correct pipelines when system module is configured in modules.d. {pull}12340[12340]
*Metricbeat*
- Fix wrong uptime reporting by system/uptime metricset under Windows. {pull}12915[12915]
*Packetbeat*
- Limit memory usage of Redis replication sessions. {issue}12657[12657]
[[release-notes-7.2.0]]
=== Beats version 7.2.0
https://github.com/elastic/beats/compare/v7.1.1...v7.2.0[View commits]
==== Breaking changes
*Affecting all Beats*
- Update to Golang 1.12.4. {pull}11782[11782]
*Auditbeat*
- Auditd module: Normalized value of `event.category` field from `user-login` to `authentication`. {pull}11432[11432]
- Auditd module: Unset `auditd.session` and `user.audit.id` fields are removed from audit events. {issue}11431[11431] {pull}11815[11815]
- Socket dataset: Exclude localhost by default {pull}11993[11993]
*Filebeat*
- Add read_buffer configuration option. {pull}11739[11739]
*Heartbeat*
- Removed the `add_host_metadata` and `add_cloud_metadata` processors from the default config. These don't fit well with ECS for Heartbeat and were rarely used.
*Journalbeat*
*Metricbeat*
- Add new option `OpMultiplyBuckets` to scale histogram buckets to avoid decimal points in final events {pull}10994[10994]
- system/raid metricset now uses /sys/block instead of /proc/mdstat for data. {pull}11613[11613]
*Packetbeat*
- Add support for mongodb opcode 2013 (OP_MSG). {issue}6191[6191] {pull}8594[8594]
- NFSv4: Always use opname `ILLEGAL` when failed to match request to a valid nfs operation. {pull}11503[11503]
*Winlogbeat*
*Functionbeat*
==== Bugfixes
*Affecting all Beats*
- Ensure all beat commands respect configured settings. {pull}10721[10721]
- Add missing fields and test cases for libbeat add_kubernetes_metadata processor. {issue}11133[11133], {pull}11134[11134]
- decode_json_field: process objects and arrays only {pull}11312[11312]
- decode_json_field: do not process arrays when flag not set. {pull}11318[11318]
- Report faulting file when config reload fails. {pull}11304[11304]
- Fix a typo in libbeat/outputs/transport/client.go by updating `c.conn.LocalAddr()` to `c.conn.RemoteAddr()`. {pull}11242[11242]
- Management configuration backup files will now have a timestamp in their name. {pull}11034[11034]
- [CM] Parse enrollment_token response correctly {pull}11648[11648]
- Not hiding error in case of http failure using elastic fetcher {pull}11604[11604]
- Escape BOM on JsonReader before trying to decode line {pull}11661[11661]
- Fix matching of string arrays in contains condition. {pull}11691[11691]
- Replace wmi queries with win32 api calls as they were consuming CPU resources {issue}3249[3249] and {issue}11840[11840]
- Fix queue.spool.write.flush.events config type. {pull}12080[12080]
- Fixed a memory leak when using the add_process_metadata processor under Windows. {pull}12100[12100]
- Fix docker json parser for missing "log" json key in docker container's log. {issue}11464[11464]
- Fixed Beat ID being reported by GET / API. {pull}12180[12180]
- Add host.os.codename to fields.yml. {pull}12261[12261]
- Fix `@timestamp` being duplicated in events if `@timestamp` is set in a
processor (or by any code utilizing `PutValue()` on a `beat.Event`).
- Fix leak in script processor when using Javascript functions in a processor chain. {pull}12600[12600]
*Auditbeat*
- Process dataset: Fixed a memory leak under Windows. {pull}12100[12100]
- Login dataset: Fix re-read of utmp files. {pull}12028[12028]
- Package dataset: Fixed a crash inside librpm after Auditbeat has been running for a while. {issue}12147[12147] {pull}12168[12168]
- Fix formatting of config files on macOS and Windows. {pull}12148[12148]
- Fix direction of incoming IPv6 sockets. {pull}12248[12248]
- Package dataset: Auto-detect package directories. {pull}12289[12289]
- System module: Start system module without host ID. {pull}12373[12373]
*Filebeat*
- Add support for Cisco syslog format used by their switch. {pull}10760[10760]
- Cover empty request data, url and version in Apache2 module. {pull}10730[10730]
- Fix registry entries not being cleaned due to race conditions. {pull}10747[10747]
- Improve detection of file deletion on Windows. {pull}10747[10747]
- Add missing Kubernetes metadata fields to Filebeat CoreDNS module, and fix a documentation error. {pull}11591[11591]
- Reduce memory usage if long lines are truncated to fit `max_bytes` limit. The line buffer is copied into a smaller buffer now. This allows the runtime to release unused memory earlier. {pull}11524[11524]
- Fix memory leak in Filebeat pipeline acker. {pull}12063[12063]
- Fix goroutine leak caused on initialization failures of log input. {pull}12125[12125]
- Fix goroutine leak on non-explicit finalization of log input. {pull}12164[12164]
- Require client_auth by default when ssl is enabled for tcp input {pull}12333[12333]
- Fix timezone offset parsing in system/syslog. {pull}12529[12529]
*Heartbeat*
- Fix NPEs / resource leaks when executing config checks. {pull}11165[11165]
- Fix duplicated IPs on `mode: all` monitors. {pull}12458[12458]
*Journalbeat*
- Use backoff when no new events are found. {pull}11861[11861]
*Metricbeat*
- Change diskio metrics retrieval method (only for Windows) from wmi query to DeviceIOControl function using the IOCTL_DISK_PERFORMANCE control code {pull}11635[11635]
- Call GetMetricData api per region instead of per instance. {issue}11820[11820] {pull}11882[11882]
- Update documentation with cloudwatch:ListMetrics permission. {pull}11987[11987]
- Check permissions in system socket metricset based on capabilities. {pull}12039[12039]
- Get process information from sockets owned by current user when system socket metricset is run without privileges. {pull}12039[12039]
- Avoid generating hints-based configuration with empty hosts when no exposed port is suitable for the hosts hint. {issue}8264[8264] {pull}12086[12086]
- Fixed a socket leak in the postgresql module under Windows when SSL is disabled on the server. {pull}11393[11393]
- Change some field types from scaled_float to long in aws module. {pull}11982[11982]
- Fixed RabbitMQ `queue` metricset gathering when `consumer_utilisation` is set empty at the metrics source {pull}12089[12089]
- Fix direction of incoming IPv6 sockets. {pull}12248[12248]
- Ignore prometheus metrics when their values are NaN or Inf. {pull}12084[12084] {issue}10849[10849]
- Require client_auth by default when ssl is enabled for module http metricset server. {pull}12333[12333]
- The `elasticsearch/index_summary` metricset gracefully handles an empty Elasticsearch cluster when `xpack.enabled: true` is set. {pull}12489[12489] {issue}12487[12487]
*Packetbeat*
- Prevent duplicate packet loss error messages in HTTP events. {pull}10709[10709]
- Fixed a memory leak when using process monitoring under Windows. {pull}12100[12100]
- Improved debug logging efficiency in PGSQL module. {issue}12150[12150]
*Winlogbeat*
*Functionbeat*
- Fix function name reference for Kinesis streams in CloudFormation templates {pull}11646[11646]
==== Added
*Affecting all Beats*
- Add an option to append to existing logs rather than always rotate on start. {pull}11953[11953]
- Add `network` condition to processors for matching IP addresses against CIDRs. {pull}10743[10743]
- Add if/then/else support to processors. {pull}10744[10744]
- Add `community_id` processor for computing network flow hashes. {pull}10745[10745]
- Add output test to kafka output {pull}10834[10834]
- Gracefully shut down on SIGHUP {pull}10704[10704]
- New processor: `copy_fields`. {pull}11303[11303]
- Add `error.message` to events when `fail_on_error` is set in `rename` and `copy_fields` processors. {pull}11303[11303]
- New processor: `truncate_fields`. {pull}11297[11297]
- Allow a beat to ship monitoring data directly to an Elasticsearch monitoring cluster. {pull}9260[9260]
- Updated go-seccomp-bpf library to v1.1.0 which updates syscall lists for Linux v5.0. {pull}NNNN[NNNN]
- Add `add_observer_metadata` processor. {pull}11394[11394]
- Add `decode_csv_fields` processor. {pull}11753[11753]
- Add `convert` processor for converting data types of fields (see the sketch after this list). {issue}8124[8124] {pull}11686[11686]
- New `extract_array` processor. {pull}11761[11761]
- Add number of goroutines to reported metrics. {pull}12135[12135]
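A minimal sketch of the `convert` processor; the field names here are placeholders:

[source,yaml]
----
processors:
  - convert:
      fields:
        # `type` may be integer, long, float, double, string,
        # boolean, or ip.
        - {from: "src_port", to: "source.port", type: "integer"}
      ignore_missing: true
      fail_on_error: false
----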
*Auditbeat*
- Auditd module: Add `event.outcome` and `event.type` for ECS. {pull}11432[11432]
- Process: Add file hash of process executable. {pull}11722[11722]
- Socket: Add network.transport and network.community_id. {pull}12231[12231]
- Host: Fill top-level host fields. {pull}12259[12259]
*Filebeat*
- Add more info to message logged when a duplicated symlink file is found {pull}10845[10845]
- Add option to configure docker input with paths {pull}10687[10687]
- Add Netflow module to enrich flow events with geoip data. {pull}10877[10877]
- Set `event.category: network_traffic` for Suricata. {pull}10882[10882]
- Allow custom default settings with autodiscover (for example, use of CRI paths for logs). {pull}12193[12193]
- Allow disabling the hints-based autodiscover default behavior (fetching all logs). {pull}12193[12193]
- Change Suricata module pipeline to handle `destination.domain` being set if a reverse DNS processor is used. {issue}10510[10510]
- Add the `network.community_id` flow identifier field to the IPTables, Suricata, and Zeek modules. {pull}11005[11005]
- New Filebeat coredns module to ingest coredns logs. It supports both native coredns deployment and coredns deployment in kubernetes. {pull}11200[11200]
- New module for Cisco ASA logs. {issue}9200[9200] {pull}11171[11171]
- Added support for Cisco ASA fields to the netflow input. {pull}11201[11201]
- Configurable line terminator. {pull}11015[11015]
- Add Filebeat envoyproxy module. {pull}11700[11700]
- Add apache2(httpd) log path (`/var/log/httpd`) to make apache2 module work out of the box on Redhat-family OSes. {issue}11887[11887] {pull}11888[11888]
- Add support for new MongoDB additional diagnostic information. {pull}11952[11952]
- New module `panw` for Palo Alto Networks PAN-OS logs. {pull}11999[11999]
- Add RabbitMQ module. {pull}12032[12032]
- Add new `container` input. {pull}12162[12162]
*Heartbeat*
- Enable `add_observer_metadata` processor in default config. {pull}11394[11394]
*Journalbeat*
*Metricbeat*
- Add AWS SQS metricset. {pull}10684[10684] {issue}10053[10053]
- Add AWS s3_request metricset. {pull}10949[10949] {issue}10055[10055]
- Add s3_daily_storage metricset. {pull}10940[10940] {issue}10055[10055]
- Add `coredns` metricbeat module. {pull}10585[10585]
- Add SSL support for Metricbeat HTTP server. {pull}11482[11482] {issue}11457[11457]
- The `elasticsearch.index` metricset (with `xpack.enabled: true`) now collects `refresh.external_total_time_in_millis` fields from Elasticsearch. {pull}11616[11616]
- Allow module configurations to have variants {pull}9118[9118]
- Add `timeseries.instance` field calculation. {pull}10293[10293]
- Added new disk states and raid level to the system/raid metricset. {pull}11613[11613]
- Added `path_name` and `start_name` to service metricset on windows module {issue}8364[8364] {pull}11877[11877]
- Add check on object name in the counter path if the instance name is missing {issue}6528[6528] {pull}11878[11878]
- Add AWS cloudwatch metricset. {pull}11798[11798] {issue}11734[11734]
- Add `regions` in aws module config to specify target regions for querying cloudwatch metrics. {issue}11932[11932] {pull}11956[11956]
- Keep `etcd` follower members from reporting `leader` metricset events {pull}12004[12004]
- Add validation for elasticsearch and kibana modules' metricsets when xpack.enabled is set to true. {pull}12386[12386]
*Packetbeat*
*Functionbeat*
- New options to configure roles and VPC. {pull}11779[11779]
*Winlogbeat*
- Add support for reading from .evtx files. {issue}4450[4450]
==== Deprecated
*Affecting all Beats*
*Filebeat*
- `docker` input is deprecated in favour of `container`. {pull}12162[12162]
*Heartbeat*
*Journalbeat*
*Metricbeat*
*Packetbeat*
*Winlogbeat*
*Functionbeat*
==== Known Issue
*Journalbeat*
[[release-notes-7.1.1]]
=== Beats version 7.1.1
https://github.com/elastic/beats/compare/v7.1.0...v7.1.1[View commits]
@@ -104,6 +729,7 @@ https://github.com/elastic/beats/compare/v7.0.0-rc1...v7.0.0-rc2[Check the HEAD
*Affecting all Beats*
- Fixed OS family classification in `add_host_metadata` for Amazon Linux, Raspbian, and RedHat Linux. {issue}9134[9134] {pull}11494[11494]
- Allow 'ilm.rollover_alias' to expand global fields like `agent.version`. {issue}12233[12233]
*Auditbeat*
@@ -813,12 +1439,105 @@ https://github.com/elastic/beats/compare/v6.5.0...v7.0.0-alpha1[View commits]
- Added support to calculate certificates' fingerprints (MD5, SHA-1, SHA-256). {issue}8180[8180]
- Support new TLS version negotiation introduced in TLS 1.3. {issue}8647[8647].
[[release-notes-6.8.3]]
=== Beats version 6.8.3
https://github.com/elastic/beats/compare/v6.8.2...v6.8.3[View commits]
==== Bugfixes
*Journalbeat*
- Iterate over journal correctly, so no duplicate entries are sent. {pull}12716[12716]
*Metricbeat*
- Fix panic in Redis Key metricset when collecting information from a removed key. {pull}13426[13426]
==== Added
*Metricbeat*
- Remove _nodes field from under cluster_stats as it's not being used. {pull}13010[13010]
- Collect license expiry date fields as well. {pull}11652[11652]
[[release-notes-6.8.2]]
=== Beats version 6.8.2
https://github.com/elastic/beats/compare/v6.8.1...v6.8.2[View commits]
==== Bugfixes
*Auditbeat*
- Process dataset: Do not show non-root warning on Windows. {pull}12740[12740]
*Filebeat*
- Skipping unparsable log entries from docker json reader {pull}12268[12268]
*Packetbeat*
- Limit memory usage of Redis replication sessions. {issue}12657[12657]
[[release-notes-6.8.1]]
=== Beats version 6.8.1
https://github.com/elastic/beats/compare/v6.8.0...v6.8.1[View commits]
==== Bugfixes
*Affecting all Beats*
- Fixed a memory leak when using the add_process_metadata processor under Windows. {pull}12100[12100]
*Auditbeat*
- Package dataset: Log error when Homebrew is not installed. {pull}11667[11667]
- Process dataset: Fixed a memory leak under Windows. {pull}12100[12100]
- Login dataset: Fix re-read of utmp files. {pull}12028[12028]
- Package dataset: Fixed a crash inside librpm after Auditbeat has been running for a while. {issue}12147[12147] {pull}12168[12168]
- Fix direction of incoming IPv6 sockets. {pull}12248[12248]
- Package dataset: Auto-detect package directories. {pull}12289[12289]
- System module: Start system module without host ID. {pull}12373[12373]
- Host dataset: Fix reboot detection logic. {pull}12591[12591]
*Filebeat*
- Fix goroutine leak happening when harvesters are dynamically stopped. {pull}11263[11263]
- Fix initialization of the TCP input logger. {pull}11605[11605]
- Fix goroutine leak caused on initialization failures of log input. {pull}12125[12125]
- Fix memory leak in Filebeat pipeline acker. {pull}12063[12063]
- Fix goroutine leak on non-explicit finalization of log input. {pull}12164[12164]
- When TLS is configured for the TCP input and a `certificate_authorities` is configured, we now default to `required` for the `client_authentication`. {pull}12584[12584]
*Metricbeat*
- Avoid generating hints-based configuration with empty hosts when no exposed port is suitable for the hosts hint. {issue}8264[8264] {pull}12086[12086]
- Fix direction of incoming IPv6 sockets. {pull}12248[12248]
- Validate that kibana/status metricset cannot be used when xpack is enabled. {pull}12264[12264]
- In the kibana/stats metricset, only log error (don't also index it) if xpack is enabled. {pull}12353[12353]
- The `elasticsearch/index_summary` metricset gracefully handles an empty Elasticsearch cluster when `xpack.enabled: true` is set. {pull}12489[12489] {issue}12487[12487]
- When TLS is configured for the http metricset and a `certificate_authorities` is configured, we now default to `required` for the `client_authentication`. {pull}12584[12584]
*Packetbeat*
- Fixed a memory leak when using process monitoring under Windows. {pull}12100[12100]
- Improved debug logging efficiency in PGSQL module. {issue}12150[12150]
==== Added
*Auditbeat*
- Add support to the system package dataset for the SUSE OS family. {pull}11634[11634]
*Metricbeat*
- Add validation for elasticsearch and kibana modules' metricsets when xpack.enabled is set to true. {pull}12386[12386]
[[release-notes-6.8.0]]
=== Beats version 6.8.0
* Updates to support changes to licensing of security features.
+
Some Elastic Stack security features, such as encrypted communications, file and native authentication, and
role-based access control, are now available in more subscription levels. For details, see https://www.elastic.co/subscriptions.
[[release-notes-6.7.2]]

View File

@@ -11,37 +11,25 @@ https://github.com/elastic/beats/compare/v7.0.0-alpha2...master[Check the HEAD d
*Affecting all Beats*
- Update to Golang 1.12.1. {pull}11330[11330]
- Update to Golang 1.12.4. {pull}11782[11782]
- Update to ECS 1.0.1. {pull}12284[12284] {pull}12317[12317]
- The default of output.kafka.metadata.full is now set to false. This reduces the amount of metadata queried from a kafka cluster. {pull}12738[12738]
- Disable Alibaba Cloud and Tencent Cloud metadata providers by default. {pull}13812[13812]
*Auditbeat*
- Auditd module: Normalized value of `event.category` field from `user-login` to `authentication`. {pull}11432[11432]
- Auditd module: Unset `auditd.session` and `user.audit.id` fields are removed from audit events. {issue}11431[11431] {pull}11815[11815]
- Socket dataset: Exclude localhost by default {pull}11993[11993]
*Filebeat*
- Add read_buffer configuration option. {pull}11739[11739]
- `convert_timezone` option is removed and locale is always added to the event so the timezone is used when parsing the timestamp; this behaviour can be overridden with processors. {pull}12410[12410]
*Heartbeat*
- Removed the `add_host_metadata` and `add_cloud_metadata` processors from the default config. These don't fit well with ECS for Heartbeat and were rarely used.
*Journalbeat*
*Metricbeat*
- Add new option `OpMultiplyBuckets` to scale histogram buckets to avoid decimal points in final events {pull}10994[10994]
- system/raid metricset now uses /sys/block instead of /proc/mdstat for data. {pull}11613[11613]
- kubernetes.container.cpu.limit.cores and kubernetes.container.cpu.requests.cores are now floats. {issue}11975[11975]
*Packetbeat*
- Add support for mongodb opcode 2013 (OP_MSG). {issue}6191[6191] {pull}8594[8594]
- NFSv4: Always use opname `ILLEGAL` when failed to match request to a valid nfs operation. {pull}11503[11503]
*Winlogbeat*
@@ -51,246 +39,65 @@ https://github.com/elastic/beats/compare/v7.0.0-alpha2...master[Check the HEAD d
*Affecting all Beats*
- Fix typo in TLS renegotiation configuration and setting the option correctly {issue}10871[10871], {pull}12354[12354]
- Ensure all beat commands respect configured settings. {pull}10721[10721]
- Add missing fields and test cases for libbeat add_kubernetes_metadata processor. {issue}11133[11133], {pull}11134[11134]
- decode_json_field: process objects and arrays only {pull}11312[11312]
- decode_json_field: do not process arrays when flag not set. {pull}11318[11318]
- Report faulting file when config reload fails. {pull}11304[11304]
- Fix a typo in libbeat/outputs/transport/client.go by updating `c.conn.LocalAddr()` to `c.conn.RemoteAddr()`. {pull}11242[11242]
- Management configuration backup files will now have a timestamp in their name. {pull}11034[11034]
- [CM] Parse enrollment_token response correctly {pull}11648[11648]
- Not hiding error in case of http failure using elastic fetcher {pull}11604[11604]
- Escape BOM on JsonReader before trying to decode line {pull}11661[11661]
- Fix matching of string arrays in contains condition. {pull}11691[11691]
- Replace wmi queries with win32 api calls as they were consuming CPU resources {issue}3249[3249] and {issue}11840[11840]
- Fix a race condition with the Kafka pipeline client where `Close()` could get called before `Connect()`. {issue}11945[11945]
- Fix queue.spool.write.flush.events config type. {pull}12080[12080]
- Fixed a memory leak when using the add_process_metadata processor under Windows. {pull}12100[12100]
- Fix docker json parser for missing "log" json key in docker container's log. {issue}11464[11464]
- Fixed Beat ID being reported by GET / API. {pull}12180[12180]
- Fixed setting bulk max size in kafka output. {pull}12254[12254]
- Add host.os.codename to fields.yml. {pull}12261[12261]
- Fix `@timestamp` being duplicated in events if `@timestamp` is set in a
processor (or by any code utilizing `PutValue()` on a `beat.Event`).
- Fix leak in script processor when using Javascript functions in a processor chain. {pull}12600[12600]
- Add additional nil pointer checks to Docker client code to deal with vSphere Integrated Containers {pull}12628[12628]
- Fix Central Management enroll under Windows {issue}12797[12797] {pull}12799[12799]
- Fixed a crash under Windows when fetching processes information. {pull}12833[12833]
- Fix seccomp policy preventing some features from functioning properly on 32-bit Linux systems. {issue}12990[12990] {pull}13008[13008]
*Auditbeat*
- Process dataset: Fixed a memory leak under Windows. {pull}12100[12100]
- Login dataset: Fix re-read of utmp files. {pull}12028[12028]
- Package dataset: Fixed a crash inside librpm after Auditbeat has been running for a while. {issue}12147[12147] {pull}12168[12168]
- Fix formatting of config files on macOS and Windows. {pull}12148[12148]
- Fix direction of incoming IPv6 sockets. {pull}12248[12248]
- Package dataset: Close librpm handle. {pull}12215[12215]
- Package dataset: Auto-detect package directories. {pull}12289[12289]
- Package dataset: Improve dpkg parsing. {pull}12325[12325]
- System module: Start system module without host ID. {pull}12373[12373]
- Host dataset: Fix reboot detection logic. {pull}12591[12591]
- Add syscalls used by librpm for the system/package dataset to the default Auditbeat seccomp policy. {issue}12578[12578] {pull}12617[12617]
- Process dataset: Do not show non-root warning on Windows. {pull}12740[12740]
- Host dataset: Export Host fields to gob encoder. {pull}12940[12940]
*Filebeat*
- Add support for Cisco syslog format used by their switch. {pull}10760[10760]
- Cover empty request data, url and version in Apache2 module. {pull}10730[10730]
- Fix registry entries not being cleaned due to race conditions. {pull}10747[10747]
- Improve detection of file deletion on Windows. {pull}10747[10747]
- Add missing Kubernetes metadata fields to Filebeat CoreDNS module, and fix a documentation error. {pull}11591[11591]
- Reduce memory usage if long lines are truncated to fit `max_bytes` limit. The line buffer is copied into a smaller buffer now. This allows the runtime to release unused memory earlier. {pull}11524[11524]
- Fix memory leak in Filebeat pipeline acker. {pull}12063[12063]
- Fix goroutine leak caused on initialization failures of log input. {pull}12125[12125]
- Fix goroutine leak on non-explicit finalization of log input. {pull}12164[12164]
- Skipping unparsable log entries from docker json reader {pull}12268[12268]
- Parse timezone in PostgreSQL logs as part of the timestamp {pull}12338[12338]
- Load correct pipelines when system module is configured in modules.d. {pull}12340[12340]
- Fix timezone offset parsing in system/syslog. {pull}12529[12529]
- When TLS is configured for the TCP input and a `certificate_authorities` is configured, we now default to `required` for the `client_authentication`. {pull}12584[12584]
- Apply `max_message_size` to incoming message buffer. {pull}11966[11966]
- Syslog input will now omit the `process` object from events if it is empty. {pull}12700[12700]
- panw module: Use geo.name instead of geo.country_iso_code for free-form location. {issue}13272[13272]
*Heartbeat*
- Fix NPEs / resource leaks when executing config checks. {pull}11165[11165]
- Fix duplicated IPs on `mode: all` monitors. {pull}12458[12458]
*Journalbeat*
- Use backoff when no new events are found. {pull}11861[11861]
- Iterate over journal correctly, so no duplicate entries are sent. {pull}12716[12716]
- Preserve host name when reading from remote journal. {pull}12714[12714]
*Metricbeat*
- Change diskio metrics retrieval method (only for Windows) from wmi query to DeviceIOControl function using the IOCTL_DISK_PERFORMANCE control code {pull}11635[11635]
- Call GetMetricData api per region instead of per instance. {issue}11820[11820] {pull}11882[11882]
- Update documentation with cloudwatch:ListMetrics permission. {pull}11987[11987]
- Check permissions in system socket metricset based on capabilities. {pull}12039[12039]
- Get process information from sockets owned by current user when system socket metricset is run without privileges. {pull}12039[12039]
- Avoid generating hints-based configuration with empty hosts when no exposed port is suitable for the hosts hint. {issue}8264[8264] {pull}12086[12086]
- Fixed a socket leak in the postgresql module under Windows when SSL is disabled on the server. {pull}11393[11393]
- Change some field types from scaled_float to long in aws module. {pull}11982[11982]
- Fixed RabbitMQ `queue` metricset gathering when `consumer_utilisation` is set empty at the metrics source {pull}12089[12089]
- Fix direction of incoming IPv6 sockets. {pull}12248[12248]
- Refactored Windows perfmon metricset: replaced method to retrieve counter paths with PdhExpandWildCardPathW, separated code by responsibility, removed unused functions {pull}12212[12212]
- Validate that kibana/status metricset cannot be used when xpack is enabled. {pull}12264[12264]
- Ignore prometheus metrics when their values are NaN or Inf. {pull}12084[12084] {issue}10849[10849]
- In the kibana/stats metricset, only log error (don't also index it) if xpack is enabled. {pull}12265[12265]
- Fix an issue listing all processes when run under Windows as a non-privileged user. {issue}12301[12301] {pull}12475[12475]
- The `elasticsearch/index_summary` metricset gracefully handles an empty Elasticsearch cluster when `xpack.enabled: true` is set. {pull}12489[12489] {issue}12487[12487]
- When TLS is configured for the http metricset and a `certificate_authorities` is configured, we now default to `required` for the `client_authentication`. {pull}12584[12584]
- Reuse connections in PostgreSQL metricsets. {issue}12504[12504] {pull}12603[12603]
- PdhExpandWildCardPathW will not expand counter paths on 32-bit Windows systems; the workaround uses a different function. {issue}12590[12590] {pull}12622[12622]
- In the elasticsearch/node_stats metricset, if xpack is enabled, make parsing of ES node load average optional as ES on Windows doesn't report load average. {pull}12866[12866]
- Fix incoherent behaviour in redis key metricset when keyspace is specified both in host URL and key pattern {pull}12913[12913]
- Fix connections leak in redis module {pull}12914[12914] {pull}12950[12950]
- Fix wrong uptime reporting by system/uptime metricset under Windows. {pull}12915[12915]
- Print errors that were being omitted in vSphere metricsets {pull}12816[12816]
- Ignore prometheus untyped metrics with NaN value. {issue}13750[13750] {pull}13790[13790]
*Packetbeat*
- Prevent duplicate packet loss error messages in HTTP events. {pull}10709[10709]
- Fixed a memory leak when using process monitoring under Windows. {pull}12100[12100]
- Improved debug logging efficiency in PGSQL module. {issue}12150[12150]
- Limit memory usage of Redis replication sessions. {issue}12657[12657]
*Winlogbeat*
*Functionbeat*
- Fix function name reference for Kinesis streams in CloudFormation templates {pull}11646[11646]
==== Added
*Affecting all Beats*
- Decouple Debug logging from fail_on_error logic for rename, copy, truncate processors {pull}12451[12451]
- Add an option to append to existing logs rather than always rotate on start. {pull}11953[11953]
- Add `network` condition to processors for matching IP addresses against CIDRs. {pull}10743[10743]
- Add if/then/else support to processors. {pull}10744[10744]
- Add `community_id` processor for computing network flow hashes. {pull}10745[10745]
- Add output test to kafka output {pull}10834[10834]
- Gracefully shut down on SIGHUP {pull}10704[10704]
- New processor: `copy_fields`. {pull}11303[11303]
- Add `error.message` to events when `fail_on_error` is set in `rename` and `copy_fields` processors. {pull}11303[11303]
- New processor: `truncate_fields`. {pull}11297[11297]
- Allow a beat to ship monitoring data directly to an Elasticsearch monitoring cluster. {pull}9260[9260]
- Updated go-seccomp-bpf library to v1.1.0 which updates syscall lists for Linux v5.0. {pull}NNNN[NNNN]
- Add `add_observer_metadata` processor. {pull}11394[11394]
- Add `decode_csv_fields` processor. {pull}11753[11753]
- Add `convert` processor for converting data types of fields. {issue}8124[8124] {pull}11686[11686]
- New `extract_array` processor. {pull}11761[11761]
- Add number of goroutines to reported metrics. {pull}12135[12135]
- Add `proxy_disable` output flag to explicitly ignore proxy environment variables. {issue}11713[11713] {pull}12243[12243]
- Processor `add_cloud_metadata` adds fields `cloud.account.id` and `cloud.image.id` for AWS EC2. {pull}12307[12307]
- Add configurable bulk_flush_frequency in kafka output. {pull}12254[12254]
- Add `decode_base64_field` processor for decoding base64 field. {pull}11914[11914]
- Add support for reading the `network.iana_number` field by default to the community_id processor. {pull}12701[12701]
- Add aws overview dashboard. {issue}11007[11007] {pull}12175[12175]
- Add `decompress_gzip_field` processor. {pull}12733[12733]
- Add `timestamp` processor for parsing time fields. {pull}12699[12699]
- Add Oracle Tablespaces Dashboard {pull}12736[12736]
- Add `providers` setting to `add_cloud_metadata` processor. {pull}13812[13812]
*Auditbeat*
- Auditd module: Add `event.outcome` and `event.type` for ECS. {pull}11432[11432]
- Process: Add file hash of process executable. {pull}11722[11722]
- Socket: Add network.transport and network.community_id. {pull}12231[12231]
- Host: Fill top-level host fields. {pull}12259[12259]
*Filebeat*
- Add more info to message logged when a duplicated symlink file is found {pull}10845[10845]
- Add option to configure docker input with paths {pull}10687[10687]
- Add Netflow module to enrich flow events with geoip data. {pull}10877[10877]
- Set `event.category: network_traffic` for Suricata. {pull}10882[10882]
- Allow custom default settings with autodiscover (for example, use of CRI paths for logs). {pull}12193[12193]
- Allow disabling the hints-based autodiscover default behavior (fetching all logs). {pull}12193[12193]
- Change Suricata module pipeline to handle `destination.domain` being set if a reverse DNS processor is used. {issue}10510[10510]
- Add the `network.community_id` flow identifier field to the IPTables, Suricata, and Zeek modules. {pull}11005[11005]
- New Filebeat coredns module to ingest CoreDNS logs. It supports both native CoreDNS deployments and CoreDNS deployed in Kubernetes. {pull}11200[11200]
- New module for Cisco ASA logs. {issue}9200[9200] {pull}11171[11171]
- Added support for Cisco ASA fields to the netflow input. {pull}11201[11201]
- Configurable line terminator. {pull}11015[11015]
- Add Filebeat envoyproxy module. {pull}11700[11700]
- Add apache2 (httpd) log path (`/var/log/httpd`) to make the apache2 module work out of the box on Red Hat family OSes. {issue}11887[11887] {pull}11888[11888]
- Add support for new MongoDB additional diagnostic information. {pull}11952[11952]
- New module `panw` for Palo Alto Networks PAN-OS logs. {pull}11999[11999]
- Add RabbitMQ module. {pull}12032[12032]
- Add new `container` input (see the sketch after this list). {pull}12162[12162]
- Add timeouts on communication with docker daemon. {pull}12310[12310]
- `container` and `docker` inputs now support reading of labels and env vars written by the docker JSON file logging driver. {issue}8358[8358]
- Add a specific date processor to convert timezones so the same pipeline can be used whether convert_timezone is enabled or disabled. {pull}12253[12253]
- Add MSSQL module. {pull}12079[12079]
- Add ISO8601 date parsing support for system module. {pull}12568[12568] {pull}12578[12578]
- Update Kubernetes deployment manifest to use `container` input. {pull}12632[12632]
- Use correct OS path separator in `add_kubernetes_metadata` to support Windows nodes. {pull}9205[9205]
- Add support for client addresses with port in Apache error logs. {pull}12695[12695]
- Add `google-pubsub` input type for consuming messages from a Google Cloud Pub/Sub topic subscription. {pull}12746[12746]
- Add module for ingesting Cisco IOS logs over syslog. {pull}12748[12748]
- Add module for ingesting Google Cloud VPC flow logs. {pull}12747[12747]
- Report host metadata for Filebeat logs in Kubernetes. {pull}12790[12790]
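The new `container` input above is configured like other Filebeat inputs; a minimal sketch (the glob shown is the usual CRI log location and is an assumption, adjust it to the node's layout):

[source,yaml]
----
filebeat.inputs:
  - type: container
    paths:
      # Logs written by the container runtime (assumed path).
      - /var/log/containers/*.log
----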
*Heartbeat*
- Enable `add_observer_metadata` processor in default config. {pull}11394[11394]
*Journalbeat*
*Metricbeat*
- Add AWS SQS metricset. {pull}10684[10684] {issue}10053[10053]
- Add AWS s3_request metricset. {pull}10949[10949] {issue}10055[10055]
- Add s3_daily_storage metricset. {pull}10940[10940] {issue}10055[10055]
- Add `coredns` metricbeat module. {pull}10585[10585]
- Add SSL support for Metricbeat HTTP server. {pull}11482[11482] {issue}11457[11457]
- The `elasticsearch.index` metricset (with `xpack.enabled: true`) now collects `refresh.external_total_time_in_millis` fields from Elasticsearch. {pull}11616[11616]
- Allow module configurations to have variants. {pull}9118[9118]
- Add `timeseries.instance` field calculation. {pull}10293[10293]
- Added new disk states and raid level to the system/raid metricset. {pull}11613[11613]
- Added `path_name` and `start_name` to service metricset on Windows module. {issue}8364[8364] {pull}11877[11877]
- Add check on object name in the counter path if the instance name is missing. {issue}6528[6528] {pull}11878[11878]
- Add AWS cloudwatch metricset. {pull}11798[11798] {issue}11734[11734]
- Add `regions` in aws module config to specify target regions for querying cloudwatch metrics. {issue}11932[11932] {pull}11956[11956]
- Keep `etcd` follower members from reporting `leader` metricset events. {pull}12004[12004]
- Add overview dashboard to Consul module. {pull}10665[10665]
- New fields were added in the mysql/status metricset. {pull}12227[12227]
- Add Kubernetes metricset `proxy`. {pull}12312[12312]
- Add Kubernetes proxy dashboard to Kubernetes module. {pull}12734[12734]
- Always report Pod UID in the `pod` metricset. {pull}12345[12345]
- Add vSphere Virtual Machine operating system to `os` field in vSphere virtualmachine module. {pull}12391[12391]
- Add validation for elasticsearch and kibana modules' metricsets when xpack.enabled is set to true. {pull}12386[12386]
- Add CockroachDB module. {pull}12467[12467]
- Add support for metricbeat modules based on existing modules (a.k.a. light modules) {issue}12270[12270] {pull}12465[12465]
- Add a system/entropy metricset. {pull}12450[12450]
- Add Kubernetes metricset `controllermanager`. {pull}12409[12409]
- Add Kubernetes controller manager dashboard to Kubernetes module. {pull}12744[12744]
- Allow redis URL format in redis hosts config (see the sketch after this list). {pull}12408[12408]
- Add tags into ec2 metricset. {issue}12263[12263] {pull}12372[12372]
- Add Kubernetes metricset `scheduler`. {pull}12521[12521]
- Add Kubernetes scheduler dashboard to Kubernetes module. {pull}12749[12749]
- Add `beat` module. {pull}12181[12181] {pull}12615[12615]
- Collect tags for cloudwatch metricset in aws module. {issue}12263[12263] {pull}12480[12480]
- Add AWS RDS metricset. {pull}11620[11620] {issue}10054[10054]
- Add Oracle module. {pull}11890[11890]
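Two of the configuration changes above in one sketch: redis hosts in URL form and the new `regions` option of the aws module. Host, password, region, and namespace values are placeholders, and credential settings are omitted:

[source,yaml]
----
metricbeat.modules:
  - module: redis
    metricsets: [info, keyspace]
    # Redis URL format is now accepted in hosts (placeholder password).
    hosts: ["redis://:mypassword@localhost:6379"]
  - module: aws
    period: 300s
    metricsets: [cloudwatch]
    # Limit CloudWatch queries to these regions.
    regions: [us-east-1]
    metrics:
      - namespace: AWS/EC2
----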
*Packetbeat*
*Functionbeat*
- New options to configure roles and VPC. {pull}11779[11779]
- Export automation templates used to create functions. {pull}11923[11923]
- Configurable Amazon endpoint. {pull}12369[12369]
*Winlogbeat*
- Add support for reading from .evtx files, as shown below. {issue}4450[4450]
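Reading an archived `.evtx` file works by pointing an event log entry at the file path; a sketch (the path is hypothetical):

[source,yaml]
----
winlogbeat.event_logs:
  # `name` may be a path to a .evtx file instead of a channel name;
  # `no_more_events: stop` exits once the end of the file is reached.
  - name: C:\backup\Security.evtx
    no_more_events: stop
----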
==== Deprecated
@ -298,8 +105,6 @@ https://github.com/elastic/beats/compare/v7.0.0-alpha2...master[Check the HEAD diff]
*Filebeat*
- `docker` input is deprecated in favour of `container` (migration sketch below). {pull}12162[12162]
- `postgresql.log.timestamp` field is deprecated in favour of `@timestamp`. {pull}12338[12338]
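Migrating from the deprecated `docker` input to `container` is a rename plus an explicit path; a sketch (the path shown is the default docker JSON log location and is an assumption):

[source,yaml]
----
filebeat.inputs:
  # Before (deprecated):
  # - type: docker
  #   containers.ids: ['*']
  # After:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log
----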
*Heartbeat*

View File

@ -220,8 +220,8 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------
Dependency: github.com/aws/aws-sdk-go-v2
Version: v2.0.0-preview.5
Revision: d52522b5f4b95591ff6528d7c54923951aadf099
Version: v0.9.0
Revision: 098e15df3044cf1b04a222c1c33c3e6135ac89f3
License type (autodetected): Apache-2.0
./vendor/github.com/aws/aws-sdk-go-v2/LICENSE.txt:
--------------------------------------------------------------------
@ -700,8 +700,8 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------
Dependency: github.com/elastic/ecs
Version: v1.0.1
Revision: ab5e966864a6e2d4bc9fd6e2343e8d7f05f648fb
Version: v1.1.0
Revision: cc1d96bf3f70a8e6af1e436a0283ef22b6af3dd2
License type (autodetected): Apache-2.0
./vendor/github.com/elastic/ecs/LICENSE.txt:
--------------------------------------------------------------------
@ -758,6 +758,40 @@ License type (autodetected): Apache-2.0
Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/elastic/go-perf
Revision: 9bc9b58a3de9e63a1a8e27241ae3c61d3449782b
License type (autodetected): BSD-3-Clause
./vendor/github.com/elastic/go-perf/LICENSE:
--------------------------------------------------------------------
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: github.com/elastic/go-seccomp-bpf
Version: v1.1.0
@ -776,8 +810,8 @@ Elasticsearch, B.V. (https://www.elastic.co/).
--------------------------------------------------------------------
Dependency: github.com/elastic/go-structform
Version: v0.0.5
Revision: 1425975cf4eb470099fcf02cbe9389cf3a7028a3
Version: v0.0.6
Revision: a50e916a1b628ad1abc6f58b47668a2f60075ef1
License type (autodetected): Apache-2.0
./vendor/github.com/elastic/go-structform/LICENSE:
--------------------------------------------------------------------
@ -822,7 +856,8 @@ Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/elastic/go-windows
Revision: bb1581babc04d5cb29a2bfa7a9ac6781c730c8dd
Version: v1.0.1
Revision: 8c929792e70203792e3baf128c380e28754ae8b5
License type (autodetected): Apache-2.0
./vendor/github.com/elastic/go-windows/LICENSE.txt:
--------------------------------------------------------------------
@ -830,15 +865,15 @@ Apache License 2.0
-------NOTICE.txt-----
Elastic go-windows
Copyright 2017-2018 Elasticsearch B.V.
Copyright 2017-2019 Elasticsearch B.V.
This product includes software developed at
Elasticsearch, B.V. (https://www.elastic.co/).
--------------------------------------------------------------------
Dependency: github.com/elastic/gosigar
Version: HEAD
Revision: f48d9dc84bc636d361c33fab2d7d753b705fd373
Version: v0.10.5
Revision: 7aef3366157f2bfdf3e068f73ce7193573e88e0c
License type (autodetected): Apache-2.0
./vendor/github.com/elastic/gosigar/LICENSE:
--------------------------------------------------------------------
@ -854,16 +889,6 @@ This product includes a number of subcomponents with
separate copyright notices and license terms. Your use of these
subcomponents is subject to the terms and conditions of the
subcomponent's license, as noted in the LICENSE file.
--------------------------------------------------------------------
Dependency: github.com/ericchiang/k8s
Version: =v1.0.0/in-cluster-ipv6
Revision: 33b346590d1dd4eaac217471671f736bcdab492d
License type (autodetected): Apache-2.0
./vendor/github.com/ericchiang/k8s/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/fatih/color
Version: v1.5.0
@ -1517,6 +1542,48 @@ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------
Dependency: github.com/gogo/protobuf
Revision: 65acae22fc9d1fe290b33faa2bd64cdc20a463a0
License type (autodetected): BSD-3-Clause
./vendor/github.com/gogo/protobuf/LICENSE:
--------------------------------------------------------------------
Copyright (c) 2013, The GoGo Authors. All rights reserved.
Protocol Buffers for Go with Gadgets
Go support for Protocol Buffers - Google's data interchange format
Copyright 2010 The Go Authors. All rights reserved.
https://github.com/golang/protobuf
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: github.com/gogo/protobuf
Revision: 636bf0302bc95575d69441b25a2603156ffdddf1
@ -1677,6 +1744,49 @@ License type (autodetected): Apache-2.0
Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/google/go-cmp
Revision: 1b316004397f1f336546ca058ddb5b95c41a8772
License type (autodetected): BSD-3-Clause
./vendor/github.com/google/go-cmp/LICENSE:
--------------------------------------------------------------------
Copyright (c) 2017 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: github.com/google/gofuzz
Revision: f140a6486e521aad38f5917de355cbf147cc0496
License type (autodetected): Apache-2.0
./vendor/github.com/google/gofuzz/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/google/uuid
Revision: 281f560d28af7174109514e936f94c2ab2cb2823
@ -1746,6 +1856,15 @@ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: github.com/googleapis/gnostic
Revision: 25d8b0b6698593f520d9d8dc5a88e6b16ca9ecc0
License type (autodetected): Apache-2.0
./vendor/github.com/googleapis/gnostic/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/gorhill/cronexpr
Revision: d520615e531a6bf3fb69406b9eba718261285ec8
@ -1755,6 +1874,376 @@ License type (autodetected): Apache-2.0
Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/hashicorp/go-uuid
Revision: 4f571afc59f3043a65f8fe6bf46d887b10a01d43
License type (autodetected): MPL-2.0
./vendor/github.com/hashicorp/go-uuid/LICENSE:
--------------------------------------------------------------------
Mozilla Public License, version 2.0
1. Definitions
1.1. "Contributor"
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. "Incompatible With Secondary Licenses"
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the terms of
a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in a
separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible, whether
at the time of the initial grant or subsequently, any and all of the
rights conveyed by this License.
1.10. "Modifications"
means any of the following:
a. any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the License,
by the making, using, selling, offering for sale, having made, import,
or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, "control" means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights to
grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter the
recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty, or
limitations of liability) contained within the Source Code Form of the
Covered Software, except that You may alter any license notices to the
extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute,
judicial order, or regulation then You must: (a) comply with the terms of
this License to the maximum extent possible; and (b) describe the
limitations and the code they affect. Such description must be placed in a
text file included with all distributions of the Covered Software under
this License. Except to the extent prohibited by statute or regulation,
such description must be sufficiently detailed for a recipient of ordinary
skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing
basis, if such Contributor fails to notify You of the non-compliance by
some reasonable means prior to 60 days after You have come back into
compliance. Moreover, Your grants from a particular Contributor are
reinstated on an ongoing basis if such Contributor notifies You of the
non-compliance by some reasonable means, this is the first time You have
received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt
of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis,
without warranty of any kind, either expressed, implied, or statutory,
including, without limitation, warranties that the Covered Software is free
of defects, merchantable, fit for a particular purpose or non-infringing.
The entire risk as to the quality and performance of the Covered Software
is with You. Should any Covered Software prove defective in any respect,
You (not any Contributor) assume the cost of any necessary servicing,
repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under
this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from
such party's negligence to the extent applicable law prohibits such
limitation. Some jurisdictions do not allow the exclusion or limitation of
incidental or consequential damages, so this exclusion and limitation may
not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts
of a jurisdiction where the defendant maintains its principal place of
business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions. Nothing
in this Section shall prevent a party's ability to bring cross-claims or
counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides that
the language of a contract shall be construed against the drafter shall not
be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses If You choose to distribute Source Code Form that is
Incompatible With Secondary Licenses under the terms of this version of
the License, the notice described in Exhibit B of this License must be
attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
relevant directory) where a recipient would be likely to look for such a
notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible
With Secondary Licenses", as defined by
the Mozilla Public License, v. 2.0.
--------------------------------------------------------------------
Dependency: github.com/hashicorp/golang-lru
Revision: 59383c442f7d7b190497e9bb8fc17a48d06cd03f
@ -2204,6 +2693,40 @@ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: github.com/jcmturner/gofork
Revision: dc7c13fece037a4a36e2b3c69db4991498d30692
License type (autodetected): BSD-3-Clause
./vendor/github.com/jcmturner/gofork/LICENSE:
--------------------------------------------------------------------
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: github.com/jmespath/go-jmespath
Version: 0.2.2
@ -2242,6 +2765,34 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
--------------------------------------------------------------------
Dependency: github.com/json-iterator/go
Revision: 27518f6661eba504be5a7a9a9f6d9460d892ade3
License type (autodetected): MIT
./vendor/github.com/json-iterator/go/LICENSE:
--------------------------------------------------------------------
MIT License
Copyright (c) 2016 json-iterator
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------
Dependency: github.com/jstemmer/go-junit-report
Revision: 385fac0ced9acaae6dc5b39144194008ded00697
@ -2504,8 +3055,8 @@ SOFTWARE.
--------------------------------------------------------------------
Dependency: github.com/miekg/dns
Version: v1.0.8
Revision: 5a2b9fab83ff0f8bfc99684bd5f43a37abe560f1
Version: v1.1.15
Revision: b13675009d59c97f3721247d9efa8914e1866a5b
License type (autodetected): BSD-3-Clause
./vendor/github.com/miekg/dns/LICENSE:
--------------------------------------------------------------------
@ -2598,6 +3149,24 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
--------------------------------------------------------------------
Dependency: github.com/modern-go/concurrent
Revision: bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94
License type (autodetected): Apache-2.0
./vendor/github.com/modern-go/concurrent/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/modern-go/reflect2
Revision: 94122c33edd36123c84d5368cfb2b69df93a0ec8
License type (autodetected): Apache-2.0
./vendor/github.com/modern-go/reflect2/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/OneOfOne/xxhash
Revision: 74ace4fe5525ef62ce28d5093d6b0faaa6a575f3
@ -3123,8 +3692,8 @@ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: github.com/Shopify/sarama
Version: v1.20.1
Revision: 03a43f93cd29dc549e6d9b11892795c206f9c38c
Version: v1.23.1
Revision: 46c83074a05474240f9620fb7c70fb0d80ca401a
License type (autodetected): MIT
./vendor/github.com/Shopify/sarama/LICENSE:
--------------------------------------------------------------------
@ -3438,6 +4007,24 @@ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: github.com/xdg/scram
Revision: 7eeb5667e42c09cb51bf7b7c28aea8c56767da90
License type (autodetected): Apache-2.0
./vendor/github.com/xdg/scram/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/xdg/stringprep
Revision: 73f8eece6fdcd902c185bf651de50f3828bed5ed
License type (autodetected): Apache-2.0
./vendor/github.com/xdg/stringprep/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: github.com/yuin/gopher-lua
Revision: b402f3114ec730d8bddb074a6c137309f561aa78
@ -4205,6 +4792,42 @@ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: gopkg.in/jcmturner/aescts.v1
Revision: f6abebb3171c4c1b1fea279cb7c7325020a26290
License type (autodetected): Apache-2.0
./vendor/gopkg.in/jcmturner/aescts.v1/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: gopkg.in/jcmturner/dnsutils.v1
Revision: 13eeb8d49ffb74d7a75784c35e4d900607a3943c
License type (autodetected): Apache-2.0
./vendor/gopkg.in/jcmturner/dnsutils.v1/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: gopkg.in/jcmturner/gokrb5.v7
Revision: 363118e62befa8a14ff01031c025026077fe5d6d
License type (autodetected): Apache-2.0
./vendor/gopkg.in/jcmturner/gokrb5.v7/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: gopkg.in/jcmturner/rpc.v1
Revision: 99a8ce2fbf8b8087b6ed12a37c61b10f04070043
License type (autodetected): Apache-2.0
./vendor/gopkg.in/jcmturner/rpc.v1/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: gopkg.in/mgo.v2
Revision: 3f83fa5005286a7fe593b055f0d7771a7dce4655
@ -4305,16 +4928,30 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: gopkg.in/yaml.v2
Revision: cd8b52f8269e0feb286dfeef29f8fe4d5b397e0b
Revision: 5420a8b6744d3b0345ab293f6fcba19c978f1183
License type (autodetected): Apache-2.0
./vendor/gopkg.in/yaml.v2/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
-------NOTICE-----
Copyright 2011-2016 Canonical Ltd.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--------------------------------------------------------------------
Dependency: gopkg.in/yaml.v2
Revision: cd8b52f8269e0feb286dfeef29f8fe4d5b397e0b
Revision: 5420a8b6744d3b0345ab293f6fcba19c978f1183
License type (autodetected): MIT
./vendor/gopkg.in/yaml.v2/LICENSE.libyaml:
--------------------------------------------------------------------
@ -4387,6 +5024,109 @@ Parts of this package were made available under the license covering
the Go language and all attended core libraries. That license follows.
--------------------------------------------------------------------------------
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------
Dependency: k8s.io/api
Revision: b90922c02518d683852c467209bbab0a76db36e0
License type (autodetected): Apache-2.0
./vendor/k8s.io/api/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: k8s.io/apimachinery
Revision: bfcf53abc9f82bad3e534fcb1c36599d3c989ebf
License type (autodetected): Apache-2.0
./vendor/k8s.io/apimachinery/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: k8s.io/client-go
Version: v12.0.0
Revision: 78d2af792babf2dd937ba2e2a8d99c753a5eda89
License type (autodetected): Apache-2.0
./vendor/k8s.io/client-go/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: k8s.io/klog
Revision: 6a023d6d0e0954feabd46dc2d3a6a2c3c991fe1a
License type (autodetected): Apache-2.0
./vendor/k8s.io/klog/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: k8s.io/utils
Revision: 3dccf664f023863740c508fb4284e49742bedfa4
License type (autodetected): Apache-2.0
./vendor/k8s.io/utils/LICENSE:
--------------------------------------------------------------------
Apache License 2.0
--------------------------------------------------------------------
Dependency: sigs.k8s.io/yaml
Revision: 4cd0c284b15f1735b8cc247df097d262b8903f9f
License type (autodetected): MIT
./vendor/sigs.k8s.io/yaml/LICENSE:
--------------------------------------------------------------------
The MIT License (MIT)
Copyright (c) 2014 Sam Ghods
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without

View File

@ -1,20 +1,21 @@
### Documentation
# This is a Vagrantfile for Beats development.
#
# Boxes
# This is a Vagrantfile for Beats development and testing. These are unofficial
# environments to help developers test things in different environments.
#
# Notes
# =====
#
# win2012
# -------
# This box is used as a Windows development and testing environment for Beats.
# win2012, win2016, win2019
# -------------------------
#
# Usage and Features:
# - Two users exist: Administrator and Vagrant. Both have the password: vagrant
# - Use 'vagrant ssh' to open a Windows command prompt.
# - Use 'vagrant rdp' to open a Windows Remote Desktop session. Mac users must
# install the Microsoft Remote Desktop Client from the App Store.
# - There is a desktop shortcut labeled "Beats Shell" that opens a command prompt
# to C:\Gopath\src\github.com\elastic\beats where the code is mounted.
# To login install Microsoft Remote Desktop Client (available in Mac App Store).
# Then run 'vagrant rdp' and login as user/pass vagrant/vagrant. Or you can
# manually configure your RDP client to connect to the mapped 3389 port as shown
# by 'vagrant port win2019'.
#
# The provisioning currently does not install libpcap sources or a pcap driver
# (like npcap) so Packetbeat will not build/run without some manual setup.
#
# solaris
# -------------------
@ -25,38 +26,93 @@
# - Use gmake instead of make.
# - Folder syncing doesn't work well. Consider copying the files into the box or
# cloning the project inside the box.
###
# Read the branch's Go version from the .go-version file.
GO_VERSION = File.read(File.join(File.dirname(__FILE__), ".go-version")).strip
# Provisioning for Windows PowerShell
$winPsProvision = <<SCRIPT
echo 'Creating github.com\elastic in the GOPATH'
New-Item -itemtype directory -path "C:\\Gopath\\src\\github.com\\elastic" -force
echo "Symlinking C:\\Vagrant to C:\\Gopath\\src\\github.com\\elastic"
cmd /c mklink /d C:\\Gopath\\src\\github.com\\elastic\\beats \\\\vboxsvr\\vagrant
$gopath_beats = "C:\\Gopath\\src\\github.com\\elastic\\beats"
if (-Not (Test-Path $gopath_beats)) {
echo 'Creating github.com\\elastic in the GOPATH'
New-Item -itemtype directory -path "C:\\Gopath\\src\\github.com\\elastic" -force
echo "Symlinking C:\\Vagrant to C:\\Gopath\\src\\github.com\\elastic"
cmd /c mklink /d $gopath_beats \\\\vboxsvr\\vagrant
}
echo "Installing gvm to manage go version"
[Net.ServicePointManager]::SecurityProtocol = "tls12"
Invoke-WebRequest -URI https://github.com/andrewkroh/gvm/releases/download/v0.1.0/gvm-windows-amd64.exe -Outfile C:\Windows\System32\gvm.exe
C:\Windows\System32\gvm.exe --format=powershell #{GO_VERSION} | Invoke-Expression
go version
if (-Not (Get-Command "gvm" -ErrorAction SilentlyContinue)) {
echo "Installing gvm to manage go version"
[Net.ServicePointManager]::SecurityProtocol = "tls12"
Invoke-WebRequest -URI https://github.com/andrewkroh/gvm/releases/download/v0.2.1/gvm-windows-amd64.exe -Outfile C:\\Windows\\System32\\gvm.exe
C:\\Windows\\System32\\gvm.exe --format=powershell #{GO_VERSION} | Invoke-Expression
go version
echo "Configure environment variables"
[System.Environment]::SetEnvironmentVariable("GOROOT", "C:\\Users\\vagrant\\.gvm\\versions\\go#{GO_VERSION}.windows.amd64", [System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable("PATH", "$env:GOROOT\\bin;$env:PATH", [System.EnvironmentVariableTarget]::Machine)
echo "Configure Go environment variables"
[System.Environment]::SetEnvironmentVariable("GOPATH", "C:\\Gopath", [System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable("GOROOT", "C:\\Users\\vagrant\\.gvm\\versions\\go#{GO_VERSION}.windows.amd64", [System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable("PATH", "%GOROOT%\\bin;$env:PATH;C:\\Gopath\\bin", [System.EnvironmentVariableTarget]::Machine)
}
echo "Creating Beats Shell desktop shortcut"
$WshShell = New-Object -comObject WScript.Shell
$Shortcut = $WshShell.CreateShortcut("$Home\\Desktop\\Beats Shell.lnk")
$Shortcut.TargetPath = "cmd.exe"
$Shortcut.Arguments = '/c "SET GOROOT=C:\\Users\\vagrant\\.gvm\\versions\\go#{GO_VERSION}.windows.amd64&PATH=C:\\Users\\vagrant\\.gvm\\versions\\go#{GO_VERSION}.windows.amd64\\bin;%PATH%" && START'
$Shortcut.WorkingDirectory = "C:\\Gopath\\src\\github.com\\elastic\\beats"
$Shortcut.Save()
$shell_link = "$Home\\Desktop\\Beats Shell.lnk"
if (-Not (Test-Path $shell_link)) {
echo "Creating Beats Shell desktop shortcut"
$WshShell = New-Object -comObject WScript.Shell
$Shortcut = $WshShell.CreateShortcut($shell_link)
$Shortcut.TargetPath = "powershell.exe"
$Shortcut.Arguments = "-noexit -command '$gopath_beats'"
$Shortcut.WorkingDirectory = $gopath_beats
$Shortcut.Save()
}
echo "Disable automatic updates"
$AUSettings = (New-Object -com "Microsoft.Update.AutoUpdate").Settings
$AUSettings.NotificationLevel = 1
$AUSettings.Save()
Try {
echo "Disabling automatic updates"
$AUSettings = (New-Object -com "Microsoft.Update.AutoUpdate").Settings
$AUSettings.NotificationLevel = 1
$AUSettings.Save()
} Catch {
echo "Failed to disable automatic updates."
}
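# Bootstrap the Chocolatey package manager if it is missing; the tool installs below rely on it.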
if (-Not (Get-Command "choco" -ErrorAction SilentlyContinue)) {
Set-ExecutionPolicy Bypass -Scope Process -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
}
choco feature disable -n=showDownloadProgress
if (-Not (Get-Command "python" -ErrorAction SilentlyContinue)) {
echo "Installing python2"
choco install python2 -y -r
refreshenv
$env:PATH = "$env:PATH;C:\\Python27;C:\\Python27\\Scripts"
}
if (-Not (Get-Command "pip" -ErrorAction SilentlyContinue)) {
echo "Installing pip"
Invoke-WebRequest https://bootstrap.pypa.io/get-pip.py -OutFile get-pip.py
python get-pip.py -U --force-reinstall 2>&1 | %{ "$_" }
rm get-pip.py
} else {
echo "Updating pip"
python -m pip install --upgrade pip 2>&1 | %{ "$_" }
}
if (-Not (Get-Command "virtualenv" -ErrorAction SilentlyContinue)) {
echo "Installing virtualenv"
python -m pip install virtualenv 2>&1 | %{ "$_" }
}
if (-Not (Get-Command "git" -ErrorAction SilentlyContinue)) {
echo "Installing git"
choco install git -y -r
}
if (-Not (Get-Command "gcc" -ErrorAction SilentlyContinue)) {
echo "Installing mingw (gcc)"
choco install mingw -y -r
}
SCRIPT
# Provisioning for Unix/Linux
@ -83,158 +139,148 @@ fi
SCRIPT
end
# Provision packages for Linux Debian.
def linuxDebianProvision()
return <<SCRIPT
#!/usr/bin/env bash
set -eo pipefail
apt-get update
apt-get install -y make gcc python-pip python-virtualenv git
SCRIPT
end
Vagrant.configure(2) do |config|
# Windows Server 2012 R2
config.vm.define "win2012", primary: true do |win2012|
win2012.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-win2012-r2-virtualbox-2016-10-28_1224.box"
win2012.vm.guest = :windows
config.vm.define "win2012", primary: true do |c|
c.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-win2012-r2-virtualbox-2016-10-28_1224.box"
c.vm.guest = :windows
# Communicator for windows boxes
win2012.vm.communicator = "winrm"
c.vm.communicator = "winrm"
# Port forward WinRM and RDP
win2012.vm.network :forwarded_port, guest: 22, host: 2222, id: "ssh", auto_correct: true
win2012.vm.network :forwarded_port, guest: 3389, host: 33389, id: "rdp", auto_correct: true
win2012.vm.network :forwarded_port, guest: 5985, host: 55985, id: "winrm", auto_correct: true
c.vm.network :forwarded_port, guest: 22, host: 2222, id: "ssh", auto_correct: true
c.vm.network :forwarded_port, guest: 3389, host: 33389, id: "rdp", auto_correct: true
c.vm.network :forwarded_port, guest: 5985, host: 55985, id: "winrm", auto_correct: true
win2012.vm.provision "shell", inline: $winPsProvision
c.vm.provision "shell", inline: $winPsProvision
end
config.vm.define "win2016", primary: true do |c|
c.vm.box = "StefanScherer/windows_2016"
c.vm.provision "shell", inline: $winPsProvision, privileged: false
end
config.vm.define "win2019", primary: true do |c|
c.vm.box = "StefanScherer/windows_2019"
c.vm.provision "shell", inline: $winPsProvision, privileged: false
end
# Solaris 11.2
config.vm.define "solaris", primary: true do |solaris|
solaris.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-solaris-11.2-virtualbox-2016-11-02_1603.box"
solaris.vm.network :forwarded_port, guest: 22, host: 2223, id: "ssh", auto_correct: true
config.vm.define "solaris", primary: true do |c|
c.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-solaris-11.2-virtualbox-2016-11-02_1603.box"
c.vm.network :forwarded_port, guest: 22, host: 2223, id: "ssh", auto_correct: true
solaris.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: $unixProvision, privileged: false
end
# FreeBSD 11.0
config.vm.define "freebsd", primary: true do |freebsd|
freebsd.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-freebsd-11.0-virtualbox-2016-11-02_1638.box"
freebsd.vm.network :forwarded_port, guest: 22, host: 2224, id: "ssh", auto_correct: true
config.vm.define "freebsd", primary: true do |c|
c.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-freebsd-11.0-virtualbox-2016-11-02_1638.box"
c.vm.network :forwarded_port, guest: 22, host: 2224, id: "ssh", auto_correct: true
# Must use NFS to sync a folder on FreeBSD and this requires a host-only network.
# To enable the /vagrant folder, set disabled to false and uncomment the private_network.
config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", :nfs => true, disabled: true
#config.vm.network "private_network", ip: "192.168.135.18"
c.vm.synced_folder ".", "/vagrant", id: "vagrant-root", :nfs => true, disabled: true
#c.vm.network "private_network", ip: "192.168.135.18"
freebsd.vm.hostname = "beats-tester"
freebsd.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.hostname = "beats-tester"
c.vm.provision "shell", inline: $unixProvision, privileged: false
end
# OpenBSD 5.9-stable
config.vm.define "openbsd", primary: true do |openbsd|
openbsd.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-openbsd-5.9-current-virtualbox-2016-11-02_2007.box"
openbsd.vm.network :forwarded_port, guest: 22, host: 2225, id: "ssh", auto_correct: true
config.vm.define "openbsd", primary: true do |c|
c.vm.box = "https://s3.amazonaws.com/beats-files/vagrant/beats-openbsd-5.9-current-virtualbox-2016-11-02_2007.box"
c.vm.network :forwarded_port, guest: 22, host: 2225, id: "ssh", auto_correct: true
config.vm.synced_folder ".", "/vagrant", type: "rsync", disabled: true
config.vm.provider :virtualbox do |vbox|
c.vm.synced_folder ".", "/vagrant", type: "rsync", disabled: true
c.vm.provider :virtualbox do |vbox|
vbox.check_guest_additions = false
vbox.functional_vboxsf = false
end
openbsd.vm.provision "shell", inline: $unixProvision, privileged: false
end
config.vm.define "precise64", primary: true do |c|
c.vm.box = "ubuntu/precise64"
c.vm.network :forwarded_port, guest: 22, host: 2226, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision, privileged: false
c.vm.synced_folder ".", "/vagrant", type: "virtualbox"
end
config.vm.define "precise32", primary: true do |c|
c.vm.box = "ubuntu/precise32"
c.vm.network :forwarded_port, guest: 22, host: 2226, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision("386"), privileged: false
c.vm.synced_folder ".", "/vagrant", type: "virtualbox"
c.vm.provision "shell", inline: linuxDebianProvision
end
config.vm.define "centos6", primary: true do |c|
c.vm.box = "bento/centos-6.9"
c.vm.network :forwarded_port, guest: 22, host: 2229, id: "ssh", auto_correct: true
config.vm.define "precise64", primary: true do |c|
c.vm.box = "ubuntu/precise64"
c.vm.network :forwarded_port, guest: 22, host: 2227, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision, privileged: false
c.vm.provision "shell", inline: "yum install -y make gcc python-pip python-virtualenv git"
c.vm.synced_folder ".", "/vagrant", type: "virtualbox"
end
config.vm.define "fedora27", primary: true do |c|
c.vm.box = "bento/fedora-27"
c.vm.network :forwarded_port, guest: 22, host: 2227, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision, privileged: false
c.vm.provision "shell", inline: "dnf install -y make gcc python-pip python-virtualenv git"
c.vm.synced_folder ".", "/vagrant", type: "virtualbox"
end
config.vm.define "archlinux", primary: true do |c|
c.vm.box = "archlinux/archlinux"
c.vm.network :forwarded_port, guest: 22, host: 2228, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision, privileged: false
c.vm.provision "shell", inline: "pacman -Sy && pacman -S --noconfirm make gcc python-pip python-virtualenv git"
c.vm.synced_folder ".", "/vagrant", type: "virtualbox"
c.vm.provision "shell", inline: linuxDebianProvision
end
config.vm.define "ubuntu1804", primary: true do |c|
c.vm.box = "ubuntu/bionic64"
c.vm.network :forwarded_port, guest: 22, host: 2229, id: "ssh", auto_correct: true
c.vm.network :forwarded_port, guest: 22, host: 2228, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision, privileged: false
c.vm.provision "shell", inline: "apt-get update && apt-get install -y make gcc python-pip python-virtualenv git"
c.vm.synced_folder ".", "/vagrant", type: "virtualbox"
c.vm.provision "shell", inline: linuxDebianProvision
end
config.vm.define "sles12", primary: true do |c|
c.vm.box = "elastic/sles-12-x86_64"
c.vm.network :forwarded_port, guest: 22, host: 2230, id: "ssh", auto_correct: true
config.vm.define "centos6", primary: true do |c|
c.vm.box = "bento/centos-6.10"
c.vm.network :forwarded_port, guest: 22, host: 2229, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision, privileged: false
c.vm.provision "shell", inline: "pip install virtualenv"
c.vm.synced_folder ".", "/vagrant", type: "virtualbox"
end
# Windows Server 2016
config.vm.define "win2016", primary: true do |machine|
machine.vm.box = "elastic/windows-2016-x86_64"
machine.vm.provision "shell", inline: $winPsProvision
machine.vm.provider "virtualbox" do |v|
v.memory = 4096
end
c.vm.provision "shell", inline: "yum install -y make gcc python-pip python-virtualenv git rpm-devel"
end
config.vm.define "centos7", primary: true do |c|
c.vm.box = "bento/centos-7"
c.vm.network :forwarded_port, guest: 22, host: 2231, id: "ssh", auto_correct: true
c.vm.network :forwarded_port, guest: 22, host: 2230, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision, privileged: false
c.vm.provision "shell", inline: "yum install -y make gcc python-pip python-virtualenv git"
c.vm.synced_folder ".", "/vagrant", type: "virtualbox"
c.vm.provision "shell", inline: "yum install -y make gcc python-pip python-virtualenv git rpm-devel"
end
end
config.vm.define "fedora29", primary: true do |c|
c.vm.box = "bento/fedora-29"
c.vm.network :forwarded_port, guest: 22, host: 2231, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision, privileged: false
c.vm.provision "shell", inline: "dnf install -y make gcc python-pip python-virtualenv git rpm-devel"
end
config.vm.define "sles12", primary: true do |c|
c.vm.box = "elastic/sles-12-x86_64"
c.vm.network :forwarded_port, guest: 22, host: 2232, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision, privileged: false
c.vm.provision "shell", inline: "pip install virtualenv"
end
config.vm.define "archlinux", primary: true do |c|
c.vm.box = "archlinux/archlinux"
c.vm.network :forwarded_port, guest: 22, host: 2233, id: "ssh", auto_correct: true
c.vm.provision "shell", inline: $unixProvision, privileged: false
c.vm.provision "shell", inline: linuxGvmProvision, privileged: false
c.vm.provision "shell", inline: "pacman -Sy && pacman -S --noconfirm make gcc python-pip python-virtualenv git"
end
end


@ -1,4 +1,4 @@
FROM golang:1.12.4
FROM golang:1.12.9
RUN \
apt-get update \


@ -14,7 +14,7 @@
auditbeat.config.modules:
# Glob pattern for configuration reloading
path: ${path.config}/conf.d/*.yml
path: ${path.config}/modules.d/*.yml
# Period on which files under path should be checked for changes
reload.period: 10s


@ -14,7 +14,7 @@
auditbeat.config.modules:
# Glob pattern for configuration reloading
path: ${path.config}/conf.d/*.yml
path: ${path.config}/modules.d/*.yml
# Period on which files under path should be checked for changes
reload.period: 10s
@ -152,7 +152,7 @@ auditbeat.modules:
#flush.min_events: 2048
# Maximum duration after which events are available to the outputs,
# if the number of events stored in the queue is < min_flush_events.
# if the number of events stored in the queue is < `flush.min_events`.
#flush.timeout: 1s
# The spool queue will store events in a local spool file, before
@ -1214,7 +1214,7 @@ logging.files:
#logging.json: false
#============================== Xpack Monitoring ===============================
#============================== X-Pack Monitoring ===============================
# Auditbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
@ -1222,6 +1222,11 @@ logging.files:
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Auditbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.


@ -160,7 +160,7 @@ processors:
# "publish", "service".
#logging.selectors: ["*"]
#============================== Xpack Monitoring ===============================
#============================== X-Pack Monitoring ===============================
# auditbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
@ -168,6 +168,11 @@ processors:
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Auditbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.


@ -19,7 +19,6 @@ services:
pid: host
cap_add:
- AUDIT_CONTROL
- AUDIT_READ
# This is a proxy used to block beats until all services are healthy.
# See: https://github.com/docker/compose/issues/4369


@ -1,6 +1,5 @@
[float]
[[ulimit]]
=== {beatname_uc} fails to watch folders because too many files are open?
=== {beatname_uc} fails to watch folders because too many files are open
Because of the way file monitoring is implemented on macOS, you may see a
warning similar to the following:


@ -1,8 +1,8 @@
[[faq]]
== Frequently asked questions
== Common problems
This section contains frequently asked questions about {beatname_uc}. Also check
out the
This section describes common problems you might encounter with
{beatname_uc}. Also check out the
https://discuss.elastic.co/c/beats/{beatname_lc}[{beatname_uc} discussion forum].
include::./faq-ulimit.asciidoc[]


@ -1,6 +1,6 @@
= Auditbeat Reference
:libbeat-dir: ../../libbeat
:libbeat-dir: {docdir}/../../libbeat
include::{libbeat-dir}/docs/version.asciidoc[]
@ -13,7 +13,6 @@ include::{asciidoc-dir}/../../shared/attributes.asciidoc[]
:github_repo_name: beats
:discuss_forum: beats/{beatname_lc}
:beat_default_index_prefix: {beatname_lc}
:has_ml_jobs: yes
:deb_os:
:rpm_os:
:mac_os:


@ -7,9 +7,4 @@ This section contains detailed information about the metric collecting modules
contained in {beatname_uc}. More details about each module can be found under
the links below.
//pass macro block used here to remove Edit links from modules documentation because it is generated
pass::[<?edit_url?>]
include::modules_list.asciidoc[]


@ -28,9 +28,14 @@ import (
auditbeat "github.com/elastic/beats/auditbeat/scripts/mage"
devtools "github.com/elastic/beats/dev-tools/mage"
// mage:import
"github.com/elastic/beats/dev-tools/mage/target/common"
)
func init() {
common.RegisterCheckDeps(Update)
devtools.BeatDescription = "Audit the activities of users and processes on your system."
}
@ -66,11 +71,6 @@ func CrossBuildGoDaemon() error {
return devtools.CrossBuildGoDaemon()
}
// Clean cleans all generated files and build artifacts.
func Clean() error {
return devtools.Clean()
}
// Package packages the Beat for distribution.
// Use SNAPSHOT=true to build snapshots.
// Use PLATFORMS to control the target platforms.
@ -149,16 +149,6 @@ func Docs() {
mg.Deps(auditbeat.ModuleDocs, auditbeat.FieldDocs)
}
// Fmt formats source code and adds file headers.
func Fmt() {
mg.Deps(devtools.Format)
}
// Check runs fmt and update then returns an error if any modifications are found.
func Check() {
mg.SerialDeps(devtools.Format, Update, devtools.Check)
}
// IntegTest executes integration tests (it uses Docker to run the tests).
func IntegTest() {
devtools.AddIntegTestUsage()


@ -1,9 +1,6 @@
## Executions.
-a always,exit -F arch=b32 -S execve,execveat -k exec
## External access (warning: these can be expensive to audit).
-a always,exit -F arch=b32 -S accept4,bind,connect -F key=external-access
## Identity changes.
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity


@ -7,9 +7,6 @@
## Executions.
-a always,exit -F arch=b64 -S execve,execveat -k exec
## External access (warning: these can be expensive to audit).
-a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access
## Identity changes.
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity


@ -2,7 +2,7 @@
"objects": [
{
"attributes": {
"description": "",
"description": "Command executions",
"kibanaSavedObjectMeta": {
"searchSourceJSON": {
"filter": [],
@ -13,7 +13,7 @@
}
},
"savedSearchId": "d382f5b0-c1c6-11e7-8995-936807a28b16-ecs",
"title": "Error Codes [Auditbeat Auditd Executions] ECS",
"title": "Error Codes [Auditbeat Auditd] ECS",
"uiStateJSON": {},
"version": 1,
"visState": {
@ -46,7 +46,7 @@
"legendPosition": "right",
"type": "pie"
},
"title": "Error Codes [Auditbeat Auditd Executions] ECS",
"title": "Error Codes [Auditbeat Auditd] ECS",
"type": "pie"
}
},
@ -121,7 +121,7 @@
}
},
"savedSearchId": "d382f5b0-c1c6-11e7-8995-936807a28b16-ecs",
"title": "Exe Name Tag Cloud [Auditbeat Auditd Executions] ECS",
"title": "Exe Name Tag Cloud [Auditbeat Auditd] ECS",
"uiStateJSON": {},
"version": 1,
"visState": {
@ -152,7 +152,7 @@
"orientation": "single",
"scale": "linear"
},
"title": "Exe Name Tag Cloud [Auditbeat Auditd Executions] ECS",
"title": "Exe Name Tag Cloud [Auditbeat Auditd] ECS",
"type": "tagcloud"
}
},
@ -251,7 +251,7 @@
},
{
"attributes": {
"description": "",
"description": "Overview of kernel executions",
"hits": 0,
"kibanaSavedObjectMeta": {
"searchSourceJSON": {
@ -333,4 +333,4 @@
}
],
"version": "6.2.4"
}


@ -6,7 +6,7 @@
"kibanaSavedObjectMeta": {
"searchSourceJSON": {}
},
"title": "Event Actions [Auditbeat Auditd Overview] ECS",
"title": "Event Actions [Auditbeat Auditd] ECS",
"uiStateJSON": {},
"version": 1,
"visState": {
@ -65,7 +65,7 @@
"time_field": "@timestamp",
"type": "timeseries"
},
"title": "Event Actions [Auditbeat Auditd Overview] ECS",
"title": "Event Actions [Auditbeat Auditd] ECS",
"type": "metrics"
}
},
@ -280,4 +280,4 @@
}
],
"version": "6.2.4"
}


@ -22,10 +22,12 @@
description: BLAKE2b-512 hash of the file.
- name: md5
overwrite: true
type: keyword
description: MD5 hash of the file.
- name: sha1
overwrite: true
type: keyword
description: SHA1 hash of the file.
@ -34,6 +36,7 @@
description: SHA224 hash of the file.
- name: sha256
overwrite: true
type: keyword
description: SHA256 hash of the file.
@ -58,6 +61,7 @@
description: SHA3_512 hash of the file.
- name: sha512
overwrite: true
type: keyword
description: SHA512 hash of the file.


@ -376,7 +376,7 @@
"id": "3",
"params": {
"customLabel": "Path",
"field": "file.path.raw",
"field": "file.path",
"order": "desc",
"orderBy": "1",
"size": 10
@ -541,7 +541,7 @@
"id": "2",
"params": {
"customLabel": "File",
"field": "file.path.raw",
"field": "file.path",
"order": "desc",
"orderBy": "1",
"size": 1
@ -816,7 +816,7 @@
"id": "2",
"params": {
"customLabel": "Path",
"field": "file.path.raw",
"field": "file.path",
"order": "desc",
"orderBy": "1",
"size": 10
@ -872,7 +872,7 @@
"id": "2",
"params": {
"customLabel": "Path",
"field": "file.path.raw",
"field": "file.path",
"order": "desc",
"orderBy": "1",
"size": 10
@ -1152,4 +1152,4 @@
}
],
"version": "6.1.2"
}


@ -32,5 +32,5 @@ func init() {
// AssetFileIntegrity returns asset data.
// This is the base64 encoded gzipped contents of module/file_integrity.
func AssetFileIntegrity() string {
return "eJyUkkFvozAUhO/8ivkDyQonoIjDSqy2FVXbU3rIDZn4BVsxEGGnDf++wqJNIkXCPvI0fJ438xY40pDhoDSVqrVU98oOEWCV1ZThWWnCy81ckNn36mRV12b4kGQIvCdYSTgo0sKgppZ6bkmgGqb5LRtNJ86alhGmH7IIWKDlDWWQ3MgIAOxwogx1351P7vvu2b9uBBTcSDLoDr/PLEdL40bGueK67nplZePwBrwVTvrJ9ZmcZCKNQ0kXULvvBAkIVZOxk24ZOdXV7dVvpfmRWFWyJP0hOeNHGr66XkyzO/P/3vLXJ1YtWJK6de/sRw/pq806lL7arH3pScxC6UnM5uiNSHyp7/+TOZqRPPbFbYs89uAx5h3qtsgZm81zZPqfwcicvwAjeUD52yL36H1klmHbO70fNygBp/fiBqZQ+uYQcPyO63H5RvIwqjczsLUkZn/8enPsoOYce767y0Wm3pZ3u/SR2e8AAAD//7nP8bw="
return "eJykk81u6jAQhfd5inkBuIohEcriSqnaKlXbFV2wQw4eYgsnRrYD5O2ruCk/ElJsusxo8p3jmTMT2GGXwVZIXIvGYqWF7SIAK6zEDF6FRHi7qjM0Gy32Vqgmgy+OBoFqBMsRtgIlM1Bhg5paZFB2Q/2aDbVircRpBMMPWQQwgYbWmAGnhkcAALbbYwaVVu3efd/I/nclgIIajgbU9iwz7S31LzLOFZWV0sLy2uEN0Ia51gOVLbqWgdQXOZ4Am41iyICJCo0d+qaR67q4vfgtJd0hKdckSX9JzvgOu6PSbKjdmH/6yN9fSDkhSeqee2M/ukufLeah9Nli7ktPYhJKT2IyRq9ZMhDUAfVRC4sZWN2ir9bnczKmYTiN/yayLPLYQ4UQ7wUsi5yQ0dn3zHNkHvfukSHDaUB8lkXukZyeuQ6biev34/qf0g/XbwYhR+S4vnMIOB/H9bgdw+mF+ng6vJUCd5nE5J/fNh07aJ+OPb7R04mn3pZXq/Se2e8AAAD//yR/DPc="
}


@ -71,7 +71,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: auditbeat
image: docker.elastic.co/beats/auditbeat:7.3.0
image: docker.elastic.co/beats/auditbeat:7.4.1
args: [
"-c", "/etc/auditbeat.yml"
]


@ -14,7 +14,6 @@ data:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
matchers:
- logs_path:
@ -62,7 +61,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.3.0
image: docker.elastic.co/beats/filebeat:7.4.1
args: [
"-c", "/etc/filebeat.yml",
"-e",


@ -14,7 +14,6 @@ data:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
matchers:
- logs_path:


@ -110,7 +110,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: metricbeat
image: docker.elastic.co/beats/metricbeat:7.3.0
image: docker.elastic.co/beats/metricbeat:7.4.1
args: [
"-c", "/etc/metricbeat.yml",
"-e",
@ -248,7 +248,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: metricbeat
image: docker.elastic.co/beats/metricbeat:7.3.0
image: docker.elastic.co/beats/metricbeat:7.4.1
args: [
"-c", "/etc/metricbeat.yml",
"-e",


@ -23,9 +23,12 @@ import (
"io/ioutil"
"log"
"net/url"
"os"
"path/filepath"
"time"
"github.com/pkg/errors"
"github.com/elastic/beats/libbeat/dashboards"
"github.com/elastic/beats/libbeat/kibana"
)
@ -84,6 +87,7 @@ func main() {
if err != nil {
log.Fatalf("Failed to export dashboards from YML file: %v", err)
}
log.Println("Done exporting dashboards from", *ymlFile)
return
}
@ -121,9 +125,14 @@ func exportSingleDashboard(client *kibana.Client, dashboard, output string) erro
return fmt.Errorf("failed to export the dashboard: %+v", err)
}
result = dashboards.DecodeExported(result)
if err = os.MkdirAll(filepath.Dir(output), 0755); err != nil {
return errors.Wrap(err, "failed to create directory for dashboard")
}
err = ioutil.WriteFile(output, []byte(result.StringToPrint()), dashboards.OutputPermission)
if err != nil {
return fmt.Errorf("failed to save the dashboards: %+v", err)
return fmt.Errorf("failed to save the dashboard: %+v", err)
}
return nil
}


@ -143,17 +143,17 @@ func main() {
Imports: imports,
})
if err != nil {
log.Fatal("Failed executing template: %v", err)
log.Fatalf("Failed executing template: %v", err)
}
// Create the output directory.
if err = os.MkdirAll(filepath.Dir(outFile), 0755); err != nil {
log.Fatal("Failed to create output directory: %v", err)
log.Fatalf("Failed to create output directory: %v", err)
}
// Write the output file.
if err = ioutil.WriteFile(outFile, buf.Bytes(), 0644); err != nil {
log.Fatal("Failed writing output file: %v", err)
log.Fatalf("Failed writing output file: %v", err)
}
}
@ -226,7 +226,7 @@ func findImports() ([]string, error) {
func hasInitMethod(file string) bool {
f, err := os.Open(file)
if err != nil {
log.Fatalf("failed to read from %v: %v", file, err)
log.Fatalf("Failed to read from %v: %v", file, err)
}
defer f.Close()
@ -238,7 +238,7 @@ func hasInitMethod(file string) bool {
}
}
if err := scanner.Err(); err != nil {
log.Fatal("failed scanning %v: %v", file, err)
log.Fatalf("Failed scanning %v: %v", file, err)
}
return false
}


@ -33,6 +33,7 @@ import (
// "go build" is invoked.
type BuildArgs struct {
Name string // Name of binary. (On Windows '.exe' is appended.)
InputFiles []string
OutputDir string
CGO bool
Static bool
@ -47,6 +48,9 @@ func DefaultBuildArgs() BuildArgs {
args := BuildArgs{
Name: BeatName,
CGO: build.Default.CgoEnabled,
LDFlags: []string{
"-s", // Strip all debug symbols from binary (does not affect Go stack traces).
},
Vars: map[string]string{
"github.com/elastic/beats/libbeat/version.buildTime": "{{ date }}",
"github.com/elastic/beats/libbeat/version.commit": "{{ commit }}",
@ -143,6 +147,10 @@ func Build(params BuildArgs) error {
args = append(args, MustExpand(strings.Join(ldflags, " ")))
}
if len(params.InputFiles) > 0 {
args = append(args, params.InputFiles...)
}
log.Println("Adding build environment vars:", env)
return sh.RunWith(env, "go", args...)
}
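To illustrate the new `InputFiles` field, here is a minimal magefile sketch (the file list is hypothetical) that passes explicit sources through `DefaultBuildArgs` to `Build`:
[source,go]
----
// +build mage

package main

import devtools "github.com/elastic/beats/dev-tools/mage"

// Build compiles the beat. When InputFiles is set, the listed sources
// are appended to the "go build" arguments instead of building the
// package in the current directory.
func Build() error {
	args := devtools.DefaultBuildArgs()
	args.InputFiles = []string{"main.go"} // hypothetical explicit input
	return devtools.Build(args)
}
----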


@ -20,11 +20,14 @@ package mage
import (
"bufio"
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"log"
"os"
"os/exec"
"path/filepath"
"regexp"
"runtime"
"strings"
@ -44,7 +47,7 @@ import (
func Check() error {
fmt.Println(">> check: Checking source code for common problems")
mg.Deps(GoVet, CheckNosetestsNotExecutable, CheckYAMLNotExecutable)
mg.Deps(GoVet, CheckNosetestsNotExecutable, CheckYAMLNotExecutable, CheckDashboardsFormat)
changes, err := GitDiffIndex()
if err != nil {
@ -185,3 +188,158 @@ func GoVet() error {
err := sh.RunV("go", "vet", "./...")
return errors.Wrap(err, "failed running go vet, please fix the issues reported")
}
// CheckDashboardsFormat checks the format of dashboards
func CheckDashboardsFormat() error {
dashboardSubDir := "/_meta/kibana/"
dashboardFiles, err := FindFilesRecursive(func(path string, _ os.FileInfo) bool {
if strings.HasPrefix(path, "vendor") {
return false
}
return strings.Contains(filepath.ToSlash(path), dashboardSubDir) && strings.HasSuffix(path, ".json")
})
if err != nil {
return errors.Wrap(err, "failed to find dashboards")
}
hasErrors := false
for _, file := range dashboardFiles {
d, err := ioutil.ReadFile(file)
if err != nil {
return errors.Wrapf(err, "failed to read dashboard file %s", file)
}
var dashboard Dashboard
err = json.Unmarshal(d, &dashboard)
if err != nil {
return errors.Wrapf(err, "failed to parse dashboard from %s", file)
}
module := moduleNameFromDashboard(file)
errs := dashboard.CheckFormat(module)
if len(errs) > 0 {
hasErrors = true
fmt.Printf(">> Dashboard format - %s:\n", file)
for _, err := range errs {
fmt.Println(" ", err)
}
}
}
if hasErrors {
return errors.New("there are format errors in dashboards")
}
return nil
}
func moduleNameFromDashboard(path string) string {
moduleDir := filepath.Clean(filepath.Join(filepath.Dir(path), "../../../.."))
return filepath.Base(moduleDir)
}
// Dashboard is a dashboard
type Dashboard struct {
Version string `json:"version"`
Objects []dashboardObject `json:"objects"`
}
type dashboardObject struct {
Type string `json:"type"`
Attributes struct {
Description string `json:"description"`
Title string `json:"title"`
KibanaSavedObjectMeta *struct {
SearchSourceJSON struct {
Index string `json:"index"`
} `json:"searchSourceJSON,omitempty"`
} `json:"kibanaSavedObjectMeta"`
VisState *struct {
Params struct {
Controls []struct {
IndexPattern string
} `json:"controls"`
} `json:"params"`
} `json:"visState,omitempty"`
} `json:"attributes"`
References []struct {
Type string `json:"type"`
ID string `json:"id"`
} `json:"references"`
}
var (
visualizationTitleRegexp = regexp.MustCompile(`^.+\[([^\s]+) (.+)\]( ECS)?$`)
dashboardTitleRegexp = regexp.MustCompile(`^\[([^\s]+) (.+)\].+$`)
)
// CheckFormat checks the format of a dashboard
func (d *Dashboard) CheckFormat(module string) []error {
checkObject := func(o *dashboardObject) error {
switch o.Type {
case "dashboard":
if o.Attributes.Description == "" {
return errors.Errorf("empty description on dashboard '%s'", o.Attributes.Title)
}
if err := checkTitle(dashboardTitleRegexp, o.Attributes.Title, module); err != nil {
return errors.Wrapf(err, "expected title with format '[%s Module] Some title', found '%s'", strings.Title(BeatName), o.Attributes.Title)
}
case "visualization":
if err := checkTitle(visualizationTitleRegexp, o.Attributes.Title, module); err != nil {
return errors.Wrapf(err, "expected title with format 'Some title [%s Module]', found '%s'", strings.Title(BeatName), o.Attributes.Title)
}
}
expectedIndexPattern := strings.ToLower(BeatName) + "-*"
if err := checkDashboardIndexPattern(expectedIndexPattern, o); err != nil {
return errors.Wrapf(err, "expected index pattern reference '%s'", expectedIndexPattern)
}
return nil
}
var errs []error
for _, o := range d.Objects {
if err := checkObject(&o); err != nil {
errs = append(errs, err)
}
}
return errs
}
func checkTitle(re *regexp.Regexp, title string, module string) error {
match := re.FindStringSubmatch(title)
if len(match) < 3 {
return errors.New("title doesn't match pattern")
}
beatTitle := strings.Title(BeatName)
if match[1] != beatTitle {
return errors.Errorf("expected: '%s', found: '%s'", beatTitle, match[1])
}
// Compare case insensitive, and ignore spaces and underscores in module names
replacer := strings.NewReplacer("_", "", " ", "")
expectedModule := replacer.Replace(strings.ToLower(module))
foundModule := replacer.Replace(strings.ToLower(match[2]))
if expectedModule != foundModule {
return errors.Errorf("expected module name (%s), found '%s'", module, match[2])
}
return nil
}
func checkDashboardIndexPattern(expectedIndex string, o *dashboardObject) error {
if objectMeta := o.Attributes.KibanaSavedObjectMeta; objectMeta != nil {
if index := objectMeta.SearchSourceJSON.Index; index != "" && index != expectedIndex {
return errors.Errorf("unexpected index pattern reference found in object meta: %s", index)
}
}
if visState := o.Attributes.VisState; visState != nil {
for _, control := range visState.Params.Controls {
if index := control.IndexPattern; index != "" && index != expectedIndex {
return errors.Errorf("unexpected index pattern reference found in visualization state: %s", index)
}
}
}
for _, reference := range o.References {
if reference.Type == "index-pattern" && reference.ID != expectedIndex {
return errors.Errorf("unexpected reference to index pattern %s", reference.ID)
}
}
return nil
}
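To make the expected naming concrete, a standalone sketch (exercising the same patterns defined above, outside the mage package) showing titles that pass or fail the checks:
[source,go]
----
package main

import (
	"fmt"
	"regexp"
)

func main() {
	visTitle := regexp.MustCompile(`^.+\[([^\s]+) (.+)\]( ECS)?$`)
	dashTitle := regexp.MustCompile(`^\[([^\s]+) (.+)\].+$`)

	// Visualization titles end with a "[Beat Module]" tag.
	fmt.Println(visTitle.MatchString("Error Codes [Auditbeat Auditd] ECS")) // true
	// Dashboard titles start with the "[Beat Module]" tag.
	fmt.Println(dashTitle.MatchString("[Auditbeat Auditd] Overview")) // true
	fmt.Println(dashTitle.MatchString("Overview"))                    // false: no tag
}
----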


@ -44,15 +44,20 @@ func ExportDashboard() error {
return err
}
dashboardCmd := sh.RunCmd("go", "run", filepath.Join(beatsDir, "dev-tools/cmd/dashboards/export_dashboards.go"))
// TODO: This is currently hardcoded for KB 7, we need to figure out what we do for KB 8 if applicable
file := CWD("module", module, "_meta/kibana/7/dashboard", id+".json")
dashboardCmd := sh.RunCmd("go", "run",
filepath.Join(beatsDir, "dev-tools/cmd/dashboards/export_dashboards.go"),
"-output", file, "-dashboard", id,
)
args := []string{
"-output", file,
"-dashboard", id,
}
if kibanaURL := EnvOr("KIBANA_URL", ""); kibanaURL != "" {
args = append(args, "-kibana", kibanaURL)
}
return dashboardCmd()
return dashboardCmd(args...)
}
// ImportDashboards imports dashboards to Kibana using the Beat setup command.


@ -122,14 +122,14 @@ func (b docsBuilder) AsciidocBook(opts ...DocsOption) error {
// Render HTML.
htmlDir := CWD("build/html_docs", params.name)
buildDocsScript := filepath.Join(cloneDir, "build_docs")
args := []string{
filepath.Join(cloneDir, "build_docs.pl"),
"--chunk=1",
"--doc", params.indexFile,
"--out", htmlDir,
}
fmt.Println(">> Building HTML docs at", filepath.Join(htmlDir, "index.html"))
if err := sh.Run("perl", args...); err != nil {
if err := sh.Run(buildDocsScript, args...); err != nil {
return err
}


@ -114,6 +114,10 @@ func PythonAutopep8() error {
// AddLicenseHeaders adds license headers to .go files. It applies the
// appropriate license header based on the value of devtools.BeatLicense.
func AddLicenseHeaders() error {
if os.Getenv("CHECK_HEADERS_DISABLED") != "" {
return nil
}
fmt.Println(">> fmt - go-licenser: Adding missing headers")
if err := sh.Run("go", "get", GoLicenserImportPath); err != nil {


@ -53,6 +53,12 @@ type GoTestArgs struct {
CoverageProfileFile string // Test coverage profile file (enables -cover).
}
// TestBinaryArgs are the arguments used when building binary for testing.
type TestBinaryArgs struct {
Name string // Name of the binary to build
InputFiles []string
}
func makeGoTestArgs(name string) GoTestArgs {
fileName := fmt.Sprintf("build/TEST-go-%s", strings.Replace(strings.ToLower(name), " ", "_", -1))
params := GoTestArgs{
@ -80,6 +86,14 @@ func DefaultGoTestIntegrationArgs() GoTestArgs {
return args
}
// DefaultTestBinaryArgs returns the default arguments for building
// a binary for testing.
func DefaultTestBinaryArgs() TestBinaryArgs {
return TestBinaryArgs{
Name: BeatName,
}
}
// GoTest invokes "go test" and reports the results to stdout. It returns an
// error if there was any failure executing the tests or if there were any
// test failures.
@ -329,15 +343,24 @@ func (s *GoTestSummary) String() string {
return strings.TrimRight(b.String(), "\n")
}
// BuildSystemTestBinary build a binary for testing that is instrumented for
// BuildSystemTestBinary runs BuildSystemTestGoBinary with default values.
func BuildSystemTestBinary() error {
return BuildSystemTestGoBinary(DefaultTestBinaryArgs())
}
// BuildSystemTestGoBinary build a binary for testing that is instrumented for
// testing and measuring code coverage. The binary is only instrumented for
// coverage when TEST_COVERAGE=true (default is false).
func BuildSystemTestBinary() error {
func BuildSystemTestGoBinary(binArgs TestBinaryArgs) error {
args := []string{
"test", "-c",
"-o", binArgs.Name + ".test",
}
if TestCoverage {
args = append(args, "-coverpkg", "./...")
}
if len(binArgs.InputFiles) > 0 {
args = append(args, binArgs.InputFiles...)
}
return sh.RunV("go", args...)
}
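A hedged usage sketch for `BuildSystemTestGoBinary`; the beat name and input list are placeholders:
[source,go]
----
// +build mage

package main

import devtools "github.com/elastic/beats/dev-tools/mage"

// BuildSystemTestBinary builds an instrumented test binary from an
// explicit set of sources; name and files are placeholders.
func BuildSystemTestBinary() error {
	return devtools.BuildSystemTestGoBinary(devtools.TestBinaryArgs{
		Name:       "examplebeat",
		InputFiles: []string{"main.go"},
	})
}
----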


@ -280,6 +280,10 @@ func (s PackageSpec) Clone() PackageSpec {
for k, v := range s.Files {
clone.Files[k] = v
}
clone.ExtraVars = make(map[string]string, len(s.ExtraVars))
for k, v := range s.ExtraVars {
clone.ExtraVars[k] = v
}
return clone
}
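The explicit copy matters because Go maps are reference types: copying the struct alone would leave both specs sharing one `ExtraVars` map. A quick illustration:
[source,go]
----
a := map[string]string{"k": "v"}
b := a         // alias, not a copy: both names point at one map
b["k"] = "mod" // a["k"] is now "mod" as well
----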
@ -343,6 +347,10 @@ func (s PackageSpec) Evaluate(args ...map[string]interface{}) PackageSpec {
return MustExpand(in, args...)
}
if s.evalContext == nil {
s.evalContext = map[string]interface{}{}
}
for k, v := range s.ExtraVars {
s.evalContext[k] = mustExpand(v)
}
@ -375,9 +383,6 @@ func (s PackageSpec) Evaluate(args ...map[string]interface{}) PackageSpec {
} else {
s.packageDir = filepath.Clean(mustExpand(s.packageDir))
}
if s.evalContext == nil {
s.evalContext = map[string]interface{}{}
}
s.evalContext["PackageDir"] = s.packageDir
evaluatedFiles := make(map[string]PackageFile, len(s.Files))
@ -651,6 +656,9 @@ func runFPM(spec PackageSpec, packageType PackageType) error {
"--name", spec.ServiceName,
"--architecture", spec.Arch,
)
if packageType == RPM {
args = append(args, "--rpm-rpmbuild-define", "_build_id_links none")
}
if spec.Version != "" {
args = append(args, "--version", spec.Version)
}


@ -57,8 +57,9 @@ func (Dashboards) Import() error {
// directory.
//
// Required environment variables:
// - MODULE: Name of the module
// - ID: Dashboard ID
// - KIBANA_URL: URL of Kibana
// - MODULE: Name of the module
// - ID: Dashboard ID
func (Dashboards) Export() error {
return devtools.ExportDashboard()
}


@ -15,21 +15,11 @@
// specific language governing permissions and limitations
// under the License.
package nginx
// +build mage
/*
Helper functions for testing used in the nginx metricsets
*/
package main
import (
"os"
// mage:import
_ "github.com/elastic/beats/dev-tools/mage/target/common"
)
func GetNginxEnvHost() string {
host := os.Getenv("NGINX_HOST")
if len(host) == 0 {
host = "127.0.0.1"
}
return host
}


@ -24,6 +24,8 @@ check: mage
clean: mage
mage clean
fix-permissions:
.PHONY: fmt
fmt: mage
mage fmt
@ -38,6 +40,8 @@ help:
release: mage
mage package
stop-environment:
.PHONY: testsuite
testsuite: mage
-rm build/TEST-go-integration.out


@ -123,6 +123,7 @@ func checkRPM(t *testing.T, file string) {
checkModulesDPresent(t, "/etc/", p)
checkMonitorsDPresent(t, "/etc", p)
checkSystemdUnitPermissions(t, p)
ensureNoBuildIDLinks(t, p)
}
func checkDeb(t *testing.T, file string, buf *bytes.Buffer) {
@ -425,6 +426,18 @@ func checkDockerUser(t *testing.T, p *packageFile, info *dockerInfo, expectRoot
})
}
// ensureNoBuildIDLinks checks for regressions related to
// https://github.com/elastic/beats/issues/12956.
func ensureNoBuildIDLinks(t *testing.T, p *packageFile) {
t.Run(fmt.Sprintf("%s no build_id links", p.Name), func(t *testing.T) {
for name := range p.Contents {
if strings.Contains(name, "/usr/lib/.build-id") {
t.Error("found unexpected /usr/lib/.build-id in package")
}
}
})
}
// Helpers
type packageFile struct {


@ -15,6 +15,6 @@ New-Service -name {{.BeatName}} `
# Attempt to set the service to delayed start using sc config.
Try {
Start-Process -FilePath sc.exe -ArgumentList 'config {{.BeatName}} start=delayed-auto'
Start-Process -FilePath sc.exe -ArgumentList 'config {{.BeatName}} start= delayed-auto'
}
Catch { Write-Host -f red "An error occurred setting the service to delayed start." }


@ -25,7 +25,11 @@ You only need to sign the CLA once.
. Send a pull request! Push your changes to your fork of the repository and
https://help.github.com/articles/using-pull-requests[submit a pull request] using our
<<pr-review,pull request guidelines>>. In the pull request, describe what your changes do and mention
<<pr-review,pull request guidelines>>. New PRs go to the master branch. The Beats
core team will backport your PR if it is necessary.
In the pull request, describe what your changes do and mention
any bugs/issues related to the pull request. Please also add a changelog entry to
https://github.com/elastic/beats/blob/master/CHANGELOG.next.asciidoc[CHANGELOG.next.asciidoc].
@ -53,7 +57,7 @@ your workspace location, and make sure `$GOPATH/bin` is in your PATH.
The location where you clone is important. Make a directory structure under
`GOPATH` that matches the URL used for Elastic repositories, then clone the
beats repository under the new directory:
[source,shell]
----------------------------------------------------------------------
@ -117,13 +121,13 @@ https://virtualenv.pypa.io/en/latest/installation.html[here]. Both of these comm
Beats is built using the `make release` target. By default, make will select from a limited number of preset build targets:
- darwin/amd64
- linux/386
- linux/amd64
- windows/386
- windows/amd64
You can change build targets using the `PLATFORMS` environment variable. Targets set with the `PLATFORMS` variable can either be a GOOS value, or a GOOS/arch pair.
For example, `linux` and `linux/amd64` are both valid targets. You can select multiple targets, and the `PLATFORMS` list is space delimited, for example `darwin windows` will build on all supported darwin and windows architectures.
In addition, you can add or remove from the list of build targets by prepending `+` or `-` to a given target. For example: `+bsd` or `-darwin`.
You can find the complete list of supported build targets with `go tool dist list`.


@ -177,8 +177,8 @@ https://godoc.org/github.com/elastic/beats/libbeat/common#MapStr[MapStr API docs
===== Multi Fetching
`Event` can be called multiple times inside of the `Fetch` method for metricsets that might expose multiple events.
`Event` returns a bool that indicates if the metricset is already closed and no further events can be processed,
in which case `Fetch` should return immediately. If there is an error while processing one of many events,
it can be published using the `mb.ReporterV2.Error` method, as opposed to returning an error value.
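For example, a minimal `Fetch` reporting several events and surfacing per-event errors could look like this (the `collectEntries` helper and the field names are illustrative, not part of the API):
[source,go]
----
func (m *MetricSet) Fetch(report mb.ReporterV2) {
	for _, entry := range m.collectEntries() { // collectEntries is hypothetical
		if entry.Err != nil {
			report.Error(entry.Err) // publish the error, keep processing
			continue
		}
		ok := report.Event(mb.Event{MetricSetFields: common.MapStr{
			"value": entry.Value,
		}})
		if !ok { // the metricset is closed, stop immediately
			return
		}
	}
}
----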
[float]
@ -272,7 +272,7 @@ var (
additionalSchema = s.Schema{
"second_string": c.Str("secondString"),
"second_int": ": c.Int("secondInt"),
"second_int": c.Int("secondInt"),
}
)


@ -44,7 +44,7 @@ config file shipped by default.
[float]
===== docs.asciidoc
The `dosc.asciidoc` file contains the documentation about your module. During generation of the
The `docs.asciidoc` file contains the documentation about your module. During generation of the
documentation, the default config file will be appended to the docs. Use this file to describe your
module in more detail and to document specific configuration options.
@ -86,7 +86,8 @@ First we have to build the Docker image which is available for the modules. The
These steps assume you have checked out the Beats repository from GitHub and are inside the `beats` directory. First, we have to enter the `_meta` folder mentioned above and build the Docker image called `metricbeat-mysql`:
```
[source,bash]
----
$ cd metricbeat/module/mysql/_meta/
$ docker build -t metricbeat-mysql .
...
@ -96,36 +97,40 @@ Step 5/5 : COPY test.cnf /etc/mysql/conf.d/test.cnf
---> 002969e1d810
Successfully built 002969e1d810
Successfully tagged metricbeat-mysql:latest
```
----
Before we run the container we have just created, we also need to know which port to expose. The port is listed in the `metricbeat/{module}/_meta/env` file:
```
[source,bash]
----
$ cat env
MYSQL_DSN=root:test@tcp(mysql:3306)/
MYSQL_HOST=mysql
MYSQL_PORT=3306
```
----
As we see, the port is 3306. We now have all the information to start our MySQL service locally:
```
[source,bash]
----
$ docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret metricbeat-mysql
```
----
This starts the container and you can now use it for testing the MySQL module.
To run Metricbeat with the module, we need to build the binary and enable the module first. The assumption is that you are now back in the `beats` folder:
```
[source,bash]
----
$ cd metricbeat
$ mage build
$ ./metricbeat modules enable mysql
```
----
This will enable the module and rename the file `metricbeat/modules.d/mysql.yml.disabled` to `metricbeat/modules.d/mysql.yml`. According to our https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-mysql.html[documentation] we should specify a username and password to use MySQL. It's always a good idea to take a look at the docs, which also note that a pre-built dashboard is available. After tweaking the config a bit, this is how it looks:
```yml
[source,yaml]
----
$ cat modules.d/mysql.yml
# Module: mysql
@ -138,6 +143,8 @@ $ cat modules.d/mysql.yml
period: 10s
# Host DSN should be defined as "user:pass@tcp(127.0.0.1:3306)/"
# or "unix(/var/lib/mysql/mysql.sock)/",
# or another DSN format supported by <https://github.com/Go-SQL-Driver/MySQL/>.
# The username and password can either be set in the DSN or using the username
# and password config options. Those specified in the DSN take precedence.
hosts: ["tcp(127.0.0.1:3306)/"]
@ -147,7 +154,7 @@ $ cat modules.d/mysql.yml
# Password of hosts. Empty by default.
password: secret
```
----
It's now sending data to your local Elasticsearch instance. If you need to modify the mysql config, adjust `modules.d/mysql.yml` and restart Metricbeat.


@ -1,7 +1,7 @@
[[beats-reference]]
= Beats Developer Guide
:libbeat-dir: ../../libbeat
:libbeat-dir: {docdir}/../../libbeat
include::{libbeat-dir}/docs/version.asciidoc[]


@ -44,7 +44,7 @@ branch ({branch} in the example below):
["source","sh",subs="attributes"]
----
cd $\{GOPATH\}/src/github.com/elastic/beats
cd ${GOPATH}/src/github.com/elastic/beats
git checkout {branch}
----


@ -46,13 +46,17 @@ To import the dashboards, run the `setup` command.
./metricbeat setup
-------------------------
The `setup` phase loads:
The `setup` phase loads several dependencies, such as:
- Index mapping template in Elasticsearch
- Kibana dashboards
- Machine Learning jobs (if available)
- Machine Learning (ML) jobs
- Ingest pipelines
- ILM policy
For more details about the `setup` command, run the following:
The dependencies vary depending on the Beat you're setting up.
For more details about the `setup` command, see the command-line help. For example:
[source,shell]
----
@ -63,16 +67,18 @@ This command does initial setup of the environment:
* Index mapping template in Elasticsearch to ensure fields are mapped.
* Kibana dashboards (where available).
* ML jobs (where available).
* Ingest pipelines (where available).
* ILM policy (for Elasticsearch 6.5 and newer).
Usage:
filebeat setup [flags]
metricbeat setup [flags]
Flags:
--dashboards Setup dashboards only
--dashboards Setup dashboards
-h, --help help for setup
--machine-learning Setup machine learning job configurations only
--modules string List of enabled modules (comma separated)
--template Setup index template only
--index-management Setup all components related to Elasticsearch index management, including template, ilm policy and rollover alias
--machine-learning Setup machine learning job configurations
--pipelines Setup Ingest pipelines
----
The flags are useful when you don't want to load everything. For example, to


@ -1,4 +1,4 @@
FROM golang:1.12.4
FROM golang:1.12.9
RUN \
apt-get update \


@ -360,17 +360,14 @@ filebeat.inputs:
# default to `required` otherwise it will be set to `none`.
#ssl.client_authentication: "required"
#------------------------------ Docker input --------------------------------
# Experimental: Docker input reads and parses `json-file` logs from Docker
#- type: docker
#------------------------------ Container input --------------------------------
#- type: container
#enabled: false
# Combine partial lines flagged by `json-file` format
#combine_partials: true
# Paths for container logs that should be crawled and fetched.
#paths:
# -/var/lib/docker/containers/*/*.log
# Use this to read from all containers, replace * with a container id to read from one:
#containers:
# stream: all # can be all, stdout or stderr
# ids:
# - '*'
# Configure stream to filter to a specific stream: stdout, stderr or all (default)
#stream: all


@ -11,7 +11,7 @@
# - condition:
# equals.docker.container.image: busybox
# config:
# - type: log
# - type: container
# paths:
# - /var/lib/docker/containers/${data.docker.container.id}/*.log


@ -35,12 +35,6 @@
The input type from which the event was generated. This field is set to the value specified
for the `type` option in the input section of the Filebeat config file.
- name: event.sequence
type: long
required: false
description: >
The sequence number of this event.
- name: syslog.facility
type: long
required: false
@ -111,11 +105,6 @@
docker.attrs contains labels and environment variables written by docker's JSON File logging driver.
These fields are only available when they are configured in the logging driver options.
- name: event.code
type: keyword
description: >
The code for the log message.
- name: icmp.code
type: keyword
description: >
@ -131,22 +120,36 @@
description: >
IGMP type.
- name: source.as.number
type: long
description: >
Autonomous system number.
- name: kafka
type: group
fields:
- name: topic
type: keyword
description: >
Kafka topic
- name: destination.as.number
type: long
description: >
Autonomous system number.
- name: partition
type: long
description: >
Kafka partition number
- name: source.as.organization.name
type: keyword
description: >
Name of organization associated with the autonomous system.
- name: offset
type: long
description: >
Kafka offset of this message
- name: destination.as.organization.name
type: keyword
description: >
Name of organization associated with the autonomous system.
- name: key
type: keyword
description: >
Kafka key, corresponding to the Kafka value stored in the message
- name: block_timestamp
type: date
description: >
Kafka outer (compressed) block timestamp
- name: headers
type: array
description: >
An array of Kafka header strings for this message, in the form
"<key>: <value>".


@ -22,6 +22,7 @@ import (
"regexp"
"github.com/elastic/beats/filebeat/fileset"
"github.com/elastic/beats/filebeat/harvester"
"github.com/elastic/beats/libbeat/autodiscover"
"github.com/elastic/beats/libbeat/autodiscover/builder"
"github.com/elastic/beats/libbeat/autodiscover/template"
@ -140,7 +141,12 @@ func (l *logHints) CreateConfig(event bus.Event) []*common.Config {
filesets := l.getFilesets(hints, module)
for fileset, conf := range filesets {
filesetConf, _ := common.NewConfigFrom(config)
filesetConf.SetString("containers.stream", -1, conf.Stream)
if inputType, _ := filesetConf.String("type", -1); inputType == harvester.ContainerType {
filesetConf.SetString("stream", -1, conf.Stream)
} else {
filesetConf.SetString("containers.stream", -1, conf.Stream)
}
moduleConf[fileset+".enabled"] = conf.Enabled
moduleConf[fileset+".input"] = filesetConf


@ -473,6 +473,147 @@ func TestGenerateHints(t *testing.T) {
},
},
},
{
msg: "Hint with module should attach input to its filesets",
config: defaultCfg,
event: bus.Event{
"host": "1.2.3.4",
"kubernetes": common.MapStr{
"container": common.MapStr{
"name": "foobar",
"id": "abc",
},
},
"container": common.MapStr{
"name": "foobar",
"id": "abc",
},
"hints": common.MapStr{
"logs": common.MapStr{
"module": "apache2",
},
},
},
len: 1,
result: common.MapStr{
"module": "apache2",
"error": map[string]interface{}{
"enabled": true,
"input": map[string]interface{}{
"type": "container",
"stream": "all",
"paths": []interface{}{
"/var/lib/docker/containers/abc/*-json.log",
},
},
},
"access": map[string]interface{}{
"enabled": true,
"input": map[string]interface{}{
"type": "container",
"stream": "all",
"paths": []interface{}{
"/var/lib/docker/containers/abc/*-json.log",
},
},
},
},
},
{
msg: "Hint with module should honor defined filesets",
config: defaultCfg,
event: bus.Event{
"host": "1.2.3.4",
"kubernetes": common.MapStr{
"container": common.MapStr{
"name": "foobar",
"id": "abc",
},
},
"container": common.MapStr{
"name": "foobar",
"id": "abc",
},
"hints": common.MapStr{
"logs": common.MapStr{
"module": "apache2",
"fileset": "access",
},
},
},
len: 1,
result: common.MapStr{
"module": "apache2",
"access": map[string]interface{}{
"enabled": true,
"input": map[string]interface{}{
"type": "container",
"stream": "all",
"paths": []interface{}{
"/var/lib/docker/containers/abc/*-json.log",
},
},
},
"error": map[string]interface{}{
"enabled": false,
"input": map[string]interface{}{
"type": "container",
"stream": "all",
"paths": []interface{}{
"/var/lib/docker/containers/abc/*-json.log",
},
},
},
},
},
{
msg: "Hint with module should honor defined filesets with streams",
config: defaultCfg,
event: bus.Event{
"host": "1.2.3.4",
"kubernetes": common.MapStr{
"container": common.MapStr{
"name": "foobar",
"id": "abc",
},
},
"container": common.MapStr{
"name": "foobar",
"id": "abc",
},
"hints": common.MapStr{
"logs": common.MapStr{
"module": "apache2",
"fileset.stdout": "access",
"fileset.stderr": "error",
},
},
},
len: 1,
result: common.MapStr{
"module": "apache2",
"access": map[string]interface{}{
"enabled": true,
"input": map[string]interface{}{
"type": "container",
"stream": "stdout",
"paths": []interface{}{
"/var/lib/docker/containers/abc/*-json.log",
},
},
},
"error": map[string]interface{}{
"enabled": true,
"input": map[string]interface{}{
"type": "container",
"stream": "stderr",
"paths": []interface{}{
"/var/lib/docker/containers/abc/*-json.log",
},
},
},
},
},
}
for _, test := range tests {


@ -0,0 +1,119 @@
// Licensed to Elasticsearch B.V. under one or more contributor
// license agreements. See the NOTICE file distributed with
// this work for additional information regarding copyright
// ownership. Elasticsearch B.V. licenses this file to you under
// the Apache License, Version 2.0 (the "License"); you may
// not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package channel
import (
"github.com/elastic/beats/libbeat/beat"
"github.com/elastic/beats/libbeat/common"
"github.com/elastic/beats/libbeat/processors"
)
// ConnectorFunc is an adapter for using ordinary functions as Connector.
type ConnectorFunc func(*common.Config, beat.ClientConfig) (Outleter, error)
type pipelineConnector struct {
parent *OutletFactory
pipeline beat.Pipeline
}
// Connect passes the cfg and the zero value of beat.ClientConfig to the underlying function.
func (fn ConnectorFunc) Connect(cfg *common.Config) (Outleter, error) {
return fn(cfg, beat.ClientConfig{})
}
// ConnectWith passes the configuration and the pipeline connection setting to the underlying function.
func (fn ConnectorFunc) ConnectWith(cfg *common.Config, clientCfg beat.ClientConfig) (Outleter, error) {
return fn(cfg, clientCfg)
}
func (c *pipelineConnector) Connect(cfg *common.Config) (Outleter, error) {
return c.ConnectWith(cfg, beat.ClientConfig{})
}
func (c *pipelineConnector) ConnectWith(cfg *common.Config, clientCfg beat.ClientConfig) (Outleter, error) {
config := inputOutletConfig{}
if err := cfg.Unpack(&config); err != nil {
return nil, err
}
var err error
var userProcessors beat.ProcessorList
userProcessors, err = processors.New(config.Processors)
if err != nil {
return nil, err
}
if lst := clientCfg.Processing.Processor; lst != nil {
if len(userProcessors.All()) == 0 {
userProcessors = lst
} else if orig := lst.All(); len(orig) > 0 {
newLst := processors.NewList(nil)
newLst.List = append(newLst.List, lst, userProcessors)
userProcessors = newLst
}
}
setOptional := func(to common.MapStr, key string, value string) {
if value != "" {
to.Put(key, value)
}
}
meta := clientCfg.Processing.Meta.Clone()
fields := clientCfg.Processing.Fields.Clone()
serviceType := config.ServiceType
if serviceType == "" {
serviceType = config.Module
}
setOptional(meta, "pipeline", config.Pipeline)
setOptional(fields, "fileset.name", config.Fileset)
setOptional(fields, "service.type", serviceType)
setOptional(fields, "input.type", config.Type)
if config.Module != "" {
event := common.MapStr{"module": config.Module}
if config.Fileset != "" {
event["dataset"] = config.Module + "." + config.Fileset
}
fields["event"] = event
}
mode := clientCfg.PublishMode
if mode == beat.DefaultGuarantees {
mode = beat.GuaranteedSend
}
// connect with updated configuration
clientCfg.PublishMode = mode
clientCfg.Processing.EventMetadata = config.EventMetadata
clientCfg.Processing.Meta = meta
clientCfg.Processing.Fields = fields
clientCfg.Processing.Processor = userProcessors
client, err := c.pipeline.ConnectWith(clientCfg)
if err != nil {
return nil, err
}
outlet := newOutlet(client, c.parent.wgEvents)
if c.parent.done != nil {
return CloseOnSignal(outlet, c.parent.done), nil
}
return outlet, nil
}


@ -82,75 +82,12 @@ func NewOutletFactory(
// Inputs and all harvesters use the same pipeline client instance.
// This guarantees ordering between events as required by the registrar for
// file.State updates
func (f *OutletFactory) Create(p beat.Pipeline, cfg *common.Config, dynFields *common.MapStrPointer) (Outleter, error) {
config := inputOutletConfig{}
if err := cfg.Unpack(&config); err != nil {
return nil, err
}
processors, err := processors.New(config.Processors)
if err != nil {
return nil, err
}
setMeta := func(to common.MapStr, key, value string) {
if value != "" {
to[key] = value
}
}
meta := common.MapStr{}
setMeta(meta, "pipeline", config.Pipeline)
fields := common.MapStr{}
setMeta(fields, "module", config.Module)
if config.Module != "" && config.Fileset != "" {
setMeta(fields, "dataset", config.Module+"."+config.Fileset)
}
if len(fields) > 0 {
fields = common.MapStr{
"event": fields,
}
}
if config.Fileset != "" {
fields.Put("fileset.name", config.Fileset)
}
if config.ServiceType != "" {
fields.Put("service.type", config.ServiceType)
} else if config.Module != "" {
fields.Put("service.type", config.Module)
}
if config.Type != "" {
fields.Put("input.type", config.Type)
}
client, err := p.ConnectWith(beat.ClientConfig{
PublishMode: beat.GuaranteedSend,
Processing: beat.ProcessingConfig{
EventMetadata: config.EventMetadata,
DynamicFields: dynFields,
Meta: meta,
Fields: fields,
Processor: processors,
},
Events: f.eventer,
})
if err != nil {
return nil, err
}
outlet := newOutlet(client, f.wgEvents)
if f.done != nil {
return CloseOnSignal(outlet, f.done), nil
}
return outlet, nil
func (f *OutletFactory) Create(p beat.Pipeline) Connector {
return &pipelineConnector{parent: f, pipeline: p}
}
func (*clientEventer) Closing() {}
func (*clientEventer) Closed() {}
func (*clientEventer) Published() {}
func (c *clientEventer) FilteredOut(_ beat.Event) {}
func (c *clientEventer) DroppedOnPublish(_ beat.Event) {
c.wgEvents.Done()
}
func (e *clientEventer) Closing() {}
func (e *clientEventer) Closed() {}
func (e *clientEventer) Published() {}
func (e *clientEventer) FilteredOut(evt beat.Event) {}
func (e *clientEventer) DroppedOnPublish(evt beat.Event) { e.wgEvents.Done() }


@ -18,20 +18,23 @@
package channel
import (
"github.com/elastic/beats/filebeat/util"
"github.com/elastic/beats/libbeat/beat"
"github.com/elastic/beats/libbeat/common"
)
// Factory is used to create a new Outlet instance
type Factory func(beat.Pipeline, *common.Config, *common.MapStrPointer) (Outleter, error)
type Factory func(beat.Pipeline) Connector
// Connector creates an Outlet connecting the event publishing with some internal pipeline.
type Connector func(*common.Config, *common.MapStrPointer) (Outleter, error)
// type Connector func(*common.Config, *common.MapStrPointer) (Outleter, error)
type Connector interface {
Connect(*common.Config) (Outleter, error)
ConnectWith(*common.Config, beat.ClientConfig) (Outleter, error)
}
// Outleter is the outlet for an input
type Outleter interface {
Close() error
Done() <-chan struct{}
OnEvent(data *util.Data) bool
OnEvent(beat.Event) bool
}
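As a sketch, an input could obtain an outlet through this interface roughly as follows (the client settings shown are illustrative):
[source,go]
----
// connector is the channel.Connector produced by the outlet factory.
outlet, err := connector.ConnectWith(cfg, beat.ClientConfig{
	PublishMode: beat.GuaranteedSend, // illustrative setting
})
if err != nil {
	return err
}
defer outlet.Close()
----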


@ -18,7 +18,6 @@
package channel
import (
"github.com/elastic/beats/filebeat/util"
"github.com/elastic/beats/libbeat/beat"
"github.com/elastic/beats/libbeat/common/atomic"
)
@ -53,16 +52,11 @@ func (o *outlet) Done() <-chan struct{} {
return o.done
}
func (o *outlet) OnEvent(d *util.Data) bool {
func (o *outlet) OnEvent(event beat.Event) bool {
if !o.isOpen.Load() {
return false
}
event := d.GetEvent()
if d.HasState() {
event.Private = d.GetState()
}
if o.wg != nil {
o.wg.Add(1)
}


@ -20,32 +20,23 @@ package channel
import (
"sync"
"github.com/elastic/beats/filebeat/util"
"github.com/elastic/beats/libbeat/beat"
"github.com/elastic/beats/libbeat/common"
)
type subOutlet struct {
done chan struct{}
ch chan *util.Data
ch chan beat.Event
res chan bool
mutex sync.Mutex
closeOnce sync.Once
}
// ConnectTo creates a new Connector, combining a beat.Pipeline with an outlet Factory.
func ConnectTo(pipeline beat.Pipeline, factory Factory) Connector {
return func(cfg *common.Config, m *common.MapStrPointer) (Outleter, error) {
return factory(pipeline, cfg, m)
}
}
// SubOutlet create a sub-outlet, which can be closed individually, without closing the
// underlying outlet.
func SubOutlet(out Outleter) Outleter {
s := &subOutlet{
done: make(chan struct{}),
ch: make(chan *util.Data),
ch: make(chan beat.Event),
res: make(chan bool, 1),
}
@ -75,7 +66,7 @@ func (o *subOutlet) Done() <-chan struct{} {
return o.done
}
func (o *subOutlet) OnEvent(d *util.Data) bool {
func (o *subOutlet) OnEvent(event beat.Event) bool {
o.mutex.Lock()
defer o.mutex.Unlock()
@ -89,7 +80,7 @@ func (o *subOutlet) OnEvent(d *util.Data) bool {
case <-o.done:
return false
case o.ch <- d:
case o.ch <- event:
select {
case <-o.done:


@ -22,7 +22,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/elastic/beats/filebeat/util"
"github.com/elastic/beats/libbeat/beat"
"github.com/elastic/beats/libbeat/tests/resources"
)
@ -31,7 +31,7 @@ type dummyOutletter struct {
c chan struct{}
}
func (o *dummyOutletter) OnEvent(event *util.Data) bool {
func (o *dummyOutletter) OnEvent(event beat.Event) bool {
return true
}

View File

@ -117,7 +117,7 @@ func (c *Crawler) startInput(
return nil
}
connector := channel.ConnectTo(pipeline, c.out)
connector := c.out(pipeline)
p, err := input.New(config, connector, c.beatDone, states, nil)
if err != nil {
return fmt.Errorf("Error while initializing input: %s", err)

View File

@ -12,6 +12,8 @@ services:
- ES_PORT=9200
- ES_USER=beats
- ES_PASS=testing
- KAFKA_HOST=kafka
- KAFKA_PORT=9092
- KIBANA_HOST=kibana
- KIBANA_PORT=5601
working_dir: /go/src/github.com/elastic/beats/filebeat
@ -27,6 +29,7 @@ services:
image: busybox
depends_on:
elasticsearch: { condition: service_healthy }
kafka: { condition: service_healthy }
kibana: { condition: service_healthy }
redis: { condition: service_healthy }
@ -35,6 +38,14 @@ services:
file: ${ES_BEATS}/testing/environments/${TESTING_ENVIRONMENT}.yml
service: elasticsearch
kafka:
build: ${ES_BEATS}/testing/environments/docker/kafka
expose:
- 9092
- 2181
environment:
- ADVERTISED_HOST=kafka
kibana:
extends:
file: ${ES_BEATS}/testing/environments/${TESTING_ENVIRONMENT}.yml

View File

@ -5,15 +5,14 @@ Filebeat supports templates for inputs and modules.
filebeat.autodiscover:
providers:
- type: docker
labels.dedot: true
templates:
- condition:
contains:
docker.container.image: redis
config:
- type: docker
containers.ids:
- "${data.docker.container.id}"
- type: container
paths:
- /var/lib/docker/containers/${data.docker.container.id}/*.log
exclude_lines: ["^\\s+[\\-`('.|_]"] # drop asciiart lines
-------------------------------------------------------------------------------------
@ -27,7 +26,6 @@ If you are using modules, you can override the default input and use the docker
filebeat.autodiscover:
providers:
- type: docker
labels.dedot: true
templates:
- condition:
contains:
@ -36,7 +34,7 @@ filebeat.autodiscover:
- module: redis
log:
input:
type: docker
containers.ids:
- "${data.docker.container.id}"
type: container
paths:
- /var/lib/docker/containers/${data.docker.container.id}/*.log
-------------------------------------------------------------------------------------

View File

@ -42,18 +42,18 @@ Instead of using raw `docker` input, specifies the module to use to parse logs f
When a module is configured, map container logs to module filesets. You can either configure
a single fileset like this:
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
co.elastic.logs/fileset: access
-------------------------------------------------------------------------------------
-----
Or configure a fileset per stream in the container (stdout and stderr):
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
co.elastic.logs/fileset.stdout: access
co.elastic.logs/fileset.stderr: error
-------------------------------------------------------------------------------------
-----
[float]
===== `co.elastic.logs/raw`
@ -61,10 +61,10 @@ When an entire input/module configuration needs to be completely set the `raw` h
stringified JSON of the input configuration. `raw` overrides every other hint and can be used to create either
a single configuration or a list of configurations.
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
co.elastic.logs/raw: "[{\"containers\":{\"ids\":[\"${data.container.id}\"]},\"multiline\":{\"negate\":\"true\",\"pattern\":\"^test\"},\"type\":\"docker\"}]"
-------------------------------------------------------------------------------------
-----
[float]
===== `co.elastic.logs/processors`
@ -75,11 +75,11 @@ of supported processors.
To control the order in which processor definitions are applied, numbers can be provided. If they are not, the hints builder applies an
arbitrary order:
["source","yaml"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
co.elastic.logs/processors.1.dissect.tokenizer: "%{key1} %{key2}"
co.elastic.logs/processors.dissect.tokenizer: "%{key2} %{key1}"
-------------------------------------------------------------------------------------
-----
In the above sample the processor definition tagged with `1` would be executed first.
@ -88,18 +88,18 @@ In the above sample the processor definition tagged with `1` would be executed f
Kubernetes autodiscover provider supports hints in Pod annotations. To enable it just set `hints.enabled`:
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
filebeat.autodiscover:
providers:
- type: kubernetes
hints.enabled: true
-------------------------------------------------------------------------------------
-----
You can configure the default config that will be launched when a new container is seen, like this:
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
filebeat.autodiscover:
providers:
- type: kubernetes
@ -107,30 +107,30 @@ filebeat.autodiscover:
hints.default_config:
type: container
paths:
/var/log/container/*-${container.id}.log # CRI path
-------------------------------------------------------------------------------------
- /var/log/container/*-${container.id}.log # CRI path
-----
You can also disable default settings entirely, so only Pods annotated like `co.elastic.logs/enabled: true`
will be retrieved:
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
filebeat.autodiscover:
providers:
- type: kubernetes
hints.enabled: true
hints.default_config.enabled: false
-------------------------------------------------------------------------------------
-----
You can annotate Kubernetes Pods with useful info to spin up {beatname_uc} inputs or modules:
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
annotations:
co.elastic.logs/multiline.pattern: '^\['
co.elastic.logs/multiline.negate: true
co.elastic.logs/multiline.match: after
-------------------------------------------------------------------------------------
-----
[float]
@ -141,14 +141,14 @@ hint. For example, these hints configure multiline settings for all containers i
specific `exclude_lines` hint for the container called `sidecar`.
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
annotations:
co.elastic.logs/multiline.pattern: '^\['
co.elastic.logs/multiline.negate: true
co.elastic.logs/multiline.match: after
co.elastic.logs.sidecar/exclude_lines: '^DBG'
-------------------------------------------------------------------------------------
-----
@ -157,18 +157,18 @@ annotations:
Docker autodiscover provider supports hints in labels. To enable it just set `hints.enabled`:
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
filebeat.autodiscover:
providers:
- type: docker
hints.enabled: true
-------------------------------------------------------------------------------------
-----
You can configure the default config that will be launched when a new container is seen, like this:
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
filebeat.autodiscover:
providers:
- type: docker
@ -176,29 +176,29 @@ filebeat.autodiscover:
hints.default_config:
type: container
paths:
/var/log/container/*-${container.id}.log # CRI path
-------------------------------------------------------------------------------------
- /var/log/container/*-${container.id}.log # CRI path
-----
You can also disable default settings entirely, so only containers labeled with `co.elastic.logs/enabled: true`
will be retrieved:
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
filebeat.autodiscover:
providers:
- type: docker
hints.enabled: true
hints.default_config.enabled: false
-------------------------------------------------------------------------------------
-----
You can label Docker containers with useful info to spin up {beatname_uc} inputs, for example:
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
[source,yaml]
-----
co.elastic.logs/module: nginx
co.elastic.logs/fileset.stdout: access
co.elastic.logs/fileset.stderr: error
-------------------------------------------------------------------------------------
-----
The above labels configure {beatname_uc} to use the Nginx module to harvest logs for this container.
Access logs will be retrieved from stdout stream, and error logs from stderr.

View File

@ -10,9 +10,9 @@ filebeat.autodiscover:
equals:
kubernetes.namespace: kube-system
config:
- type: docker
containers.ids:
- "${data.kubernetes.container.id}"
- type: container
paths:
- /var/log/container/*-${data.kubernetes.container.id}.log
exclude_lines: ["^\\s+[\\-`('.|_]"] # drop asciiart lines
-------------------------------------------------------------------------------------
@ -34,7 +34,7 @@ filebeat.autodiscover:
- module: redis
log:
input:
type: docker
containers.ids:
- "${data.kubernetes.container.id}"
type: container
paths:
- /var/log/container/*-${data.kubernetes.container.id}.log
-------------------------------------------------------------------------------------

View File

@ -0,0 +1,33 @@
* Use AWS credentials in Filebeat configuration
+
[source,yaml]
----
filebeat.inputs:
- type: s3
queue_url: https://sqs.us-east-1.amazonaws.com/123/test-queue
access_key_id: '<access_key_id>'
secret_access_key: '<secret_access_key>'
session_token: '<session_token>'
----
+
or
+
[source,yaml]
----
filebeat.inputs:
- type: s3
queue_url: https://sqs.us-east-1.amazonaws.com/123/test-queue
access_key_id: '${AWS_ACCESS_KEY_ID:""}'
secret_access_key: '${AWS_SECRET_ACCESS_KEY:""}'
session_token: '${AWS_SESSION_TOKEN:""}'
----
* Use shared AWS credentials file
+
[source,yaml]
----
filebeat.inputs:
- type: s3
queue_url: https://sqs.us-east-1.amazonaws.com/123/test-queue
credential_profile_name: test-fb
----

View File

@ -41,7 +41,6 @@ The following topics describe how to configure Filebeat:
* <<configuration-logging>>
* <<using-environ-vars>>
* <<configuration-autodiscover>>
//* <<configuration-central-management>>
* <<yaml-tips>>
* <<regexp-support>>
* <<http-endpoint>>

View File

@ -1,20 +1,19 @@
[[faq]]
== Frequently asked questions
== Common problems
This section contains frequently asked questions about {beatname_uc}. Also check out the
https://discuss.elastic.co/c/beats/filebeat[{beatname_uc} discussion forum].
This section describes common problems you might encounter with
{beatname_uc}. Also check out the
https://discuss.elastic.co/c/beats/{beatname_lc}[{beatname_uc} discussion forum].
[float]
[[filebeat-network-volumes]]
=== Can't read log files from network volumes?
=== Can't read log files from network volumes
We do not recommend reading log files from network volumes. Whenever possible, install {beatname_uc} on the host machine and
send the log files directly from there. Reading files from network volumes (especially on Windows) can have unexpected side
effects. For example, changed file identifiers may result in {beatname_uc} reading a log file from scratch again.
[float]
[[filebeat-not-collecting-lines]]
=== {beatname_uc} isn't collecting lines from a file?
=== {beatname_uc} isn't collecting lines from a file
{beatname_uc} might be incorrectly configured or unable to send events to the output. To resolve the issue:
@ -31,9 +30,8 @@ it's publishing events successfully:
./filebeat -c config.yml -e -d "*"
----------------------------------------------------------------------
[float]
[[open-file-handlers]]
=== Too many open file handlers?
=== Too many open file handlers
{beatname_uc} keeps the file handler open in case it reaches the end of a file so that it can read new log lines in near real time. If {beatname_uc} is harvesting a large number of files, the number of open files can become an issue. In most environments, the number of files that are actively updated is low. The `close_inactive` configuration option should be set accordingly to close files that are no longer active.
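For example, a minimal sketch that closes handlers for files that have produced no new lines for five minutes (the path and the `5m` value are illustrative, not recommendations):

["source","yaml"]
----------------------------------------------------------------------
filebeat.inputs:
- type: log
  paths:
    - /var/log/app/*.log
  close_inactive: 5m  # close the handler after 5 minutes without new lines
----------------------------------------------------------------------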
@ -49,18 +47,16 @@ The `close_renamed` and `close_removed` options can be useful on Windows to reso
Make sure that you read the documentation for these configuration options before using any of them.
[float]
[[reduce-registry-size]]
=== Registry file is too large?
=== Registry file is too large
{beatname_uc} keeps the state of each file and persists the state to disk in the registry file. The file state is used to continue file reading at a previous position when {beatname_uc} is restarted. If a large number of new files are produced every day, the registry file might grow to be too large. To reduce the size of the registry file, there are two configuration options available: <<{beatname_lc}-input-log-clean-removed,`clean_removed`>> and <<{beatname_lc}-input-log-clean-inactive,`clean_inactive`>>.
For old files that you no longer touch and are ignored (see <<{beatname_lc}-input-log-ignore-older,`ignore_older`>>), we recommend that you use `clean_inactive`. If old files get removed from disk, then use the `clean_removed` option.
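As a sketch, assuming logs under a hypothetical `/var/log/app` path, the two options could be combined like this (the durations are illustrative; `clean_inactive` must be greater than `ignore_older` plus `scan_frequency`):

["source","yaml"]
----------------------------------------------------------------------
filebeat.inputs:
- type: log
  paths:
    - /var/log/app/*.log
  ignore_older: 48h     # stop harvesting files older than 48 hours
  clean_inactive: 72h   # drop registry state for files inactive for 72 hours
  clean_removed: true   # drop registry state for files deleted from disk
----------------------------------------------------------------------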
[float]
[[inode-reuse-issue]]
=== Inode reuse causes {beatname_uc} to skip lines?
=== Inode reuse causes {beatname_uc} to skip lines
On Linux file systems, {beatname_uc} uses the inode and device to identify files. When a file is removed from disk, the inode may be assigned to a new file. In use cases involving file rotation, if an old file is removed and a new one is created immediately afterwards, the new file may have the exact same inode as the file that was removed. In this case, {beatname_uc} assumes that the new file is the same as the old and tries to continue reading at the old position, which is not correct.
@ -68,34 +64,32 @@ By default states are never removed from the registry file. To resolve the inode
You can use <<{beatname_lc}-input-log-clean-removed,`clean_removed`>> for files that are removed from disk. Be aware that `clean_removed` cleans the file state from the registry whenever a file cannot be found during a scan. If the file shows up again later, it will be sent again from scratch.
[float]
include::filebeat-log-rotation.asciidoc[]
[[windows-file-rotation]]
=== Open file handlers cause issues with Windows file rotation?
=== Open file handlers cause issues with Windows file rotation
On Windows, you might have problems renaming or removing files because {beatname_uc} keeps the file handlers open. This can lead to issues with the file rotating system. To avoid this issue, you can use the <<{beatname_lc}-input-log-close-removed,`close_removed`>> and <<{beatname_lc}-input-log-close-renamed,`close_renamed`>> options together.
IMPORTANT: When you configure these options, files may be closed before the harvester has finished reading the files. If the file cannot be picked up again by the input and the harvester hasn't finished reading the file, the missing lines will never be sent to the output.
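A sketch of the two options combined (the Windows path is illustrative):

["source","yaml"]
----------------------------------------------------------------------
filebeat.inputs:
- type: log
  paths:
    - 'C:\logs\app\*.log'
  close_removed: true   # close the handler when the file is removed
  close_renamed: true   # close the handler when the file is renamed
----------------------------------------------------------------------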
[float]
[[filebeat-cpu]]
=== {beatname_uc} is using too much CPU?
=== {beatname_uc} is using too much CPU
{beatname_uc} might be configured to scan for files too frequently. Check the setting for `scan_frequency` in the `filebeat.yml`
config file. Setting `scan_frequency` to less than 1s may cause {beatname_uc} to scan the disk in a tight loop.
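For example, keeping the default 10s scan interval explicit (the path is illustrative):

["source","yaml"]
----------------------------------------------------------------------
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
  scan_frequency: 10s   # default; values below 1s can busy-loop on disk scans
----------------------------------------------------------------------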
[float]
[[dashboard-fields-incorrect-filebeat]]
=== Dashboard in Kibana is breaking up data fields incorrectly?
=== Dashboard in {kib} is breaking up data fields incorrectly
The index template might not be loaded correctly. See <<filebeat-template>>.
[float]
[[fields-not-indexed]]
=== Fields are not indexed or usable in Kibana visualizations?
=== Fields are not indexed or usable in {kib} visualizations
If you have recently performed an operation that loads or parses custom, structured logs,
you might need to refresh the index to make the fields available in Kibana. To refresh
you might need to refresh the index to make the fields available in {kib}. To refresh
the index, use the {ref}/indices-refresh.html[refresh API]. For example:
["source","sh"]
@ -103,21 +97,19 @@ the index, use the {ref}/indices-refresh.html[refresh API]. For example:
curl -XPOST 'http://localhost:9200/filebeat-2016.08.09/_refresh'
----------------------------------------------------------------------
[float]
[[newline-character-required-eof]]
=== {beatname_uc} isn't shipping the last line of a file?
=== {beatname_uc} isn't shipping the last line of a file
{beatname_uc} uses a newline character to detect the end of an event. If lines are added incrementally to a file that's being
harvested, a newline character is required after the last line, or {beatname_uc} will not read the last line of
the file.
[float]
[[faq-deleted-files-are-not-freed]]
=== {beatname_uc} keeps open file handlers of deleted files for a long time?
=== {beatname_uc} keeps open file handlers of deleted files for a long time
In the default behaviour, {beatname_uc} opens the files and keeps them open until it
reaches the end of them. In situations when the configured output is blocked
(e.g. Elasticsearch or Logstash is unavailable) for a long time, this can cause
(e.g. {es} or {ls} is unavailable) for a long time, this can cause
{beatname_uc} to keep file handlers to files that were deleted from the file system
in the mean time. As long as {beatname_uc} keeps the deleted files open, the
operating system doesn't free up the space on disk, which can lead to increase
@ -131,4 +123,5 @@ deleted before {beatname_uc} reaches the end of the file.
include::{libbeat-dir}/docs/faq-limit-bandwidth.asciidoc[]
include::{libbeat-dir}/docs/shared-faq.asciidoc[]

File diff suppressed because it is too large

View File

@ -95,7 +95,7 @@ directory format does not already exist.
[float]
==== `config_dir`
deprecated[6.0.0, Use <<load-input-config>> instead.]
deprecated:[6.0.0, Use <<load-input-config>> instead.]
The full path to the directory that contains additional input configuration files.
Each configuration file must end with `.yml`. Each config file must also specify the full Filebeat

View File

@ -0,0 +1,98 @@
[[file-log-rotation]]
=== Log rotation results in lost or duplicate events
{beatname_uc} supports reading from rotating log files. However, some log
rotation strategies can result in lost or duplicate events when using
{beatname_uc} to forward messages. To resolve this issue:
* *Avoid log rotation strategies that copy and truncate log files*
+
Log rotation strategies that copy and truncate the input log file can result in
{beatname_uc} sending duplicate events. This happens because {beatname_uc}
identifies files by inode and device name. During log rotation, lines that
{beatname_uc} has already processed are moved to a new file. When
{beatname_uc} encounters the new file, it reads from the beginning because the
previous state information (the offset and read timestamp) is associated with the
inode and device name of the old file.
+
Furthermore, strategies that copy and truncate the input log file can result in
lost events if lines are written to the log file after it's copied, but before
it's truncated.
* *Make sure {beatname_uc} is configured to read from all rotated logs*
+
When an input log file is moved or renamed during log rotation, {beatname_uc} is
able to recognize that the file has already been read. After the file is
rotated, a new log file is created, and the application continues logging.
{beatname_uc} picks up the new file during the next scan. Because the file
has a new inode and device name, {beatname_uc} starts reading it from the
beginning.
+
To avoid missing events from a rotated file, configure the input to read from
the log file and all the rotated files. For examples, see
<<log-rotate-example>>.
If you're using Windows, also see <<log-rotation-windows>>.
[float]
[[log-rotate-example]]
==== Example configurations
This section shows a typical configuration for logrotate, a popular tool for
doing log rotation on Linux, followed by a {beatname_uc} configuration that
reads all the rotated logs.
[float]
[[log-rotate-example-logrotate]]
===== logrotate.conf
In this example, {beatname_uc} reads a web server log. The logs are rotated every
day, and the new file is created with the specified permissions.
[source,yaml]
-----------------------------------------------------
/var/log/my-server/my-server.log {
daily
missingok
rotate 7
notifempty
create 0640 www-data www-data
}
-----------------------------------------------------
[float]
[[log-rotate-example-filebeat]]
===== filebeat.yml
In this example, {beatname_uc} is configured to read all log files to make
sure it does not miss any events.
[source,yaml]
-----------------------------------------------------
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/my-server/my-server.log*
-----------------------------------------------------
[float]
[[log-rotation-windows]]
==== More about log rotation on Windows
On Windows, log rotation schemes that delete old files and rename newer
files to old filenames might get blocked if the old files are being processed by
{beatname_uc}. This happens because Windows does not delete files and file
metadata until the last process has closed the file. Unlike most *nix
filesystems, a Windows filename cannot be reused until all processes accessing
the file have closed the deleted file.
To avoid this problem, use dates in rotated filenames. The file will never
be renamed to an older filename, and the log writer and log rotator will always
be able to open the file. This approach also greatly reduces the chance of
log writing, rotation, and collection interfering with each other.
Because log rotation is typically handled by the logging application, we are
not providing an example configuration for Windows.
Also read <<windows-file-rotation>>.

View File

@ -136,7 +136,7 @@ to <<specify-variable-settings,specify variable settings>>.
include::./include/set-paths.asciidoc[]
[[advanced-settings]]
=== Advanced settings
=== Override input settings
Behind the scenes, each module starts a {beatname_uc} input. Advanced users
can add or override any input settings. For example, you can set

View File

@ -45,11 +45,13 @@ You can configure {beatname_uc} to use the following inputs:
* <<{beatname_lc}-input-log>>
* <<{beatname_lc}-input-stdin>>
* <<{beatname_lc}-input-container>>
* <<{beatname_lc}-input-kafka>>
* <<{beatname_lc}-input-redis>>
* <<{beatname_lc}-input-udp>>
* <<{beatname_lc}-input-docker>>
* <<{beatname_lc}-input-tcp>>
* <<{beatname_lc}-input-syslog>>
* <<{beatname_lc}-input-s3>>
* <<{beatname_lc}-input-netflow>>
* <<{beatname_lc}-input-google-pubsub>>
@ -60,6 +62,8 @@ include::inputs/input-stdin.asciidoc[]
include::inputs/input-container.asciidoc[]
include::inputs/input-kafka.asciidoc[]
include::inputs/input-redis.asciidoc[]
include::inputs/input-udp.asciidoc[]
@ -70,6 +74,8 @@ include::inputs/input-tcp.asciidoc[]
include::inputs/input-syslog.asciidoc[]
include::../../x-pack/filebeat/docs/inputs/input-aws-s3.asciidoc[]
include::../../x-pack/filebeat/docs/inputs/input-netflow.asciidoc[]
include::../../x-pack/filebeat/docs/inputs/input-google-pubsub.asciidoc[]

Binary file not shown (image added; 188 KiB)

Binary file not shown (image added; 250 KiB)

View File

@ -8,8 +8,8 @@ Before doing these steps, verify that {es} and {kib} are running and
that {es} is ready to receive data from {beatname_uc}.
If you're running our
https://www.elastic.co/cloud/elasticsearch-service[hosted {es} Service] on
Elastic Cloud, or you've enabled security in {es} and {kib}, you need to specify
https://www.elastic.co/cloud/elasticsearch-service[hosted {ess}] on
{ecloud}, or you've enabled security in {es} and {kib}, you need to specify
additional connection information before setting up and running the module. See
<<filebeat-modules-quickstart>> for the complete setup.

View File

@ -4,7 +4,7 @@ ifeval::["{has-dashboards}"=="true"]
.. Open your browser and navigate to the *Dashboard* overview in {kib}:
http://localhost:5601/app/kibana#/dashboards[http://localhost:5601/app/kibana#/dashboards].
Replace `localhost` with the name of the {kib} host. If you're using an
https://cloud.elastic.co/[Elastic Cloud] instance, log in to your cloud account,
https://cloud.elastic.co/[{ecloud}] instance, log in to your cloud account,
then navigate to the {kib} endpoint in your deployment.
.. If necessary, log in with your {kib} username and password.
.. Enter the module name in the search box, then open a dashboard and explore

View File

@ -1,6 +1,6 @@
= Filebeat Reference
:libbeat-dir: ../../libbeat
:libbeat-dir: {docdir}/../../libbeat
include::{libbeat-dir}/docs/version.asciidoc[]
@ -18,6 +18,7 @@ include::{asciidoc-dir}/../../shared/attributes.asciidoc[]
:has_solutions:
:ignores_max_retries:
:has_docker_label_ex:
:has_decode_cef_processor:
:has_decode_csv_fields_processor:
:has_script_processor:
:has_timestamp_processor:

View File

@ -7,7 +7,7 @@
<titleabbrev>Docker</titleabbrev>
++++
deprecated[7.2.0, Use `container` input instead.]
deprecated:[7.2.0, Use `container` input instead.]
Use the `docker` input to read logs from Docker containers.

View File

@ -0,0 +1,126 @@
:type: kafka
[id="{beatname_lc}-input-{type}"]
=== Kafka input
++++
<titleabbrev>Kafka</titleabbrev>
++++
Use the `kafka` input to read from topics in a Kafka cluster.
To configure this input, specify a list of one or more <<kafka-hosts,`hosts`>> in the
cluster to bootstrap the connection with, a list of <<topics,`topics`>> to
track, and a <<groupid,`group_id`>> for the connection.
Example configuration:
["source","yaml",subs="attributes"]
----
{beatname_lc}.inputs:
- type: kafka
hosts:
- kafka-broker-1:9092
- kafka-broker-2:9092
topics: ["my-topic"]
group_id: "filebeat"
----
[id="{beatname_lc}-input-{type}-options"]
==== Configuration options
The `kafka` input supports the following configuration options plus the
<<{beatname_lc}-input-{type}-common-options>> described later.
[float]
[[kafka-hosts]]
===== `hosts`
A list of Kafka bootstrapping hosts (brokers) for this cluster.
[float]
[[topics]]
===== `topics`
A list of topics to read from.
[float]
[[groupid]]
===== `group_id`
The Kafka consumer group id.
[float]
===== `client_id`
The Kafka client id (optional).
[float]
===== `version`
The version of the Kafka protocol to use (defaults to `"1.0.0"`).
[float]
===== `initial_offset`
The initial offset to start reading, either "oldest" or "newest". Defaults to
"oldest".
[float]
===== `connect_backoff`
How long to wait before trying to reconnect to the kafka cluster after a
fatal error. Default is 30s.
[float]
===== `consume_backoff`
How long to wait before retrying a failed read. Default is 2s.
[float]
===== `max_wait_time`
How long to wait for the minimum number of input bytes while reading. Default
is 250ms.
[float]
===== `wait_close`
When shutting down, how long to wait for in-flight messages to be delivered
and acknowledged.
[float]
===== `isolation_level`
This configures the Kafka group isolation level:
- `"read_uncommitted"` returns _all_ messages in the message channel.
- `"read_committed"` hides messages that are part of an aborted transaction.
The default is `"read_uncommitted"`.
[float]
===== `fetch`
Kafka fetch settings:
*`min`*:: The minimum number of bytes to wait for. Defaults to 1.
*`default`*:: The default number of bytes to read per request. Defaults to 1MB.
*`max`*:: The maximum number of bytes to read per request. Defaults to 0
(no limit).
[float]
===== `rebalance`
Kafka rebalance settings:
*`strategy`*:: Either `"range"` or `"roundrobin"`. Defaults to `"range"`.
*`timeout`*:: How long to wait for an attempted rebalance. Defaults to 60s.
*`max_retries`*:: How many times to retry if rebalancing fails. Defaults to 4.
*`retry_backoff`*:: How long to wait after an unsuccessful rebalance attempt.
Defaults to 2s.
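As a sketch combining several of these options (broker names and values are illustrative, not tuning advice):

["source","yaml",subs="attributes"]
----
{beatname_lc}.inputs:
- type: kafka
  hosts: ["kafka-broker-1:9092", "kafka-broker-2:9092"]
  topics: ["my-topic"]
  group_id: "filebeat"
  initial_offset: "newest"
  isolation_level: "read_committed"
  fetch.default: 1048576            # read up to 1MB per request
  rebalance.strategy: "roundrobin"
  rebalance.retry_backoff: 5s
----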
[id="{beatname_lc}-input-{type}-common-options"]
include::../inputs/input-common-options.asciidoc[]
:type!:

View File

@ -57,10 +57,14 @@ multiple input sections:
IMPORTANT: Make sure a file is not defined more than once across all inputs
because this can lead to unexpected behaviour.
NOTE: When dealing with file rotation, avoid harvesting symlinks. Instead
[[rotating-logs]]
==== Reading from rotating logs
When dealing with file rotation, avoid harvesting symlinks. Instead
use the <<input-paths>> setting to point to the original file, and specify
a pattern that matches the file you want to harvest and all of its rotated
files.
files. Also make sure your log rotation strategy prevents lost or duplicate
messages. For more information, see <<file-log-rotation>>.
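For example, a minimal sketch where the glob matches both the live file and its rotated copies (the path is illustrative):

[source,yaml]
----
filebeat.inputs:
- type: log
  paths:
    - /var/log/my-app/app.log*   # matches app.log, app.log.1, app.log-20191107, ...
----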
[id="{beatname_lc}-input-{type}-options"]
==== Configuration options

View File

@ -18,7 +18,7 @@ Example configuration:
{beatname_lc}.inputs:
- type: redis
hosts: ["localhost:6379"]
password: "$\{redis_pwd\}"
password: "${redis_pwd}"
----

View File

@ -9,6 +9,4 @@ modules.
Filebeat modules require Elasticsearch 5.2 or later.
//pass macro block used here to remove Edit links from modules documentation because it is generated
pass::[<?edit_url?>]
include::modules_list.asciidoc[]

View File

@ -71,6 +71,8 @@ include::../include/var-paths.asciidoc[]
include::../include/var-paths.asciidoc[]
include::../include/timezone-support.asciidoc[]
:has-dashboards!:
:fileset_ex!:

View File

@ -0,0 +1,56 @@
////
This file is generated! See scripts/docs_collector.py
////
[[filebeat-module-aws]]
[role="xpack"]
:modulename: aws
:has-dashboards: true
== AWS module
beta[]
This is a module for AWS logs. It uses the filebeat `s3` input to get log files from
AWS S3 buckets with SQS notification. This module supports reading S3 server
access logs with the `s3access` fileset. Server access logging provides detailed
records for the requests that are made to a bucket.
[float]
=== Example dashboard
This module comes with a sample dashboard for `s3access` fileset:
[role="screenshot"]
image::./images/filebeat-aws-s3access-overview.png[]
[float]
==== `s3access` fileset settings
Example config:
[source,yaml]
----
- module: aws
s3access:
enabled: true
var.queue_url: https://sqs.us-west-1.amazonaws.com/123/queue-name
var.credential_profile_name: fb-aws
----
*`var.queue_url`*::
AWS SQS queue url.
*`var.credential_profile_name`*::
AWS credential profile name.
[float]
=== Fields
For a description of each field in the module, see the
<<exported-fields-aws,exported fields>> section.

View File

@ -0,0 +1,55 @@
////
This file is generated! See scripts/docs_collector.py
////
[[filebeat-module-cef]]
[role="xpack"]
:modulename: cef
:has-dashboards: false
== CEF module
This is a module for receiving Common Event Format (CEF) data over Syslog. When
messages are received over the syslog protocol, the syslog input parses the
header and sets the timestamp value. Then the
<<processor-decode-cef, `decode_cef`>> processor is applied to parse the CEF
encoded data. The decoded data is written into a `cef` object field. Lastly, any
Elastic Common Schema (ECS) fields that can be populated with the CEF data are
populated.
include::../include/running-modules.asciidoc[]
include::../include/configuring-intro.asciidoc[]
:fileset_ex: log
include::../include/config-option-intro.asciidoc[]
[float]
==== `log` fileset settings
*`var.syslog_host`*::
The interface to listen to UDP based syslog traffic. Defaults to `localhost`.
Set to `0.0.0.0` to bind to all available interfaces.
*`var.syslog_port`*::
The UDP port to listen for syslog traffic. Defaults to `9003`.
NOTE: Ports below 1024 require Filebeat to run as root.
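A sketch of a +modules.d/cef.yml+ entry using these settings (the host and port values are illustrative):

[source,yaml]
----
- module: cef
  log:
    enabled: true
    var.syslog_host: 0.0.0.0   # listen on all interfaces
    var.syslog_port: 9003
----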
:has-dashboards!:
:fileset_ex!:
:modulename!:
[float]
=== Fields
For a description of each field in the module, see the
<<exported-fields-cef,exported fields>> section.

View File

@ -12,10 +12,12 @@ This file is generated! See scripts/docs_collector.py
beta[]
This is a module for Cisco network device's logs. The `asa` fileset supports
Cisco ASA firewall logs received over syslog or read from a file. And the `ios`
fileset supports Cisco IOS router and switch logs received over syslog or read
from a file.
This is a module for Cisco network devices' logs. It includes the following
filesets for receiving logs over syslog or reading them from a file:
- `asa` fileset: supports Cisco ASA firewall logs.
- `ftd` fileset: supports Cisco Firepower Threat Defense logs.
- `ios` fileset: supports Cisco IOS router and switch logs.
Cisco ASA devices also support exporting flow records using NetFlow, which is
supported by the {filebeat-ref}/filebeat-module-netflow.html[netflow module] in
@ -103,6 +105,148 @@ The UDP port to listen for syslog traffic. Defaults to 9001.
:fileset_ex!:
[float]
==== `ftd` fileset settings
The Cisco FTD fileset primarily supports parsing IPv4 and IPv6 access list log
messages, similar to those of ASA devices, as well as Security Event Syslog
Messages for Intrusion, Connection, File, and Malware events.
*Field mappings*
The `ftd` fileset maps Security Event Syslog Messages to the Elastic Common
Schema (ECS) format. The following table illustrates the mapping from
Security Event fields to ECS. The `cisco.ftd` prefix is used when there is no
corresponding ECS field available.
Mappings for Intrusion events fields:
[options="header"]
|====================================
| FTD Field | Mapped fields
| ApplicationProtocol | network.protocol
| DstIP | destination.address
| DstPort | destination.port
| EgressInterface | cisco.ftd.destination_interface
| GID | service.id
| HTTPResponse | http.response.status_code
| IngressInterface | cisco.ftd.source_interface
| InlineResult | event.outcome
| IntrusionPolicy | cisco.ftd.rule_name
| Message | message
| Protocol | network.transport
| SrcIP | source.address
| SrcPort | source.port
| User | user.id, user.name
| WebApplication | network.application
|====================================
Mappings for Connection and Security Intelligence events fields:
[options="header"]
|====================================
| FTD Field | Mapped fields
| ACPolicy | cisco.ftd.rule_name
| AccessControlRuleAction | event.outcome
| AccessControlRuleName | cisco.ftd.rule_name
| ApplicationProtocol | network.protocol
| ConnectionDuration | event.duration
| DNSQuery | dns.question.name
| DNSRecordType | dns.question.type
| DNSResponseType | dns.response_code
| DstIP | destination.address
| DstPort | destination.port
| EgressInterface | cisco.ftd.destination_interface
| HTTPReferer | http.request.referrer
| HTTPResponse | http.response.status_code
| IngressInterface | cisco.ftd.source_interface
| InitiatorBytes | source.bytes
| InitiatorPackets | source.packets
| NetBIOSDomain | host.hostname
| Protocol | network.transport
| ReferencedHost | url.domain
| ResponderBytes | destination.bytes
| ResponderPackets | destination.packets
| SSLActualAction | event.outcome
| SSLServerName | server.domain
| SrcIP | source.address
| SrcPort | source.port
| URL | url.original
| User | user.name
| UserAgent | user_agent.original
| WebApplication | network.application
| originalClientSrcIP | client.address
|====================================
Mappings for File and Malware events fields:
[options="header"]
|====================================
| FTD Field | Mapped fields
| ApplicationProtocol | network.protocol
| ArchiveFileName | file.name
| ArchiveSHA256 | file.hash.sha256
| Client | network.application
| DstIP | destination.address
| DstPort | destination.port
| FileName | file.name
| FilePolicy | cisco.ftd.rule_name
| FileSHA256 | file.hash.sha256
| FileSize | file.size
| FirstPacketSecond | event.start
| Protocol | network.transport
| SrcIP | source.address
| SrcPort | source.port
| URI | url.original
| User | user.name
| WebApplication | network.application
|====================================
*Example configuration:*
[source,yaml]
----
- module: cisco
ftd:
var.syslog_host: 0.0.0.0
var.syslog_port: 9003
var.log_level: 5
----
include::../include/var-paths.asciidoc[]
*`var.log_level`*::
An integer between 1 and 7 that allows you to filter messages based on the
severity level. The different severity levels are:
[width="30%",cols="^1,2",options="header"]
|===========================
| log_level | severity
| 1 | Alert
| 2 | Critical
| 3 | Error
| 4 | Warning
| 5 | Notification
| 6 | Informational
| 7 | Debugging
|===========================
A value of 7 (default) will not filter any messages. A lower value will drop
any messages with a severity level higher than the specified value. For
example, `var.log_level: 3` will allow messages of level 1 (Alert), 2 (Critical)
and 3 (Error). All other messages will be dropped.
*`var.syslog_host`*::
The interface to listen to UDP based syslog traffic. Defaults to localhost.
Set to 0.0.0.0 to bind to all available interfaces.
*`var.syslog_port`*::
The UDP port to listen for syslog traffic. Defaults to 9003.
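For example, a sketch that keeps only messages of severity Error and above (host, port, and level values are illustrative):

[source,yaml]
----
- module: cisco
  ftd:
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9003
    var.log_level: 3   # keep Alert (1), Critical (2), and Error (3); drop the rest
----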
:has-dashboards!:
:fileset_ex!:
[float]
==== `ios` fileset settings
@ -130,6 +274,8 @@ Set to 0.0.0.0 to bind to all available interfaces.
The UDP port to listen for syslog traffic. Defaults to 9002.
include::../include/timezone-support.asciidoc[]
:has-dashboards!:
:fileset_ex!:

View File

@ -123,6 +123,8 @@ NOTE: If you're running against Elasticsearch >= 7.0.0, configure the
`var.paths` setting to point to JSON logs. Otherwise, configure it
to point to plain text logs.
include::../include/timezone-support.asciidoc[]
:has-dashboards!:
:fileset_ex!:

View File

@ -0,0 +1,66 @@
////
This file is generated! See scripts/docs_collector.py
////
[[filebeat-module-ibmmq]]
[role="xpack"]
:modulename: ibmmq
== IBM MQ module
beta[]
The `ibmmq` module collects and parses the queue manager error logs from IBM MQ in the standard format.
include::../include/what-happens.asciidoc[]
[float]
=== Compatibility
This module has been tested with IBM MQ v9.1.0.0, but it should be compatible with older versions.
include::../include/running-modules.asciidoc[]
[float]
=== Example dashboard
This module comes with a sample dashboard. For example:
[role="screenshot"]
image::./images/filebeat-ibmmq.png[]
include::../include/configuring-intro.asciidoc[]
The following example shows how to set paths in the +modules.d/{modulename}.yml+
file to override the default paths for the IBM MQ error log:
["source","yaml",subs="attributes"]
-----
- module: ibmmq
errorlog:
enabled: true
var.paths: ["C:/ibmmq/logs/*.log"]
-----
:fileset_ex: errorlog
include::../include/config-option-intro.asciidoc[]
[float]
==== `errorlog` fileset settings
include::../include/var-paths.asciidoc[]
:fileset_ex!:
:modulename!:
[float]
=== Fields
For a description of each field in the module, see the
<<exported-fields-ibmmq,exported fields>> section.

View File

@ -75,6 +75,8 @@ The UDP port to listen for syslog traffic. Defaults to `9001`
NOTE: Ports below 1024 require Filebeat to run as root.
include::../include/timezone-support.asciidoc[]
:has-dashboards!:
:fileset_ex!:

View File

@ -94,6 +94,8 @@ include::../include/var-paths.asciidoc[]
The configured Logstash log format. Possible values are: `json` or `plain`. The
default is `plain`.
include::../include/timezone-support.asciidoc[]
:has-dashboards!:
:fileset_ex!:

View File

@ -48,6 +48,8 @@ include::../include/config-option-intro.asciidoc[]
include::../include/var-paths.asciidoc[]
include::../include/timezone-support.asciidoc[]
:has-dashboards!:
:fileset_ex!:

View File

@ -74,6 +74,8 @@ include::../include/var-paths.asciidoc[]
include::../include/var-paths.asciidoc[]
include::../include/timezone-support.asciidoc[]
:has-dashboards!:
:fileset_ex!:

View File

@ -57,6 +57,8 @@ To specify the same settings at the command line, you use:
-M "osquery.result.var.paths=[/path/to/osqueryd.results.log*]"
-----
//set the fileset name used in the included example
:fileset_ex: result
include::../include/config-option-intro.asciidoc[]
[float]

Some files were not shown because too many files have changed in this diff