filebeat + kafka, drop_fields: customizing the fields sent into Kafka
Anonymous | Published 2017-11-28 | Views: 6794
Filebeat ships log entries into Kafka, and when I consume from Kafka I see events like this:
{
  "@timestamp": "2017-11-27T07:48:10.967Z",
  "@metadata": {"beat":"filebeat","type":"doc","version":"6.0.0","topic":"beats"},
  "source": "/var/log/secure",
  "offset": 13588,
  "message": "Nov 27 15:48:04 host-127-0-0-1 sshd[11654]: Connection closed by 0.0.0.0",
  "prospector": {"type":"log"},
  "beat": {"name":"host-127-0-0-1","hostname":"host-127-0-0-1","version":"6.0.0"}
}
I only want the log line itself, "Nov 27 15:48:04 host-127-0-0-1 sshd[11654]: Connection closed by 0.0.0.0", to go into Kafka. Why are all these extra fields attached?
I used drop_fields to remove fields. Every other field can be dropped, but @timestamp and @metadata simply will not go away. Any ideas?
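For reference, a sketch of the relevant filebeat.yml fragment (field names taken from the event above). The behavior described in the question matches the documented limitation: drop_fields cannot remove the @timestamp field, and @metadata is added by Filebeat's publisher for the outputs, after processors have run.

```yaml
# filebeat.yml -- sketch, assuming Filebeat 6.0 as shown in the event
processors:
  - drop_fields:
      # These top-level fields can be dropped successfully.
      fields: ["source", "offset", "prospector", "beat"]
      # Note: listing "@timestamp" here has no effect -- Filebeat's docs
      # state it cannot be dropped; "@metadata" is attached downstream
      # of the processor chain, so drop_fields never sees it.
```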
3 replies
Anonymous user
Upvoted by:
Nov 27 15:48:04 host-127-0-0-1 sshd[11654]: Connection closed by 0.0.0.0
This is the log line collected from /var/log/secure.
yushun
Upvoted by:
Filebeat uses the @metadata field to send metadata to Logstash/Kafka.
For more information about the @metadata field, see the Logstash documentation:
https://www.elastic.co/guide/e ... adata
Reposted from http://www.jianshu.com/p/fb0ac96b85d7
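If the actual goal is to put only the raw log line onto the Kafka topic, one approach worth trying (a sketch, assuming the Kafka output's codec.format support in Filebeat 5.x/6.x; the broker address is a placeholder) is to replace the default JSON codec with a format string that emits just the message field:

```yaml
# filebeat.yml -- sketch; "localhost:9092" is a hypothetical broker address
output.kafka:
  hosts: ["localhost:9092"]
  topic: "beats"
  codec.format:
    # Emit only the raw log line; no JSON envelope, so @timestamp and
    # @metadata never appear in the Kafka message payload at all.
    string: '%{[message]}'
```

This sidesteps drop_fields entirely: instead of trying to strip fields from the JSON event, the output simply never serializes them.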
qvitt
Upvoted by: