Installed Logstash with Helm 3 and configured logback, but the logged events keep coming out wrong — how do I fix this?

北北che

Logstash configuration:

```yaml
global:
  storageClass: alibabacloud-cnfs-nas
service:
  type: NodePort
  ports:
    - name: http
      port: 8080
      targetPort: http
      protocol: TCP
    - name: syslog-udp
      port: 1514
      targetPort: syslog-udp
      protocol: UDP
    - name: syslog-tcp
      port: 1514
      targetPort: syslog-tcp
      protocol: TCP
persistence:
  # Cloud disk
  # storageClass: "alicloud-disk-ssd"
  # size: 20Gi
  # NAS
  storageClass: alibabacloud-cnfs-nas
  size: 2Gi
input: |-
  udp {
    port => 1514
    codec => json_lines
  }
  tcp {
    port => 1514
    codec => json_lines
  }
  http {
    port => 8080
  }
filter: |-
  json {
    source => "message"
    target => "json"
  }
output: |-
  if [env] != "" {
    elasticsearch {
      hosts => ["xxx.xxx.xxx.xxx:xxxx"]
      index => "logs33--success-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["xxx.xxx.xxx.xxx:xxxx"]
      index => "logs-failure-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
```

Logback configuration (the XML tags were swallowed when the page was rendered; only the attribute values survived — destination, `{"env":"dev"}`, UTC, the JSON pattern, the level-order comment, and the INFO root level — so the setup was roughly along these lines):

```xml
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
  <destination>xxx.xxx.xxx.xxx:xxx</destination>
  <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
      <!-- {"env":"dev"} appeared here in the original, presumably as custom fields -->
      <timestamp><timeZone>UTC</timeZone></timestamp>
      <pattern>
        <pattern>
          {
            "serviceName": "${name}",
            "level": "%level",
            "message": "%message",
            "env": "test",
            "stack_trace": "%exception{5}",
            "pid": "${PID:-}",
            "thread": "%thread",
            "class": "%logger{40}"
          }
        </pattern>
      </pattern>
    </providers>
  </encoder>
</appender>
<!-- Log levels: ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
<root level="INFO">
  <appender-ref ref="LOGSTASH"/>
</root>
```

Logstash's printed log:

```
[2023-09-22T02:26:50,029][INFO ][logstash.codecs.json][main][f3916e23ca79e9308acd3be143501936b256d568e41e841a6fd83f731839d2c0] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
{
         "event" => {
        "original" => ""
    },
          "json" => nil,
       "message" => "",
          "host" => {
        "ip" => "10.0.125.0"
    },
           "url" => {
        "path" => "/bad-request"
    },
          "http" => {
        "version" => "HTTP/1.0",
         "method" => "GET"
    },
    "@timestamp" => 2023-09-22T02:26:50.030993835Z,
      "@version" => "1"
}
```

Judging from this output, the event format is clearly wrong: `message` is empty and `json` is nil — none of the fields logback is supposed to send are arriving.
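For reference, the `json_lines` codec on the tcp/udp inputs consumes newline-delimited JSON — one complete object per line — which is also the framing `LogstashTcpSocketAppender` produces. A minimal sketch of that framing, using a throwaway local TCP server in place of a real Logstash instance (the server, ephemeral port, and event fields here are illustrative, not from the original setup):

```python
import json
import socket
import threading

# A tcp input with `codec => json_lines` expects one complete JSON object
# per newline-terminated line. This sketch sends such an event to a local
# stand-in server (an ephemeral port instead of 1514, since no real
# Logstash instance is assumed) and shows what arrives on the wire.

def run_server(server_sock, received):
    conn, _ = server_sock.accept()
    with conn:
        data = b""
        while not data.endswith(b"\n"):
            chunk = conn.recv(4096)
            if not chunk:
                break
            data += chunk
        received.append(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port instead of 1514
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=run_server, args=(server, received))
t.start()

# What the appender effectively does: serialize the event as JSON and
# terminate it with "\n" so json_lines can split the byte stream.
event = {"message": "Hello World", "env": "dev"}
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(json.dumps(event).encode("utf-8") + b"\n")

t.join()
server.close()

wire = received[0]
print(wire)                       # the newline-terminated JSON line
print(json.loads(wire) == event)  # True: the event round-trips cleanly
```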

Solved it myself — the following logstash configuration works. Compared with the original values, the notable additions are the explicit `containerPorts` entries (so the pod actually exposes 1514) and `persistence.enabled: true`:

```yaml
global:
  storageClass: alibabacloud-cnfs-nas
service:
  type: NodePort
  ports:
    - name: http
      port: 8080
      targetPort: http
      protocol: TCP
    - name: syslog-udp
      port: 1514
      targetPort: syslog-udp
      protocol: UDP
    - name: syslog-tcp
      port: 1514
      targetPort: syslog-tcp
      protocol: TCP
persistence:
  enabled: true
  # NAS
  storageClass: alibabacloud-cnfs-nas
  size: 2Gi
containerPorts:
  - name: http
    containerPort: 8080
    protocol: TCP
  - name: monitoring
    containerPort: 9600
    protocol: TCP
  - name: syslog-udp
    containerPort: 1514
    protocol: UDP
  - name: syslog-tcp
    containerPort: 1514
    protocol: TCP
input: |-
  udp {
    port => 1514
    type => syslog
    codec => json_lines
  }
  tcp {
    port => 1514
    type => syslog
    codec => json_lines
  }
  http {
    port => 8080
  }
output: |-
  if [active] != "" {
    elasticsearch {
      hosts => ["xxx.xxx.xxx.xxx:xxxx"]
      index => "%{active}-logs-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["xxx.xxx.xxx.xxx:xxxx"]
      index => "ignore-logs-%{+YYYY.MM.dd}"
    }
  }
  stdout { }
```

Here is how I worked through it:

1. At first I assumed logstash itself was broken, but a test message sent with curl went through fine:

   ```shell
   kubectl port-forward service/logstash 8080:8080 -nlogstash
   curl -X POST -d '{"message": "Hello World","env": "dev"}' http://localhost:8080
   ```

2. Since logstash seemed fine, I checked whether logback was misconfigured — but no combination of settings helped.

3. So I changed approach: curl could deliver logs but logback could not, so I decided to capture the request logback actually sends. While re-reading the logback configuration I noticed it uses the `net.logstash.logback.appender.LogstashTcpSocketAppender` class, whereas my curl test had gone over HTTP. That led me to suspect logstash's TCP port was not set up correctly, which brought me back to the logstash configuration.

4. The problem was finally fixed once I changed the logstash configuration so that the TCP port was reachable. So the root cause was simply an unreachable TCP port, which you can test with:

   ```shell
   telnet xxx.xxx.xxx.xxx xxxx
   ```

5. In summary, the whole detour came from not knowing logstash well enough: I didn't realize logback ships its logs to logstash over TCP, and I kept trusting the fact that curl (over HTTP) worked. Fortunately I found the real problem in the end.
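The telnet check from step 4 can also be scripted. A minimal sketch, assuming only that you want to know whether a TCP handshake to logstash's input port succeeds (the host below is a placeholder, like the `xxx` values in the post):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True when a TCP handshake to host:port succeeds.

    connect_ex returns 0 on success and an errno otherwise, making this
    the scripted equivalent of a successful `telnet host port`.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

# Usage against your logstash service (real host/port go where the
# placeholders are; an unreachable port returns False instead of hanging):
# tcp_port_open("xxx.xxx.xxx.xxx", 1514)
```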