The logs are Python logs emitted under supervisor, with several different formats written into the same set of files. The requirement is to separate the lines containing the keywords "post" and "ERROR" and route them into two different Kafka topics. The complication is that rsyslog is already collecting the nginx access logs on these hosts, and the two pipelines must not interfere with each other; that means the separation cannot be done with a plain top-level if test, because the streams could get mixed up. The log format to be collected looks like this:
ERROR:root:requeue {"withRefresh": false, "localPath": "/data1/ms/cache/file_store_location/n.fdaimg.cn/translate/20170219/oobE-fyarref6029227.jpg?43", "remotePath": "translate/20170219/oobE-fyarref6029227.jpg?43"}
INFO:root:2017-02-22T11:53:11.395165, {"withRefresh": false, "localPath": "/data1/ms/cache/file_store_location/n.adfaimg.cn/w/20170222/aue--fyarref6523250.jpeg", "remotePath": "w/20170222/aue--fyarref6523250.jpeg"}
INFO:root:post /data1/ms/cache/file_store_location/n.fsdaimg.cn/w/20170222/aue--fyarref6523250.jpeg to w/20170222/aue--fyarref6523250.jpeg took 112.954854965 ms
...
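To make the record boundaries explicit, here is a small Python sketch that splits a stream like the one above on the `LEVEL:root:` prefix that starts each record. The sample string is a shortened, hypothetical version of the records shown (paths abbreviated with `...` for readability):

```python
import re

# Shortened stand-in for the three sample records above, run together.
stream = (
    'ERROR:root:requeue {"withRefresh": false, "localPath": "/data1/.../a.jpg?43", '
    '"remotePath": "translate/20170219/a.jpg?43"} '
    'INFO:root:2017-02-22T11:53:11.395165, {"withRefresh": false, '
    '"localPath": "/data1/.../b.jpeg", "remotePath": "w/20170222/b.jpeg"} '
    'INFO:root:post /data1/.../b.jpeg to w/20170222/b.jpeg took 112.954854965 ms'
)

# Split at every position where a new "LEVEL:root:" prefix begins.
# The zero-width lookahead keeps the prefix attached to its record.
records = [r.strip()
           for r in re.split(r'(?=(?:ERROR|INFO):root:)', stream)
           if r.strip()]

for r in records:
    print(r[:40])
```

Note that the first and third records here are exactly the ones the ruleset below has to pick out: one matches "ERROR", the other matches "post".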
The rsyslog rules in place before this change were as follows:
module(load="imfile")
module(load="omkafka")
$PreserveFQDN on
main_queue(
    queue.workerthreads="10"       # threads to work on the queue
    queue.dequeueBatchSize="1000"  # max number of messages to process at once
    queue.size="50000"             # max queue size
)

######################### nginx access #####################
$template nginxlog,"xd172\.16\.11\.44`%msg%"
ruleset(name="nginxlog") {
    action(
        broker=["10.13.88.190:9092","10.13.88.191:9092","10.13.88.192:9092","10.13.88.193:9092"]
        type="omkafka"
        topic="cms-nimg-s3"
        template="nginxlog"
        partitions.auto="on"
    )
}
input(type="imfile"
    File="/data1/ms/comos/logs/access_s3.log"
    Tag=""
    ruleset="nginxlog"
    freshStartTail="on"
    reopenOnTruncate="on"
)
The first idea was to do the separation with a plain if test, but it turned out that every log line would then pass through that if, which could easily mix the streams. Further testing showed that an if test can actually be nested inside a ruleset, so the filtering only ever sees the files bound to that ruleset. Amazing rsyslog; that solved the whole problem. The configuration:
module(load="imfile")
module(load="omkafka")
$PreserveFQDN on
main_queue(
    queue.workerthreads="10"       # threads to work on the queue
    queue.dequeueBatchSize="1000"  # max number of messages to process at once
    queue.size="50000"             # max queue size
)

######################### nginx access #####################
$template nginxlog,"xd172\.16\.11\.44`%msg%"
ruleset(name="nginxlog") {
    action(
        broker=["10.13.88.190:9092","10.13.88.191:9092","10.13.88.192:9092","10.13.88.193:9092"]
        type="omkafka"
        topic="cms-nimg-s3"
        template="nginxlog"
        partitions.auto="on"
    )
}
input(type="imfile"
    File="/data1/ms/comos/logs/access_s3.log"
    Tag=""
    ruleset="nginxlog"
    freshStartTail="on"
    reopenOnTruncate="on"
)

####################### python s3 post error ################################
$template s3post,"xd172\.16\.11\.43 %msg%"
ruleset(name="s3post") {
    if ( $msg contains "post" ) then {
        action(
            broker=["10.13.88.190:9092","10.13.88.191:9092","10.13.88.192:9092","10.13.88.193:9092"]
            type="omkafka"
            topic="cms-nimg-s3-post"
            template="s3post"
            partitions.auto="on"
        )
    }
    if ( $msg contains "ERROR" ) then {
        action(
            broker=["10.13.88.190:9092","10.13.88.191:9092","10.13.88.192:9092","10.13.88.193:9092"]
            type="omkafka"
            topic="cms-nimg-s3-post-error"
            template="s3post"
            partitions.auto="on"
        )
    }
}
input(type="imfile"
    File="/data1/ms/comos/logs/s3q_daemon_0.err"
    Tag=""
    ruleset="s3post"
    freshStartTail="on"
    reopenOnTruncate="on"
)
input(type="imfile"
    File="/data1/ms/comos/logs/s3q_daemon_1.err"
    Tag=""
    ruleset="s3post"
    freshStartTail="on"
    reopenOnTruncate="on"
)
input(type="imfile"
    File="/data1/ms/comos/logs/s3q_daemon_2.err"
    Tag=""
    ruleset="s3post"
    freshStartTail="on"
    reopenOnTruncate="on"
)
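One subtlety worth spelling out: the two `if` blocks in the s3post ruleset are independent, not chained with an else, so a line that contains both keywords is published to both topics. A minimal Python sketch of just that routing decision (topic names taken from the config above; the actual Kafka delivery is omitted):

```python
# Mirrors the two independent `if ($msg contains ...)` tests in the
# s3post ruleset: each test fires on its own, so a message holding
# both keywords lands in both topics.
def route(msg):
    topics = []
    if 'post' in msg:        # if ( $msg contains "post" )
        topics.append('cms-nimg-s3-post')
    if 'ERROR' in msg:       # if ( $msg contains "ERROR" )
        topics.append('cms-nimg-s3-post-error')
    return topics

print(route('INFO:root:post /a to b took 1 ms'))   # post topic only
print(route('ERROR:root:requeue {...}'))           # error topic only
print(route('ERROR:root:post failed'))             # both topics
```

Note also that, as far as I know, rsyslog's `contains` is case-sensitive, matching the config's use of lowercase "post" but uppercase "ERROR"; lines logging "POST" would not match the first test.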
Original work of "运维网咖社"; reproduction is permitted provided the original source, author information, and this notice are credited with a hyperlink. http://www.net-add.com
The author, "矢量比特", previously worked at Chinasoft and Sina and is now at Xiaomi, focusing on exploring DevOps operations practices and researching operations technology.