DevOps – ELK – The build steps of filebeat

filebeat runs on each AP (application) host to collect log files from the configured paths and ship them to logstash.

Because each log source needs its own multiline event delimiting settings, collection is split into:

  • fusion_log
    • from: custom Java logback logs
    • output: logstash port 5000
  • tomcat_log
    • from: catalina.log, localhost.log
    • output: logstash port 5001
  • payara_log
    • from: server.log
    • output: logstash port 5002

with each group running as its own filebeat service. (fusion_log is the company's custom log format.) The sketch below illustrates the kind of per-source multiline setting that drives this split.
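
filebeat itself can also merge multiline events per prospector; the configs below leave multiline commented out, so the per-source event merging presumably happens on the logstash side, one port per source. As an illustrative sketch only (the timestamp pattern is an assumption, not this deployment's actual rule), a per-prospector multiline merge for a Java log would look like:

  multiline:
    # Lines that start a new event begin with a date such as 2016-01-01;
    # any other line (e.g. a stack-trace frame) is appended to the previous event.
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after
    max_lines: 500
    timeout: 5s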

The build steps are as follows:

  1. Create a filebeat-registry directory for each instance (to hold its registry mark file), and create each config file directly under the home directory:
    • fusion_log
      $ mkdir -p /home/fusion-ap/fusion-filebeat-registry
      $ vi ~/fusion-filebeat.yml
      
    • fusion-filebeat.yml contents
      ################### Filebeat Configuration Example #########################
      
      ############################# Filebeat ######################################
      filebeat:
        # List of prospectors to fetch data.
        prospectors:
          # Each - is a prospector. Below are the prospector specific configurations
          #-
            # Paths that should be crawled and fetched. Glob based paths.
            # To fetch all ".log" files from a specific level of subdirectories
            # /var/log/*/*.log can be used.
            # For each file found under this path, a harvester is started.
            # Make sure no file is defined twice as this can lead to unexpected behaviour.
            #paths:
              #- c:\programdata\elasticsearch\logs\*
      
            # Configure the file encoding for reading files with international characters
            # following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
            # Some sample encodings:
            #   plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
            #    hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
            #encoding: plain
      
            # Type of the files. Based on this the way the file is read is decided.
            # The different types cannot be mixed in one prospector
            #
            # Possible options are:
            # * log: Reads every line of the log file (default)
            # * stdin: Reads the standard in
            #input_type: log
      
            # Exclude lines. A list of regular expressions to match. It drops the lines that are
            # matching any regular expression from the list. The include_lines is called before
            # exclude_lines. By default, no lines are dropped.
            # exclude_lines: ["^DBG"]
      
            # Include lines. A list of regular expressions to match. It exports the lines that are
            # matching any regular expression from the list. The include_lines is called before
            # exclude_lines. By default, all the lines are exported.
            # include_lines: ["^ERR", "^WARN"]
      
            # Exclude files. A list of regular expressions to match. Filebeat drops the files that
            # are matching any regular expression from the list. By default, no files are dropped.
            # exclude_files: [".gz$"]
      
            # Optional additional fields. These field can be freely picked
            # to add additional information to the crawled log files for filtering
            #fields:
            #  level: debug
            #  review: 1
      
            # Set to true to store the additional fields as top level fields instead
            # of under the "fields" sub-dictionary. In case of name conflicts with the
            # fields added by Filebeat itself, the custom fields overwrite the default
            # fields.
            #fields_under_root: false
      
            # Ignore files which were modified more than the defined timespan in the past.
            # In case all files on your system must be read you can set this value very large.
            # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
            #ignore_older: 0
      
            # Close older closes the file handler for files which were not modified
            # for longer than close_older
            # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
            #close_older: 1h
      
            # Type to be published in the 'type' field. For Elasticsearch output,
            # the type defines the document type these entries should be stored
            # in. Default: log
            #document_type: log
      
            # Scan frequency in seconds.
            # How often these files should be checked for changes. In case it is set
            # to 0s, it is done as often as possible. Default: 10s
            #scan_frequency: 10s
      
            # Defines the buffer size every harvester uses when fetching the file
            #harvester_buffer_size: 16384
      
            # Maximum number of bytes a single log event can have
            # All bytes after max_bytes are discarded and not sent. The default is 10MB.
            # This is especially useful for multiline log messages which can get large.
            #max_bytes: 10485760
      
            # Multiline can be used for log messages spanning multiple lines. This is common
            # for Java Stack Traces or C-Line Continuation
            #multiline:
      
              # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
              #pattern: ^\[
      
              # Defines if the pattern set under pattern should be negated or not. Default is false.
              #negate: false
      
              # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
              # that was (not) matched before or after or as long as a pattern is not matched based on negate.
              # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
              #match: after
      
              # The maximum number of lines that are combined to one event.
              # In case there are more than max_lines, the additional lines are discarded.
              # Default is 500
              #max_lines: 500
      
              # After the defined timeout, a multiline event is sent even if no new pattern was found to start a new event
              # Default is 5s.
              #timeout: 5s
      
            # Setting tail_files to true means filebeat starts reading new files at the end
            # instead of the beginning. If this is used in combination with log rotation
            # this can mean that the first entries of a new file are skipped.
            #tail_files: false
      
            # Backoff values define how aggressively filebeat crawls new files for updates
            # The default values can be used in most cases. Backoff defines how long filebeat waits
            # to check a file again after EOF is reached. Default is 1s which means the file
            # is checked every second if new lines were added. This leads to a near real time crawling.
            # Every time a new line appears, backoff is reset to the initial value.
            #backoff: 1s
      
            # Max backoff defines what the maximum backoff time is. After having backed off multiple times
            # from checking the files, the waiting time will never exceed max_backoff independent of the
            # backoff factor. Having it set to 10s means in the worst case a new line can be added to a log
            # file after having backed off multiple times, it takes a maximum of 10s to read the new line
            #max_backoff: 10s
      
            # The backoff factor defines how fast the algorithm backs off. The bigger the backoff factor,
            # the faster the max_backoff value is reached. If this value is set to 1, no backoff will happen.
            # The backoff value will be multiplied each time with the backoff_factor until max_backoff is reached
            #backoff_factor: 2
      
            # This option closes a file, as soon as the file name changes.
            # This config option is recommended on windows only. Filebeat keeps the files it's reading open. This can cause
            # issues when the file is removed, as the file will not be fully removed until also Filebeat closes
            # the reading. Filebeat closes the file handler after ignore_older. During this time no new file with the
            # same name can be created. Turning this feature on the other hand can lead to loss of data
            # on rotate files. It can happen that after file rotation the beginning of the new
            # file is skipped, as the reading starts at the end. We recommend to leave this option on false
            # but lower the ignore_older value to release files faster.
            #force_close_files: false
      
          # fusion log
          -
            paths:
             - /srv/fusion-ap/*/logs/logback/*/*/*
            document_type: fusion_log
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          # Additional prospector
          #-
            # Configuration to use stdin input
            #input_type: stdin
      
        # General filebeat configuration options
        #
        # Event count spool threshold - forces network flush if exceeded
        spool_size: 1024
      
        # Enable async publisher pipeline in filebeat (Experimental!)
        #publish_async: false
      
        # Defines how often the spooler is flushed. After idle_timeout the spooler is
        # flushed even though spool_size is not reached.
        idle_timeout: 5s
      
        # Name of the registry file. Per default it is put in the current working
        # directory. In case the working directory is changed when running
        # filebeat again, indexing starts from the beginning again.
        registry_file: /etc/registry/mark
      
        # Full Path to directory with additional prospector configuration files. Each file must end with .yml
        # These config files must have the full filebeat config part inside, but only
        # the prospector part is processed. All global options like spool_size are ignored.
        # The config_dir MUST point to a different directory than where the main filebeat config file is in.
        #config_dir:
      
      ###############################################################################
      ############################# Libbeat Config ##################################
      # Base config file used by all other beats for using libbeat features
      
      ############################# Output ##########################################
      
      # Configure what outputs to use when sending the data collected by the beat.
      # Multiple outputs may be used.
      output:
      
        ### Elasticsearch as output
        #elasticsearch:
          # Array of hosts to connect to.
          # Scheme and port can be left out and will be set to the default (http and 9200)
          # In case you specify an additional path, the scheme is required: http://localhost:9200/path
          # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
          #hosts: ["192.168.159.95:9200"]
      
          # Optional protocol and basic auth credentials.
          #protocol: "https"
          #username: "admin"
          #password: "s3cr3t"
      
          # Number of workers per Elasticsearch host.
          #worker: 1
      
          # Optional index name. The default is "filebeat" and generates
          # [filebeat-]YYYY.MM.DD keys.
          #index: "filebeat"
      
          # A template is used to set the mapping in Elasticsearch
          # By default template loading is disabled and no template is loaded.
          # These settings can be adjusted to load your own template or overwrite existing ones
          #template:
      
            # Template name. By default the template name is filebeat.
            #name: "filebeat"
      
            # Path to template file
            #path: "filebeat.template.json"
      
            # Overwrite existing template
            #overwrite: false
      
          # Optional HTTP Path
          #path: "/elasticsearch"
      
          # Proxy server url
          #proxy_url: http://proxy:3128
      
          # The number of times a particular Elasticsearch index operation is attempted. If
          # the indexing operation doesn't succeed after this many retries, the events are
          # dropped. The default is 3.
          #max_retries: 3
      
          # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
          # The default is 50.
          #bulk_max_size: 50
      
          # Configure http request timeout before failing a request to Elasticsearch.
          #timeout: 90
      
          # The number of seconds to wait for new events between two bulk API index requests.
          # If `bulk_max_size` is reached before this interval expires, additional bulk index
          # requests are made.
          #flush_interval: 1
      
          # Boolean that sets if the topology is kept in Elasticsearch. The default is
          # false. This option makes sense only for Packetbeat.
          #save_topology: false
      
          # The time to live in seconds for the topology information that is stored in
          # Elasticsearch. The default is 15 seconds.
          #topology_expire: 15
      
          # tls configuration. By default is off.
          #tls:
            # List of root certificates for HTTPS server verifications
            #certificate_authorities: ["/etc/pki/root/ca.pem"]
      
            # Certificate for TLS client authentication
            #certificate: "/etc/pki/client/cert.pem"
      
            # Client Certificate Key
            #certificate_key: "/etc/pki/client/cert.key"
      
            # Controls whether the client verifies server certificates and host name.
            # If insecure is set to true, all server host names and certificates will be
            # accepted. In this mode TLS based connections are susceptible to
            # man-in-the-middle attacks. Use only for testing.
            #insecure: true
      
            # Configure cipher suites to be used for TLS connections
            #cipher_suites: []
      
            # Configure curve types for ECDHE based cipher suites
            #curve_types: []
      
            # Configure minimum TLS version allowed for connection to logstash
            #min_version: 1.0
      
            # Configure maximum TLS version allowed for connection to logstash
            #max_version: 1.2
      
      
        ### Logstash as output
        logstash:
          # The Logstash hosts
          hosts: ["192.168.159.95:5000"]
      
          # Number of workers per Logstash host.
          #worker: 1
      
          # Set gzip compression level.
          #compression_level: 3
      
          # Optional load balance the events between the Logstash hosts
          #loadbalance: true
      
          # Optional index name. The default index name depends on each beat.
          # For Packetbeat the default is set to packetbeat, for Topbeat
          # to topbeat, and for Filebeat to filebeat.
          index: filebeat
      
          # Optional TLS. By default is off.
          #tls:
            # List of root certificates for HTTPS server verifications
            #certificate_authorities: ["/etc/pki/root/ca.pem"]
      
            # Certificate for TLS client authentication
            #certificate: "/etc/pki/client/cert.pem"
      
            # Client Certificate Key
            #certificate_key: "/etc/pki/client/cert.key"
      
            # Controls whether the client verifies server certificates and host name.
            # If insecure is set to true, all server host names and certificates will be
            # accepted. In this mode TLS based connections are susceptible to
            # man-in-the-middle attacks. Use only for testing.
            #insecure: true
      
            # Configure cipher suites to be used for TLS connections
            #cipher_suites: []
      
            # Configure curve types for ECDHE based cipher suites
            #curve_types: []
      
      
        ### File as output
        #file:
          # Path to the directory where to save the generated files. The option is mandatory.
          #path: "/tmp/filebeat"
      
          # Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
          #filename: filebeat
      
          # Maximum size in kilobytes of each file. When this size is reached, the files are
          # rotated. The default value is 10 MB.
          #rotate_every_kb: 10000
      
          # Maximum number of files under path. When this number of files is reached, the
          # oldest file is deleted and the rest are shifted from last to first. The default
          # is 7 files.
          #number_of_files: 7
      
      
        ### Console output
        # console:
          # Pretty print json event
          #pretty: false
      
      
      ############################# Shipper #########################################
      
      shipper:
        # The name of the shipper that publishes the network data. It can be used to group
        # all the transactions sent by a single shipper in the web interface.
        # If this option is not defined, the hostname is used.
        #name:
      
        # The tags of the shipper are included in their own field with each
        # transaction published. Tags make it easy to group servers by different
        # logical properties.
        #tags: ["service-X", "web-tier"]
      
        # Uncomment the following if you want to ignore transactions created
        # by the server on which the shipper is installed. This option is useful
        # to remove duplicates if shippers are installed on multiple servers.
        #ignore_outgoing: true
      
        # How often (in seconds) shippers are publishing their IPs to the topology map.
        # The default is 10 seconds.
        #refresh_topology_freq: 10
      
        # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
        # All the IPs will be deleted afterwards. Note, that the value must be higher than
        # refresh_topology_freq. The default is 15 seconds.
        #topology_expire: 15
      
        # Internal queue size for single events in processing pipeline
        #queue_size: 1000
      
        # Configure local GeoIP database support.
        # If no paths are configured, geoip is disabled.
        #geoip:
          #paths:
          #  - "/usr/share/GeoIP/GeoLiteCity.dat"
          #  - "/usr/local/var/GeoIP/GeoLiteCity.dat"
      
      
      ############################# Logging #########################################
      
      # There are three options for the log output: syslog, file, stderr.
      # Under Windows systems, the log files are by default sent to the file output,
      # under all other systems by default to syslog.
      logging:
      
        # Send all logging output to syslog. On Windows default is false, otherwise
        # default is true.
        #to_syslog: true
      
        # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
        # limit is reached.
        #to_files: false
      
        # To enable logging to files, to_files option has to be set to true
        files:
          # The directory where the log files will be written to.
          #path: /var/log/mybeat
      
          # The name of the files where the logs are written to.
          #name: mybeat
      
          # Configure log file size limit. If limit is reached, log file will be
          # automatically rotated
          rotateeverybytes: 10485760 # = 10MB
      
          # Number of rotated log files to keep. Oldest files will be deleted first.
          #keepfiles: 7
      
        # Enable debug output for selected components. To enable all selectors use ["*"]
        # Other available selectors are beat, publish, service
        # Multiple selectors can be chained.
        #selectors: [ ]
      
        # Sets log level. The default log level is error.
        # Available log levels are: critical, error, warning, info, debug
        #level: error
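
    • Note: the config above sets registry_file: /etc/registry/mark, while step 1 creates /home/fusion-ap/fusion-filebeat-registry. The assumption here is that each instance runs with its registry directory mapped to /etc/registry (for example via a container volume or bind mount) so that the mark file lands in the per-instance folder; if the service runs directly on the host, point registry_file at the created directory instead. Before wiring up the service, the config can be sanity-checked with Filebeat 1.x flags (a sketch; verify the flags against the installed version):
      $ filebeat -c ~/fusion-filebeat.yml -configtest
      $ filebeat -e -c ~/fusion-filebeat.yml -d "publish"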
      
    • tomcat_log
      $ mkdir -p /home/fusion-ap/tomcat-filebeat-registry
      $ vi ~/tomcat-filebeat.yml
      
    • tomcat-filebeat.yml contents
      ################### Filebeat Configuration Example #########################
      
      ############################# Filebeat ######################################
      filebeat:
        # List of prospectors to fetch data.
        prospectors:
          # Each - is a prospector. Below are the prospector specific configurations
          #-
            # Paths that should be crawled and fetched. Glob based paths.
            # To fetch all ".log" files from a specific level of subdirectories
            # /var/log/*/*.log can be used.
            # For each file found under this path, a harvester is started.
            # Make sure no file is defined twice as this can lead to unexpected behaviour.
            #paths:
              #- c:\programdata\elasticsearch\logs\*
      
            # Configure the file encoding for reading files with international characters
            # following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
            # Some sample encodings:
            #   plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
            #    hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
            #encoding: plain
      
            # Type of the files. Based on this the way the file is read is decided.
            # The different types cannot be mixed in one prospector
            #
            # Possible options are:
            # * log: Reads every line of the log file (default)
            # * stdin: Reads the standard in
            #input_type: log
      
            # Exclude lines. A list of regular expressions to match. It drops the lines that are
            # matching any regular expression from the list. The include_lines is called before
            # exclude_lines. By default, no lines are dropped.
            # exclude_lines: ["^DBG"]
      
            # Include lines. A list of regular expressions to match. It exports the lines that are
            # matching any regular expression from the list. The include_lines is called before
            # exclude_lines. By default, all the lines are exported.
            # include_lines: ["^ERR", "^WARN"]
      
            # Exclude files. A list of regular expressions to match. Filebeat drops the files that
            # are matching any regular expression from the list. By default, no files are dropped.
            # exclude_files: [".gz$"]
      
            # Optional additional fields. These field can be freely picked
            # to add additional information to the crawled log files for filtering
            #fields:
            #  level: debug
            #  review: 1
      
            # Set to true to store the additional fields as top level fields instead
            # of under the "fields" sub-dictionary. In case of name conflicts with the
            # fields added by Filebeat itself, the custom fields overwrite the default
            # fields.
            #fields_under_root: false
      
            # Ignore files which were modified more than the defined timespan in the past.
            # In case all files on your system must be read you can set this value very large.
            # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
            #ignore_older: 0
      
            # Close older closes the file handler for files which were not modified
            # for longer than close_older
            # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
            #close_older: 1h
      
            # Type to be published in the 'type' field. For Elasticsearch output,
            # the type defines the document type these entries should be stored
            # in. Default: log
            #document_type: log
      
            # Scan frequency in seconds.
            # How often these files should be checked for changes. In case it is set
            # to 0s, it is done as often as possible. Default: 10s
            #scan_frequency: 10s
      
            # Defines the buffer size every harvester uses when fetching the file
            #harvester_buffer_size: 16384
      
            # Maximum number of bytes a single log event can have
            # All bytes after max_bytes are discarded and not sent. The default is 10MB.
            # This is especially useful for multiline log messages which can get large.
            #max_bytes: 10485760
      
            # Multiline can be used for log messages spanning multiple lines. This is common
            # for Java Stack Traces or C-Line Continuation
            #multiline:
      
              # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
              #pattern: ^\[
      
              # Defines if the pattern set under pattern should be negated or not. Default is false.
              #negate: false
      
              # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
              # that was (not) matched before or after or as long as a pattern is not matched based on negate.
              # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
              #match: after
      
              # The maximum number of lines that are combined to one event.
              # In case there are more than max_lines, the additional lines are discarded.
              # Default is 500
              #max_lines: 500
      
              # After the defined timeout, a multiline event is sent even if no new pattern was found to start a new event
              # Default is 5s.
              #timeout: 5s
      
            # Setting tail_files to true means filebeat starts reading new files at the end
            # instead of the beginning. If this is used in combination with log rotation
            # this can mean that the first entries of a new file are skipped.
            #tail_files: false
      
            # Backoff values define how aggressively filebeat crawls new files for updates
            # The default values can be used in most cases. Backoff defines how long filebeat waits
            # to check a file again after EOF is reached. Default is 1s which means the file
            # is checked every second if new lines were added. This leads to a near real time crawling.
            # Every time a new line appears, backoff is reset to the initial value.
            #backoff: 1s
      
            # Max backoff defines what the maximum backoff time is. After having backed off multiple times
            # from checking the files, the waiting time will never exceed max_backoff independent of the
            # backoff factor. Having it set to 10s means in the worst case a new line can be added to a log
            # file after having backed off multiple times, it takes a maximum of 10s to read the new line
            #max_backoff: 10s
      
            # The backoff factor defines how fast the algorithm backs off. The bigger the backoff factor,
            # the faster the max_backoff value is reached. If this value is set to 1, no backoff will happen.
            # The backoff value will be multiplied each time with the backoff_factor until max_backoff is reached
            #backoff_factor: 2
      
            # This option closes a file, as soon as the file name changes.
            # This config option is recommended on windows only. Filebeat keeps the files it's reading open. This can cause
            # issues when the file is removed, as the file will not be fully removed until also Filebeat closes
            # the reading. Filebeat closes the file handler after ignore_older. During this time no new file with the
            # same name can be created. Turning this feature on the other hand can lead to loss of data
            # on rotate files. It can happen that after file rotation the beginning of the new
            # file is skipped, as the reading starts at the end. We recommend to leave this option on false
            # but lower the ignore_older value to release files faster.
            #force_close_files: false
      
          # tomcat log
          # asset
          -
            paths:
             - /srv/fusion-ap/asset/logs/localhost.*.log
            document_type: tomcat_log
            fields:
              catalog: FIN
              system: asset
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          -
            paths:
             - /srv/fusion-ap/asset/logs/catalina.*.log
            document_type: tomcat_log
            fields:
              catalog: FIN
              system: asset
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          # asset-rest
          -
            paths:
             - /srv/fusion-ap/asset-rest/logs/localhost.*.log
            document_type: tomcat_log
            fields:
              catalog: FIN
              system: asset-rest
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          -
            paths:
             - /srv/fusion-ap/asset-rest/logs/catalina.*.log
            document_type: tomcat_log
            fields:
              catalog: FIN
              system: asset-rest
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          # cust-visit
          -
            paths:
             - /srv/fusion-ap/cust-visit/logs/localhost.*.log
            document_type: tomcat_log
            fields:
              catalog: FIN
              system: cust-visit
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          -
            paths:
             - /srv/fusion-ap/cust-visit/logs/catalina.*.log
            document_type: tomcat_log
            fields:
              catalog: FIN
              system: cust-visit
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          # cust-visit-rest
          -
            paths:
             - /srv/fusion-ap/cust-visit-rest/logs/localhost.*.log
            document_type: tomcat_log
            fields:
              catalog: FIN
              system: cust-visit-rest
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          -
            paths:
             - /srv/fusion-ap/cust-visit-rest/logs/catalina.*.log
            document_type: tomcat_log
            fields:
              catalog: FIN
              system: cust-visit-rest
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          # provident
          -
            paths:
             - /srv/fusion-ap/provident/logs/localhost.*.log
            document_type: tomcat_log
            fields:
              catalog: FIN
              system: provident
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          -
            paths:
             - /srv/fusion-ap/provident/logs/catalina.*.log
            document_type: tomcat_log
            fields:
              catalog: FIN
              system: provident
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          # Additional prospector
          #-
            # Configuration to use stdin input
            #input_type: stdin
      
        # General filebeat configuration options
        #
        # Event count spool threshold - forces network flush if exceeded
        spool_size: 1024
      
        # Enable async publisher pipeline in filebeat (Experimental!)
        #publish_async: false
      
        # Defines how often the spooler is flushed. After idle_timeout the spooler is
        # flushed even though spool_size is not reached.
        idle_timeout: 5s
      
        # Name of the registry file. Per default it is put in the current working
        # directory. In case the working directory is changed when running
        # filebeat again, indexing starts from the beginning again.
        registry_file: /etc/registry/mark
      
        # Full Path to directory with additional prospector configuration files. Each file must end with .yml
        # These config files must have the full filebeat config part inside, but only
        # the prospector part is processed. All global options like spool_size are ignored.
        # The config_dir MUST point to a different directory than where the main filebeat config file is in.
        #config_dir:
      
      ###############################################################################
      ############################# Libbeat Config ##################################
      # Base config file used by all other beats for using libbeat features
      
      ############################# Output ##########################################
      
      # Configure what outputs to use when sending the data collected by the beat.
      # Multiple outputs may be used.
      output:
      
        ### Elasticsearch as output
        #elasticsearch:
          # Array of hosts to connect to.
          # Scheme and port can be left out and will be set to the default (http and 9200)
          # In case you specify an additional path, the scheme is required: http://localhost:9200/path
          # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
          #hosts: ["192.168.159.95:9200"]
      
          # Optional protocol and basic auth credentials.
          #protocol: "https"
          #username: "admin"
          #password: "s3cr3t"
      
          # Number of workers per Elasticsearch host.
          #worker: 1
      
          # Optional index name. The default is "filebeat" and generates
          # [filebeat-]YYYY.MM.DD keys.
          #index: "filebeat"
      
          # A template is used to set the mapping in Elasticsearch
          # By default template loading is disabled and no template is loaded.
          # These settings can be adjusted to load your own template or overwrite existing ones
          #template:
      
            # Template name. By default the template name is filebeat.
            #name: "filebeat"
      
            # Path to template file
            #path: "filebeat.template.json"
      
            # Overwrite existing template
            #overwrite: false
      
          # Optional HTTP Path
          #path: "/elasticsearch"
      
          # Proxy server url
          #proxy_url: http://proxy:3128
      
          # The number of times a particular Elasticsearch index operation is attempted. If
          # the indexing operation doesn't succeed after this many retries, the events are
          # dropped. The default is 3.
          #max_retries: 3
      
          # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
          # The default is 50.
          #bulk_max_size: 50
      
          # Configure http request timeout before failing a request to Elasticsearch.
          #timeout: 90
      
          # The number of seconds to wait for new events between two bulk API index requests.
          # If `bulk_max_size` is reached before this interval expires, additional bulk index
          # requests are made.
          #flush_interval: 1
      
          # Boolean that sets if the topology is kept in Elasticsearch. The default is
          # false. This option makes sense only for Packetbeat.
          #save_topology: false
      
          # The time to live in seconds for the topology information that is stored in
          # Elasticsearch. The default is 15 seconds.
          #topology_expire: 15
      
          # tls configuration. By default is off.
          #tls:
            # List of root certificates for HTTPS server verifications
            #certificate_authorities: ["/etc/pki/root/ca.pem"]
      
            # Certificate for TLS client authentication
            #certificate: "/etc/pki/client/cert.pem"
      
            # Client Certificate Key
            #certificate_key: "/etc/pki/client/cert.key"
      
            # Controls whether the client verifies server certificates and host name.
            # If insecure is set to true, all server host names and certificates will be
            # accepted. In this mode TLS based connections are susceptible to
            # man-in-the-middle attacks. Use only for testing.
            #insecure: true
      
            # Configure cipher suites to be used for TLS connections
            #cipher_suites: []
      
            # Configure curve types for ECDHE based cipher suites
            #curve_types: []
      
            # Configure minimum TLS version allowed for connection to logstash
            #min_version: 1.0
      
            # Configure maximum TLS version allowed for connection to logstash
            #max_version: 1.2
      
      
        ### Logstash as output
        logstash:
          # The Logstash hosts
          hosts: ["192.168.159.95:5001"]
      
          # Number of workers per Logstash host.
          #worker: 1
      
          # Set gzip compression level.
          #compression_level: 3
      
          # Optional load balance the events between the Logstash hosts
          #loadbalance: true
      
          # Optional index name. The default index name depends on each beat.
          # For Packetbeat the default is set to packetbeat, for Topbeat
          # to topbeat, and for Filebeat to filebeat.
          index: filebeat
      
          # Optional TLS. By default is off.
          #tls:
            # List of root certificates for HTTPS server verifications
            #certificate_authorities: ["/etc/pki/root/ca.pem"]
      
            # Certificate for TLS client authentication
            #certificate: "/etc/pki/client/cert.pem"
      
            # Client Certificate Key
            #certificate_key: "/etc/pki/client/cert.key"
      
            # Controls whether the client verifies server certificates and host name.
            # If insecure is set to true, all server host names and certificates will be
            # accepted. In this mode TLS based connections are susceptible to
            # man-in-the-middle attacks. Use only for testing.
            #insecure: true
      
            # Configure cipher suites to be used for TLS connections
            #cipher_suites: []
      
            # Configure curve types for ECDHE based cipher suites
            #curve_types: []
      
      
        ### File as output
        #file:
          # Path to the directory where to save the generated files. The option is mandatory.
          #path: "/tmp/filebeat"
      
          # Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
          #filename: filebeat
      
          # Maximum size in kilobytes of each file. When this size is reached, the files are
          # rotated. The default value is 10 MB.
          #rotate_every_kb: 10000
      
          # Maximum number of files under path. When this number of files is reached, the
          # oldest file is deleted and the rest are shifted from last to first. The default
          # is 7 files.
          #number_of_files: 7
      
      
        ### Console output
        # console:
          # Pretty print json event
          #pretty: false
      
      
      ############################# Shipper #########################################
      
      shipper:
        # The name of the shipper that publishes the network data. It can be used to group
        # all the transactions sent by a single shipper in the web interface.
        # If this option is not defined, the hostname is used.
        #name:
      
        # The tags of the shipper are included in their own field with each
        # transaction published. Tags make it easy to group servers by different
        # logical properties.
        #tags: ["service-X", "web-tier"]
      
        # Uncomment the following if you want to ignore transactions created
        # by the server on which the shipper is installed. This option is useful
        # to remove duplicates if shippers are installed on multiple servers.
        #ignore_outgoing: true
      
        # How often (in seconds) shippers are publishing their IPs to the topology map.
        # The default is 10 seconds.
        #refresh_topology_freq: 10
      
        # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
        # All the IPs will be deleted afterwards. Note, that the value must be higher than
        # refresh_topology_freq. The default is 15 seconds.
        #topology_expire: 15
      
        # Internal queue size for single events in processing pipeline
        #queue_size: 1000
      
        # Configure local GeoIP database support.
        # If no paths are configured, geoip is disabled.
        #geoip:
          #paths:
          #  - "/usr/share/GeoIP/GeoLiteCity.dat"
          #  - "/usr/local/var/GeoIP/GeoLiteCity.dat"
      
      
      ############################# Logging #########################################
      
      # There are three options for the log output: syslog, file, stderr.
      # Under Windows systems, the log files are by default sent to the file output,
      # under all other systems by default to syslog.
      logging:
      
        # Send all logging output to syslog. On Windows default is false, otherwise
        # default is true.
        #to_syslog: true
      
        # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
        # limit is reached.
        #to_files: false
      
        # To enable logging to files, to_files option has to be set to true
        files:
          # The directory where the log files will be written to.
          #path: /var/log/mybeat
      
          # The name of the files where the logs are written to.
          #name: mybeat
      
          # Configure log file size limit. If limit is reached, log file will be
          # automatically rotated
          rotateeverybytes: 10485760 # = 10MB
      
          # Number of rotated log files to keep. Oldest files will be deleted first.
          #keepfiles: 7
      
        # Enable debug output for selected components. To enable all selectors use ["*"]
        # Other available selectors are beat, publish, service
        # Multiple selectors can be chained.
        #selectors: [ ]
      
        # Sets log level. The default log level is error.
        # Available log levels are: critical, error, warning, info, debug
        #level: error
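
    • Note: the tomcat config repeats a near-identical prospector per application because each stanza attaches a different fields.system value; a single glob such as /srv/fusion-ap/*/logs/catalina.*.log would match every app but could not vary that field. A quick, illustrative way to confirm the globs actually match files before starting the service:
      $ ls /srv/fusion-ap/*/logs/catalina.*.log /srv/fusion-ap/*/logs/localhost.*.log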
      
    • payara_log
      $ mkdir -p /home/fusion-ap/payara-filebeat-registry
      $ vi ~/payara-filebeat.yml
      
    • payara-filebeat.yml contents
      ################### Filebeat Configuration Example #########################
      
      ############################# Filebeat ######################################
      filebeat:
        # List of prospectors to fetch data.
        prospectors:
          # Each - is a prospector. Below are the prospector specific configurations
          #-
            # Paths that should be crawled and fetched. Glob based paths.
            # To fetch all ".log" files from a specific level of subdirectories
            # /var/log/*/*.log can be used.
            # For each file found under this path, a harvester is started.
            # Make sure no file is defined twice as this can lead to unexpected behaviour.
            #paths:
              #- c:\programdata\elasticsearch\logs\*
      
            # Configure the file encoding for reading files with international characters
            # following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
            # Some sample encodings:
            #   plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
            #    hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
            #encoding: plain
      
            # Type of the files. Based on this the way the file is read is decided.
            # The different types cannot be mixed in one prospector
            #
            # Possible options are:
            # * log: Reads every line of the log file (default)
            # * stdin: Reads the standard in
            #input_type: log
      
            # Exclude lines. A list of regular expressions to match. It drops the lines that are
            # matching any regular expression from the list. The include_lines is called before
            # exclude_lines. By default, no lines are dropped.
            # exclude_lines: ["^DBG"]
      
            # Include lines. A list of regular expressions to match. It exports the lines that are
            # matching any regular expression from the list. The include_lines is called before
            # exclude_lines. By default, all the lines are exported.
            # include_lines: ["^ERR", "^WARN"]
      
            # Exclude files. A list of regular expressions to match. Filebeat drops the files that
            # are matching any regular expression from the list. By default, no files are dropped.
            # exclude_files: [".gz$"]
      
            # Optional additional fields. These field can be freely picked
            # to add additional information to the crawled log files for filtering
            #fields:
            #  level: debug
            #  review: 1
      
            # Set to true to store the additional fields as top level fields instead
            # of under the "fields" sub-dictionary. In case of name conflicts with the
            # fields added by Filebeat itself, the custom fields overwrite the default
            # fields.
            #fields_under_root: false
      
            # Ignore files which were modified more than the defined timespan in the past.
            # In case all files on your system must be read you can set this value very large.
            # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
            #ignore_older: 0
      
            # Close older closes the file handler for files which were not modified
            # for longer than close_older
            # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
            #close_older: 1h
      
            # Type to be published in the 'type' field. For Elasticsearch output,
            # the type defines the document type these entries should be stored
            # in. Default: log
            #document_type: log
      
            # Scan frequency in seconds.
            # How often these files should be checked for changes. In case it is set
            # to 0s, it is done as often as possible. Default: 10s
            #scan_frequency: 10s
      
            # Defines the buffer size every harvester uses when fetching the file
            #harvester_buffer_size: 16384
      
            # Maximum number of bytes a single log event can have
            # All bytes after max_bytes are discarded and not sent. The default is 10MB.
            # This is especially useful for multiline log messages which can get large.
            #max_bytes: 10485760
      
            # Multiline can be used for log messages spanning multiple lines. This is common
            # for Java Stack Traces or C-Line Continuation
            #multiline:
      
              # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
              #pattern: ^\[
      
              # Defines if the pattern set under pattern should be negated or not. Default is false.
              #negate: false
      
              # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
              # that was (not) matched before or after or as long as a pattern is not matched based on negate.
              # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
              #match: after
      
              # The maximum number of lines that are combined to one event.
              # In case there are more than max_lines, the additional lines are discarded.
              # Default is 500
              #max_lines: 500
      
              # After the defined timeout, a multiline event is sent even if no new pattern was found to start a new event
              # Default is 5s.
              #timeout: 5s
      
            # Setting tail_files to true means filebeat starts reading new files at the end
            # instead of the beginning. If this is used in combination with log rotation
            # this can mean that the first entries of a new file are skipped.
            #tail_files: false
      
            # Backoff values define how aggressively filebeat crawls new files for updates
            # The default values can be used in most cases. Backoff defines how long filebeat waits
            # to check a file again after EOF is reached. Default is 1s which means the file
            # is checked every second if new lines were added. This leads to a near real time crawling.
            # Every time a new line appears, backoff is reset to the initial value.
            #backoff: 1s
      
            # Max backoff defines what the maximum backoff time is. After having backed off multiple times
            # from checking the files, the waiting time will never exceed max_backoff independent of the
            # backoff factor. Having it set to 10s means in the worst case a new line can be added to a log
            # file after having backed off multiple times, it takes a maximum of 10s to read the new line
            #max_backoff: 10s
      
            # The backoff factor defines how fast the algorithm backs off. The bigger the backoff factor,
            # the faster the max_backoff value is reached. If this value is set to 1, no backoff will happen.
            # The backoff value will be multiplied each time with the backoff_factor until max_backoff is reached
            #backoff_factor: 2
      
            # This option closes a file, as soon as the file name changes.
            # This config option is recommended on windows only. Filebeat keeps the files it's reading open. This can cause
            # issues when the file is removed, as the file will not be fully removed until also Filebeat closes
            # the reading. Filebeat closes the file handler after ignore_older. During this time no new file with the
            # same name can be created. Turning this feature on the other hand can lead to loss of data
            # on rotate files. It can happen that after file rotation the beginning of the new
            # file is skipped, as the reading starts at the end. We recommend to leave this option on false
            # but lower the ignore_older value to release files faster.
            #force_close_files: false
      
          # payara log
          # financial
          -
            paths:
             - /srv/fusion-ap/financial/logs/server.log*
            document_type: payara_log
            fields:
              catalog: FIN
              system: financial
            harvester_buffer_size: 1024
            tail_files: false
            backoff: 1s
      
          # Additional prospector
          #-
            # Configuration to use stdin input
            #input_type: stdin
      
        # General filebeat configuration options
        #
        # Event count spool threshold - forces network flush if exceeded
        spool_size: 1024
      
        # Enable async publisher pipeline in filebeat (Experimental!)
        #publish_async: false
      
        # Defines how often the spooler is flushed. After idle_timeout the spooler is
        # flushed even though spool_size is not reached.
        idle_timeout: 5s
      
        # Name of the registry file. Per default it is put in the current working
        # directory. In case the working directory is changed when running
        # filebeat again, indexing starts from the beginning again.
        registry_file: /etc/registry/mark
      
        # Full Path to directory with additional prospector configuration files. Each file must end with .yml
        # These config files must have the full filebeat config part inside, but only
        # the prospector part is processed. All global options like spool_size are ignored.
        # The config_dir MUST point to a different directory than where the main filebeat config file is in.
        #config_dir:
      
      ###############################################################################
      ############################# Libbeat Config ##################################
      # Base config file used by all other beats for using libbeat features
      
      ############################# Output ##########################################
      
      # Configure what outputs to use when sending the data collected by the beat.
      # Multiple outputs may be used.
      output:
      
        ### Elasticsearch as output
        #elasticsearch:
          # Array of hosts to connect to.
          # Scheme and port can be left out and will be set to the default (http and 9200)
          # In case you specify an additional path, the scheme is required: http://localhost:9200/path
          # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
          #hosts: ["192.168.159.95:9200"]
      
          # Optional protocol and basic auth credentials.
          #protocol: "https"
          #username: "admin"
          #password: "s3cr3t"
      
          # Number of workers per Elasticsearch host.
          #worker: 1
      
          # Optional index name. The default is "filebeat" and generates
          # [filebeat-]YYYY.MM.DD keys.
          #index: "filebeat"
      
          # A template is used to set the mapping in Elasticsearch
          # By default template loading is disabled and no template is loaded.
          # These settings can be adjusted to load your own template or overwrite existing ones
          #template:
      
            # Template name. By default the template name is filebeat.
            #name: "filebeat"
      
            # Path to template file
            #path: "filebeat.template.json"
      
            # Overwrite existing template
            #overwrite: false
      
          # Optional HTTP Path
          #path: "/elasticsearch"
      
          # Proxy server url
          #proxy_url: http://proxy:3128
      
          # The number of times a particular Elasticsearch index operation is attempted. If
          # the indexing operation doesn't succeed after this many retries, the events are
          # dropped. The default is 3.
          #max_retries: 3
      
          # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
          # The default is 50.
          #bulk_max_size: 50
      
          # Configure http request timeout before failing a request to Elasticsearch.
          #timeout: 90
      
          # The number of seconds to wait for new events between two bulk API index requests.
          # If `bulk_max_size` is reached before this interval expires, additional bulk index
          # requests are made.
          #flush_interval: 1
      
          # Boolean that sets if the topology is kept in Elasticsearch. The default is
          # false. This option makes sense only for Packetbeat.
          #save_topology: false
      
          # The time to live in seconds for the topology information that is stored in
          # Elasticsearch. The default is 15 seconds.
          #topology_expire: 15
      
          # tls configuration. By default is off.
          #tls:
            # List of root certificates for HTTPS server verifications
            #certificate_authorities: ["/etc/pki/root/ca.pem"]
      
            # Certificate for TLS client authentication
            #certificate: "/etc/pki/client/cert.pem"
      
            # Client Certificate Key
            #certificate_key: "/etc/pki/client/cert.key"
      
            # Controls whether the client verifies server certificates and host name.
            # If insecure is set to true, all server host names and certificates will be
            # accepted. In this mode TLS based connections are susceptible to
            # man-in-the-middle attacks. Use only for testing.
            #insecure: true
      
            # Configure cipher suites to be used for TLS connections
            #cipher_suites: []
      
            # Configure curve types for ECDHE based cipher suites
            #curve_types: []
      
            # Configure the minimum TLS version allowed for the connection
            #min_version: 1.0

            # Configure the maximum TLS version allowed for the connection
            #max_version: 1.2
      
      
        ### Logstash as output
        logstash:
          # The Logstash hosts
          hosts: ["192.168.159.95:5002"]
      
          # Number of workers per Logstash host.
          #worker: 1
      
          # Set gzip compression level.
          #compression_level: 3
      
          # Optionally load balance the events between the Logstash hosts
          #loadbalance: true
      
          # Optional index name. The default index name depends on each beat:
          # for Packetbeat it is packetbeat, for Topbeat topbeat, and for
          # Filebeat filebeat.
          index: filebeat
      
          # Optional TLS. By default is off.
          #tls:
            # List of root certificates for HTTPS server verifications
            #certificate_authorities: ["/etc/pki/root/ca.pem"]
      
            # Certificate for TLS client authentication
            #certificate: "/etc/pki/client/cert.pem"
      
            # Client Certificate Key
            #certificate_key: "/etc/pki/client/cert.key"
      
            # Controls whether the client verifies server certificates and host name.
            # If insecure is set to true, all server host names and certificates will be
            # accepted. In this mode TLS based connections are susceptible to
            # man-in-the-middle attacks. Use only for testing.
            #insecure: true
      
            # Configure cipher suites to be used for TLS connections
            #cipher_suites: []
      
            # Configure curve types for ECDHE based cipher suites
            #curve_types: []
      
      
        ### File as output
        #file:
          # Path to the directory where to save the generated files. The option is mandatory.
          #path: "/tmp/filebeat"
      
          # Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
          #filename: filebeat
      
          # Maximum size in kilobytes of each file. When this size is reached, the files are
          # rotated. The default value is 10 MB.
          #rotate_every_kb: 10000
      
          # Maximum number of files under path. When this number of files is reached, the
          # oldest file is deleted and the rest are shifted from last to first. The default
          # is 7 files.
          #number_of_files: 7
      
      
        ### Console output
        # console:
          # Pretty print json event
          #pretty: false
      
      
      ############################# Shipper #########################################
      
      shipper:
        # The name of the shipper that publishes the network data. It can be used to group
        # all the transactions sent by a single shipper in the web interface.
        # If this option is not defined, the hostname is used.
        #name:
      
        # The tags of the shipper are included in their own field with each
        # transaction published. Tags make it easy to group servers by different
        # logical properties.
        #tags: ["service-X", "web-tier"]
      
        # Uncomment the following if you want to ignore transactions created
        # by the server on which the shipper is installed. This option is useful
        # to remove duplicates if shippers are installed on multiple servers.
        #ignore_outgoing: true
      
        # How often (in seconds) shippers are publishing their IPs to the topology map.
        # The default is 10 seconds.
        #refresh_topology_freq: 10
      
        # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
        # All the IPs will be deleted afterwards. Note, that the value must be higher than
        # refresh_topology_freq. The default is 15 seconds.
        #topology_expire: 15
      
        # Internal queue size for single events in processing pipeline
        #queue_size: 1000
      
        # Configure local GeoIP database support.
        # If no paths are configured, geoip is disabled.
        #geoip:
          #paths:
          #  - "/usr/share/GeoIP/GeoLiteCity.dat"
          #  - "/usr/local/var/GeoIP/GeoLiteCity.dat"
      
      
      ############################# Logging #########################################
      
      # There are three options for the log output: syslog, file, stderr.
      # On Windows systems, logs are sent to the file output by default;
      # on all other systems they go to syslog by default.
      logging:
      
        # Send all logging output to syslog. On Windows default is false, otherwise
        # default is true.
        #to_syslog: true
      
        # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
        # limit is reached.
        #to_files: false
      
        # To enable logging to files, the to_files option has to be set to true
        files:
          # The directory where the log files will be written to.
          #path: /var/log/mybeat
      
          # The name of the files where the logs are written to.
          #name: mybeat
      
          # Configure log file size limit. If limit is reached, log file will be
          # automatically rotated
          rotateeverybytes: 10485760 # = 10MB
      
          # Number of rotated log files to keep. Oldest files will be deleted first.
          #keepfiles: 7
      
        # Enable debug output for selected components. To enable all selectors use ["*"]
        # Other available selectors are beat, publish, service
        # Multiple selectors can be chained.
        #selectors: [ ]
      
        # Sets log level. The default log level is error.
        # Available log levels are: critical, error, warning, info, debug
        #level: error
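
    Optionally, each of the three yml files can be syntax-checked before it is wired into a container. A minimal sketch, assuming the prima/filebeat:5.1.1 image used below has the filebeat binary as its entrypoint (if it does not, run the same -configtest flag against a locally installed filebeat instead):
      $ docker run --rm \
        -v /home/core/payara-filebeat.yml:/filebeat.yml \
        prima/filebeat:5.1.1 -configtest -c /filebeat.yml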
      
  2. Create the sh script && run it!
    1. Add the run script under the home directory:
      $ cd ~
      $ vi run_filebeat.sh
      
    2. Contents of run_filebeat.sh:
      #!/bin/bash
      
      # Remove any previous filebeat containers so the script can be re-run safely
      docker rm -f fusion-filebeat
      docker rm -f tomcat-filebeat
      docker rm -f payara-filebeat
      
      # One container per log source. Each run:
      # - mounts that source's yml as the container's /filebeat.yml
      # - mounts its registry folder so the read marks survive restarts
      # - mounts /home/fusion-ap so filebeat can read the ap log files
      docker run --name fusion-filebeat -itd \
      --restart=always \
      -v /home/core/fusion-filebeat.yml:/filebeat.yml \
      -v /home/fusion-ap/fusion-filebeat-registry:/etc/registry \
      -v /home/fusion-ap:/srv/fusion-ap \
      prima/filebeat:5.1.1
      
      docker run --name tomcat-filebeat -itd \
      --restart=always \
      -v /home/core/tomcat-filebeat.yml:/filebeat.yml \
      -v /home/fusion-ap/tomcat-filebeat-registry:/etc/registry \
      -v /home/fusion-ap:/srv/fusion-ap \
      prima/filebeat:5.1.1
      
      docker run --name payara-filebeat -itd \
      --restart=always \
      -v /home/core/payara-filebeat.yml:/filebeat.yml \
      -v /home/fusion-ap/payara-filebeat-registry:/etc/registry \
      -v /home/fusion-ap:/srv/fusion-ap \
      prima/filebeat:5.1.1
      
    3. Make it executable and run it! (Because the docker rm -f lines remove the old containers first, the script can safely be re-run whenever the yml files change.)
      $ sudo chmod u+x run_filebeat.sh
      $ ./run_filebeat.sh
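
    After the script finishes, a quick sanity check can confirm everything is wired up. The IP and ports below come from the yml files above; nc being available on the host is an assumption:
      # all three shippers should show up as running
      $ docker ps --filter name=filebeat
      
      # tail one shipper to confirm its harvesters start publishing events
      $ docker logs -f payara-filebeat
      
      # confirm the logstash beats ports are reachable from this host
      $ nc -zv 192.168.159.95 5000
      $ nc -zv 192.168.159.95 5001
      $ nc -zv 192.168.159.95 5002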
      

done

Here is an ELK bundle set up with docker-compose on Ubuntu 16.04; anyone who needs it can download it and use it:

https://github.com/eden90267/docker-elk
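
A typical way to bring it up, assuming the repository keeps a standard docker-compose.yml at its root (check its README for the exact steps):

  $ git clone https://github.com/eden90267/docker-elk
  $ cd docker-elk
  $ docker-compose up -d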
