Package org.apache.hadoop.metrics2
| Interface Summary | |
|---|---|
| MetricsBuilder | The metrics builder interface | 
| MetricsPlugin | A fairly generic plugin interface | 
| MetricsRecord | An immutable snapshot of metrics with a timestamp | 
| MetricsSink | The metrics sink interface | 
| MetricsSource | The metrics source interface | 
| MetricsSystem | The metrics system interface | 
| MetricsSystem.Callback | The metrics system callback interface | 
| MetricsSystemMXBean | The JMX interface to the metrics system | 
| MetricsVisitor | A visitor interface for metrics | 
| Class Summary | |
|---|---|
| Metric | The immutable metric | 
| MetricCounter<T extends Number> | A generic immutable counter metric type | 
| MetricGauge<T extends Number> | A generic immutable gauge metric | 
| MetricsFilter | The metrics filter interface | 
| MetricsRecordBuilder | The metrics record builder interface | 
| MetricsSystem.AbstractCallback | Convenient abstract class for implementing callback interface | 
| MetricsTag | Immutable tag for metrics (for grouping on host/queue/username etc.) | 
| Exception Summary | |
|---|---|
| MetricsException | A general metrics exception wrapper | 
This package provides a framework for metrics instrumentation and publication.

The instrumentation of metrics just needs to implement the simple
      MetricsSource interface with a single getMetrics
      method; the consumers of metrics just need to implement the simple
      MetricsSink interface with a putMetrics
      method along with the init and flush methods.
      Producers register the metrics
      sources with a metrics system, while consumers register the sinks. A
      default metrics system is provided to marshal metrics from sources to
      sinks based on (per source/sink) configuration options. Metrics
      from getMetrics are also published and queryable via
      the standard JMX mechanism. This document targets the framework
      users. Framework developers could consult the
      design
      document for architecture and implementation notes.
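Since registered sources are also exposed through JMX, one quick way to inspect them from inside the same JVM is to query the platform MBean server. The following is only a minimal sketch; it assumes the metrics MBeans are registered under the "Hadoop" JMX domain, which may differ between versions and deployments:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class ListMetricsMBeans {
      public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // List every MBean in the (assumed) "Hadoop" domain,
        // e.g. Hadoop:service=...,name=...
        for (ObjectName name : server.queryNames(new ObjectName("Hadoop:*"), null)) {
          System.out.println(name);
        }
      }
    }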
    
The framework ships with the following sub-packages:
- org.apache.hadoop.metrics2.impl: the implementation of the default metrics system;
- org.apache.hadoop.metrics2.lib: convenience classes for writing metrics sources, including MetricMutable[Gauge*|Counter*|Stat] and MetricsRegistry;
- org.apache.hadoop.metrics2.filter: builtin filters, including GlobFilter and RegexFilter;
- org.apache.hadoop.metrics2.source: builtin sources, including JvmMetricsSource;
- org.apache.hadoop.metrics2.sink: builtin sinks, including FileSink.
      Here is a simple MetricsSource:
    class MyMetrics implements MetricsSource {
      public void getMetrics(MetricsBuilder builder, boolean all) {
        builder.addRecord("myRecord").setContext("myContext")
               .addGauge("myMetric", "My metrics description", 42);
      }
    }
In this example there are three names:
- myContext, the optional context name of the record, used for grouping and for consumers (sinks) to select on;
- myRecord, the name of the record the metrics are reported under;
- myMetric, the name of the gauge metric, reported here with the value 42 and the description "My metrics description".
Note, the boolean argument all, if true, means that the
      source should send all the metrics it defines, even if the metrics
      are unchanged since the last getMetrics call. This enables an
      optimization that avoids copying metrics that rarely change
      (e.g., the total capacity of something, which only changes when new
      resources such as nodes or disks are added).
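As a sketch of how a source might take advantage of the all flag (the dirty-flag bookkeeping below is purely illustrative and not part of the framework API):

    class CapacityMetrics implements MetricsSource {
      private long capacity;    // changes rarely, e.g., when nodes are added
      private boolean changed;  // illustrative dirty flag, not part of the API

      synchronized void setCapacity(long newCapacity) {
        capacity = newCapacity;
        changed = true;
      }

      public synchronized void getMetrics(MetricsBuilder builder, boolean all) {
        if (all || changed) {   // skip the snapshot when nothing changed
          builder.addRecord("capacityRecord")
                 .addGauge("totalCapacity", "Total capacity", capacity);
          changed = false;
        }
      }
    }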
    
Here is a simple MetricsSink:
    public class MySink implements MetricsSink {
      public void putMetrics(MetricsRecord record) {
        System.out.print(record);
      }
      public void init(SubsetConfiguration conf) {}
      public void flush() {}
    }
In this example there are three additional concepts:
- record, the immutable snapshot of metrics (a MetricsRecord) pushed to the sink;
- conf, the configuration object for this particular sink instance, passed to init;
- flush, which is called at the end of each update cycle so the sink can push any buffered metrics to its backend.
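A sink will typically read its instance options in init. The following is a minimal sketch of a file-writing sink; the filename option name and its default are assumptions for illustration, not a documented contract:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import org.apache.commons.configuration.SubsetConfiguration;

    public class MyFileSink implements MetricsSink {
      private PrintWriter writer;

      public void init(SubsetConfiguration conf) {
        // Options of this sink instance, e.g. test.sink.myfile0.filename=...
        // ("filename" is an illustrative option name for this sketch.)
        String filename = conf.getString("filename", "metrics.out");
        try {
          writer = new PrintWriter(new FileWriter(filename, true));
        } catch (IOException e) {
          throw new MetricsException("Error creating " + filename, e);
        }
      }

      public void putMetrics(MetricsRecord record) {
        writer.println(record);
      }

      public void flush() {
        writer.flush();
      }
    }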
In order to make use of our MyMetrics and MySink,
      they need to be hooked up to a metrics system. In this case (and most
      cases), the DefaultMetricsSystem would suffice.
    
    DefaultMetricsSystem.initialize("test"); // called once per application
    DefaultMetricsSystem.INSTANCE.register("MyMetrics", "my metrics description",
                                           new MyMetrics());
    Sinks are usually specified in a configuration file, say, "hadoop-metrics2-test.properties", as:
    test.sink.mysink0.class=com.example.hadoop.metrics.MySink
    The configuration syntax is:
    [prefix].[source|sink|jmx].[instance].[option]
    In the previous example, test is the prefix and
      mysink0 is an instance name.
      DefaultMetricsSystem would try to load
      hadoop-metrics2-[prefix].properties first, and if not found,
      try the default hadoop-metrics2.properties in the class path.
      Note, the [instance] is an arbitrary name to uniquely
      identify a particular sink instance. The asterisk (*) can be
      used to specify default options.
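For example, a hypothetical hadoop-metrics2-test.properties might look like the following; the period and filename options are shown for illustration and their exact names belong to the metrics system and the FileSink respectively:

    # default options for all prefixes and instances (the asterisk form)
    *.period=10
    # a sink instance named "file0" under the "test" prefix
    test.sink.file0.class=org.apache.hadoop.metrics2.sink.FileSink
    test.sink.file0.filename=test-metrics.out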
    
Consult the metrics instrumentation in
      JvmMetricsSource,
      RpcInstrumentation, etc.
      for more examples.
    
One of the features of the default metrics system is the ability to configure metrics filtering by source, context, record/tags and metrics. The least expensive way to filter out metrics is at the source level, e.g., filtering out a source named "MyMetrics"; the most expensive is per-metric filtering.
Here are some examples:
    test.sink.file0.class=org.apache.hadoop.metrics2.sink.FileSink
    test.sink.file0.context=foo
    In this example, we configured one sink instance that would
      accept metrics from context foo only.
    
    *.source.filter.class=org.apache.hadoop.metrics2.filter.GlobFilter
    test.*.source.filter.include=foo
    test.*.source.filter.exclude=bar
In this example, we specify a source filter that includes source
      foo and excludes bar. When only include
      patterns are specified, the filter operates in whitelisting mode,
      where only matched sources are included. Likewise, when only exclude
      patterns are specified, only matched sources are excluded. When both
      patterns are present, sources matched by neither pattern are included
      as well. Note, the include patterns have precedence over the exclude
      patterns.
    
Similarly, you can specify the record.filter and
      metrics.filter options, which operate at record and metric
      level, respectively. Filters can be combined to optimize
      the filtering efficiency.
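A sketch of such a combination, following the option names and the pattern of the examples above (the record and metric names are taken from the MyMetrics examples; treat the exact keys as illustrative):

    *.record.filter.class=org.apache.hadoop.metrics2.filter.GlobFilter
    *.metrics.filter.class=org.apache.hadoop.metrics2.filter.GlobFilter
    # keep only myRecord records ...
    test.*.record.filter.include=myRecord
    # ... and drop one noisy metric within them
    test.*.metrics.filter.exclude=myGauge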
Here is a more complete source that uses the convenience classes in org.apache.hadoop.metrics2.lib, where an instrumentation interface (MyInstrumentation) is backed by mutable metrics and a MetricsRegistry:

    class MyMetrics extends MyInstrumentation implements MetricsSource {
      final MetricsRegistry registry = new MetricsRegistry("myRecord");
      final MetricMutableGaugeInt gauge0 =
          registry.newGauge("myGauge", "my gauge description", 0);
      final MetricMutableCounterLong counter0 =
          registry.newCounter("myCounter", "my metric description", 0L);
      final MetricMutableStat stat0 =
          registry.newStat("myStat", "my stat description", "ops", "time");
      @Override public void setGauge0(int value) { gauge0.set(value); }
      @Override public void incrCounter0() { counter0.incr(); }
      @Override public void addStat0(long elapsed) { stat0.add(elapsed); }
      public void getMetrics(MetricsBuilder builder, boolean all) {
        registry.snapshot(builder.addRecord(registry.name()), all);
      }
    }
    
    Note, in this example we introduced the following:
- MyInstrumentation, usually an abstract class (or interface) that defines the instrumentation interface (setGauge0, incrCounter0, addStat0, etc.) so that callers are decoupled from the metrics implementation;
- MetricsRegistry, a convenience class that manages a collection of mutable metrics and snapshots them into a record builder;
- MetricMutable[Gauge*|Counter*|Stat], mutable metric classes with convenient update methods and change-aware snapshot. The MetricMutableStat in particular provides a way to measure the latency and throughput of an operation. In this particular case, it produces a long counter "myStat_num_ops" and a double gauge "myStat_avg_time" when snapshotted.
      Users of the previous metrics system would notice the lack of a
      context prefix in the configuration examples. The new
      metrics system decouples the concept of context (for grouping) from the
      implementation, whereas previously a particular context object did the
      updating and publishing of metrics, which causes problems when you want
      a single context to be consumed by multiple backends. You would also
      have to configure an implementation instance per context, even if you
      have a backend that can handle multiple contexts (file, ganglia, etc.):
    
Before:

    context1.class=org.apache.hadoop.metrics.file.FileContext
    context2.class=org.apache.hadoop.metrics.file.FileContext
    ...
    contextn.class=org.apache.hadoop.metrics.file.FileContext

After:

    myprefix.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
In the new metrics system, you can simulate the previous behavior by using the context option in the sink options like the following:
Before:

    context0.class=org.apache.hadoop.metrics.file.FileContext
    context0.fileName=context0.out
    context1.class=org.apache.hadoop.metrics.file.FileContext
    context1.fileName=context1.out
    ...
    contextn.class=org.apache.hadoop.metrics.file.FileContext
    contextn.fileName=contextn.out

After:

    myprefix.sink.*.class=org.apache.hadoop.metrics2.sink.FileSink
    myprefix.sink.file0.context=context0
    myprefix.sink.file0.filename=context0.out
    myprefix.sink.file1.context=context1
    myprefix.sink.file1.filename=context1.out
    ...
    myprefix.sink.filen.context=contextn
    myprefix.sink.filen.filename=contextn.out
This sends metrics of a particular context to a particular backend. Note,
      myprefix is an arbitrary prefix for configuration grouping;
      typically it is the name of a particular process
      (namenode, jobtracker, etc.)