API docs

class tapes.registry.Registry[source]

Factory and storage location for all metrics.

Use the producer methods to create metrics. Metrics form a hierarchy; names are split on ‘.’.


counter(name)[source]

Creates or gets an existing counter.

Parameters: name – The name
Returns: The created or existing counter for the given name
gauge(name, producer)[source]

Creates or gets an existing gauge.

Parameters: 
  • name – The name
  • producer – a callable that produces the current value of the gauge
Returns: The created or existing gauge for the given name

get_stats()[source]

Retrieves the current values of the metrics associated with this registry, formatted as a dict.

The metrics form a hierarchy; their names are split on ‘.’. The returned dict is an addict dict, so you can use it either as a regular dict or via attribute access, e.g.:

>>> import tapes
>>> registry = tapes.Registry()
>>> timer = registry.timer('my.timer')
>>> stats = registry.get_stats()
>>> print(stats['my']['timer']['count'])
>>> print(stats.my.timer.count)
Returns:The values of the metrics associated with this registry

histogram(name)[source]

Creates or gets an existing histogram.

Parameters: name – The name
Returns: The created or existing histogram for the given name

meter(name)[source]

Creates or gets an existing meter.

Parameters: name – The name
Returns: The created or existing meter for the given name

timer(name)[source]

Creates or gets an existing timer.

Parameters: name – The name
Returns: The created or existing timer for the given name
tapes.meta.metered_meta(metrics, base=<type 'type'>)[source]

Creates a metaclass that will add the specified metrics at a path parametrized on the dynamic class name.

The prime use case is base classes where all subclasses need separate metrics and/or the metrics are used in base class methods, e.g., with Tornado’s RequestHandler:

import abc

import tornado.gen
import tornado.web

import tapes
from tapes.meta import metered_meta

registry = tapes.Registry()

class MyCommonBaseHandler(tornado.web.RequestHandler):
    __metaclass__ = metered_meta([
        ('latency', 'my.http.endpoints.{}.latency', registry.timer)
    ], base=abc.ABCMeta)

    @tornado.gen.coroutine
    def get(self, *args, **kwargs):
        with self.latency.time():
            yield self.get_impl(*args, **kwargs)

    @abc.abstractmethod
    def get_impl(self, *args, **kwargs):
        pass

class MyImplHandler(MyCommonBaseHandler):
    @tornado.gen.coroutine
    def get_impl(self, *args, **kwargs):
        self.finish({'stuff': 'something'})

class MyOtherImplHandler(MyCommonBaseHandler):
    @tornado.gen.coroutine
    def get_impl(self, *args, **kwargs):
        self.finish({'other stuff': 'more of something'})
This would produce two different relevant metrics,
  • my.http.endpoints.MyImplHandler.latency
  • my.http.endpoints.MyOtherImplHandler.latency

and, as an unfortunate side effect of adding it in the base class, a my.http.endpoints.MyCommonBaseHandler.latency too.

Parameters: 
  • metrics – list of (attr_name, metrics_path_template, metrics_factory) tuples
  • base – optional metaclass base, if other than type

Returns: A metaclass that populates the class with the needed metrics at paths based on the dynamic class name

class tapes.reporting.ScheduledReporter(interval, registry=None)[source]

Base class for scheduled reporters. Handles scheduling via a Thread.


Override in subclasses.

A Python Thread is used for scheduling, so whatever this ends up doing should be fast.

class tapes.reporting.http.HTTPReporter(port, registry=None)[source]

Exposes metrics via HTTP.

For web applications, you should almost certainly just use your existing framework’s capabilities. This is for applications that don’t otherwise have HTTP easily available.

class tapes.reporting.statsd.StatsdReporter(interval, host='localhost', port=8125, prefix=None, registry=None)[source]

Reporter for StatsD.

class tapes.reporting.stream.ThreadedStreamReporter(interval, stream=<open file '<stdout>', mode 'w'>, registry=None)[source]

Dumps JSON-serialized metrics to a stream at the given interval.

class tapes.reporting.tornado.TornadoScheduledReporter(interval, registry=None, io_loop=None)[source]

Scheduled reporter that uses a Tornado IOLoop for scheduling.

class tapes.reporting.tornado.statsd.TornadoStatsdReporter(interval, host='localhost', port=8125, prefix=None, registry=None)[source]

Reports to StatsD, using an IOLoop for scheduling.

class tapes.reporting.tornado.stream.TornadoStreamReporter(interval, stream=<open file '<stdout>', mode 'w'>, registry=None, io_loop=None)[source]

Writes JSON-serialized metrics to a stream, using an IOLoop for scheduling.

class tapes.distributed.registry.DistributedRegistry(socket_addr='ipc://tapes_metrics.ipc')[source]

A registry proxy that pushes metrics data to a RegistryAggregator.


Connects to the 0MQ socket and starts publishing.

class tapes.distributed.registry.RegistryAggregator(reporter, socket_addr='ipc://tapes_metrics.ipc')[source]

Aggregates multiple registry proxies and reports on the unified metrics.


start(fork=False)[source]

Starts the registry aggregator.

Parameters: fork – whether to fork a process; if False, blocks and stays in the existing process

Terminates the forked process.

Only valid if started as a fork; otherwise there is no forked process to terminate.