
The LTTng Documentation

Copyright © 2014 The LTTng Project

This work is licensed under a
Creative Commons Attribution 4.0 International License.

Welcome!

Welcome to the LTTng Documentation!

The Linux Trace Toolkit: next generation is an open source system software package for correlated tracing of the Linux kernel, user applications and libraries. LTTng consists of kernel modules (for Linux kernel tracing) and dynamically loaded libraries (for user application and library tracing). It is controlled by a session daemon, which receives commands from a command line interface.

Convention

Function and argument names, variable names, command names, file system paths, file names and other precise strings are written using a monospaced typeface in this document. An italic word within such a block is a placeholder, usually described in the following sentence.

Practical tips and sidenotes are given throughout the document using a blue background:

Tip: Make sure you read the tips.

Terminal boxes are used to show command lines:

echo This is a terminal box

Typical command prompts, like $ and #, are not shown in terminal boxes to make copy/paste operations easier, especially for multiline commands which may be copied and pasted as is in a user's terminal. Commands to be executed as a root user begin with sudo.

Target audience

The material of this documentation is appropriate for intermediate to advanced software developers working in a Linux environment who are interested in efficient software tracing. LTTng may also be worth a try for students interested in the inner mechanics of their systems.

Readers who do not have a programming background may wish to skip everything related to instrumentation, which requires, most of the time, some C language skills.

Note to readers: This is an open documentation: its source is available in a public Git repository. Should you find any error in the contents of this text, any grammatical mistake or any dead link, we would be very grateful if you would file a GitHub issue for it or, even better, contribute a patch to this documentation using a GitHub pull request.

Chapter descriptions

What follows is a list of brief descriptions of this documentation's chapters. The latter are ordered in such a way as to make the reading as linear as possible.

Nuts and bolts explains the rudiments of software tracing and the rationale behind the LTTng project.
Installing LTTng is divided into sections describing the steps needed to get a working installation of LTTng packages for common Linux distributions and from its source.
Getting started is a very concise guide to get started quickly with LTTng kernel and user space tracing. This chapter is recommended if you're new to LTTng or software tracing in general.
Understanding LTTng deals with some core concepts and components of the LTTng suite. Understanding those is important since the next chapter assumes you're familiar with them.
Using LTTng is a complete user guide of the LTTng project. It shows in great detail how to instrument user applications and the Linux kernel, how to control tracing sessions using the lttng command line tool, and miscellaneous practical use cases.
Reference contains references of LTTng components, like links to online manpages and various APIs.

We recommend that you read the above chapters in this order, although some of them may be skipped depending on your situation. You may skip Nuts and bolts if you're familiar with tracing and LTTng. Also, you may jump over Installing LTTng if LTTng is already properly installed on your target system.

Acknowledgements

A few people made the online LTTng Documentation possible.

Philippe Proulx wrote and formatted most of the text. Daniel U. Thibault, from the DRDC, wrote an open guide called LTTng: The Linux Trace Toolkit Next Generation — A Comprehensive User's Guide (version 2.3 edition) which was mostly used to complete parts of the Understanding LTTng chapter and for a few passages here and there. The whole EfficiOS team (Christian Babeux, Antoine Busque, Julien Desfossez, Mathieu Desnoyers, Jérémie Galarneau and David Goulet) made essential reviews of the whole document.

We sincerely thank everyone who helped make this documentation whatit is. We hope you enjoy reading it as much as we did writing it.

What's new in LTTng 2.6?

Most of the changes of LTTng 2.6 are bug fixes, making the toolchain more stable than ever before. Still, LTTng 2.6 adds some interesting features.

LTTng 2.5 already supported the instrumentation and tracing of Java applications through java.util.logging (JUL). LTTng 2.6 goes one step further by supporting Apache log4j 1.2. The new log4j domain is selected using the --log4j option in various commands of the lttng tool.
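
For example, assuming an application whose log4j logger is named org.example.MyLogger (a placeholder name), its events could be enabled with commands along these lines:

lttng enable-event --log4j org.example.MyLogger
lttng enable-event --log4j --all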

LTTng-modules has supported system call tracing for a long time, but until now, it was only possible to record either all of them, or none of them. LTTng 2.6 allows the user to record only a specific subset of system call events, e.g.:

lttng enable-event --kernel --syscall open,fork,chdir,pipe

Finally, the lttng command line tool can not only communicate with humans as it used to do, but also with machines, thanks to its new machine interface feature.
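
As a sketch, assuming LTTng 2.6 or later, prefixing a command with the machine interface option should produce XML output that is easy to parse from scripts (check lttng --help on your version to confirm the option is available):

lttng --mi xml list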

To learn more about the new features of LTTng 2.6, see this release announcement.

Nuts and bolts

What is LTTng? As its name suggests, the Linux Trace Toolkit: next generation is a modern toolkit for tracing Linux systems and applications. So your first question might rather be: what is tracing?

As the history of software engineering progressed and led to what we now take for granted—complex, numerous and interdependent software applications running in parallel on sophisticated operating systems like Linux—the authors of such components, or software developers, began feeling a natural urge to have tools to ensure the robustness and good performance of their masterpieces.

One major achievement in this field is, inarguably, the GNU debugger (GDB), which is an essential tool for developers to find and fix bugs. But even the best debugger won't help make your software run faster, and nowadays, faster software means either more work done by the same hardware, or cheaper hardware for the same work.

A profiler is often the tool of choice to identify performance bottlenecks. Profiling is suitable to identify where performance is lost in a given software; the profiler outputs a profile, a statistical summary of observed events, which you may use to know which functions took the most time to execute. However, a profiler won't report why some identified functions are the bottleneck. Also, bottlenecks might only occur when specific conditions are met. For a thorough investigation of software performance issues, a history of execution, with historical values of chosen variables, is essential. This is where tracing comes in handy.

Tracing is a technique used to understand what goes on in a running software system. The software used for tracing is called a tracer, which is conceptually similar to a tape recorder. When recording, specific points placed in the software source code generate events that are saved on a giant tape: a trace file. Both user applications and the operating system may be traced at the same time, opening the possibility of resolving a wide range of problems that are otherwise extremely challenging.

Tracing is often compared to logging. However, tracers and loggers are two different types of tools, serving two different purposes. Tracers are designed to record much lower-level events that occur much more frequently than log messages, often in the thousands per second range, with very little execution overhead. Logging is more appropriate for very high-level analysis of less frequent events: user accesses, exceptional conditions (e.g., errors, warnings), database transactions, instant messaging communications, etc. More formally, logging is one of several use cases that can be accomplished with tracing.

The list of recorded events inside a trace file may be read manually like a log file for the maximum level of detail, but it is generally much more interesting to perform application-specific analyses to produce reduced statistics and graphs that are useful to resolve a given problem. Trace viewers and analysers are specialized tools which achieve this.

So, in the end, this is what LTTng is: a powerful, open source set of tools to trace the Linux kernel and user applications. LTTng is composed of several components actively maintained and developed by its community.

Excluding proprietary solutions, a few competing software tracers exist for Linux. ftrace is the de facto function tracer of the Linux kernel. strace is able to record all system calls made by a user process. SystemTap is a Linux kernel and user space tracer which uses custom user scripts to produce plain text traces. sysdig also uses scripts, written in Lua, to trace and analyze the Linux kernel.

The main distinctive features of LTTng are that it produces correlated kernel and user space traces, and that it does so with the lowest overhead among comparable solutions. It produces trace files in the CTF format, an optimized file format for production and analysis of multi-gigabyte data. LTTng is the result of close to 10 years of active development by a community of passionate developers. It is currently available on all major desktop, server, and embedded Linux distributions.

The main interface for tracing control is a single command line tool named lttng. The latter can create several tracing sessions, enable/disable events on the fly, filter them efficiently with custom user expressions, start/stop tracing and do much more. Traces can be recorded on disk or sent over the network, kept totally or partially, and viewed once tracing is inactive or in real time.

Install LTTng now and start tracing!

Installing LTTng

LTTng is a set of software components which interact to allow instrumenting the Linux kernel and user applications and controlling tracing sessions (starting/stopping tracing, enabling/disabling events, etc.). Those components are bundled into the following packages:

LTTng-tools: Libraries and command line interface to control tracing sessions
LTTng-modules: Linux kernel modules allowing Linux to be traced using LTTng
LTTng-UST: User space tracing library

Most distributions mark the LTTng-modules and LTTng-UST packages as optional. In the following sections, we always provide the steps to install all three, but be aware that LTTng-modules is only required if you intend to trace the Linux kernel and LTTng-UST is only required if you intend to trace user space applications.

This chapter shows how to install the above packages on a Linux system. The easiest way is to use the package manager of the system's distribution (desktop or embedded). Support is also available for enterprise distributions, such as Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES). Otherwise, you can build the LTTng packages from source.

Desktop distributions

Official and unofficial LTTng packages are available for the major Linux desktop distributions: Ubuntu, Fedora, Debian, openSUSE (and other RPM-based distributions) and Arch Linux. LTTng is regularly tested on those. Should any issue arise when following the procedures below, please inform the community about it.

Ubuntu

The following steps apply to Ubuntu ≥ 12.04. For previous releases, you will need to build and install LTTng from source, as no Ubuntu packages were available before version 12.04.

Two repository types can provide LTTng packages for Ubuntu: official repositories and PPA.

Official repositories
To install LTTng from the official Ubuntu repositories, simply use apt-get:

sudo apt-get install lttng-tools
sudo apt-get install lttng-modules-dkms
sudo apt-get install liblttng-ust-dev

PPA
The LTTng PPA offers the latest stable versions of LTTng packages. To get packages from the PPA, follow these steps:

sudo apt-add-repository ppa:lttng/ppa
sudo apt-get update
sudo apt-get install lttng-tools
sudo apt-get install lttng-modules-dkms
sudo apt-get install liblttng-ust-dev

Fedora

Starting from Fedora 17, LTTng-tools and LTTng-UST packages are officially available using yum:

sudo yum install lttng-tools
sudo yum install lttng-ust
sudo yum install lttng-ust-devel

LTTng-modules still needs to be built and installed from source. For that, make sure that the kernel-devel package is already installed beforehand:

sudo yum install kernel-devel

Proceed to fetch LTTng-modules' source, then build and install it as follows:

KERNELDIR=/usr/src/kernels/$(uname -r) make
sudo make modules_install

Debian

Debian wheezy (stable) and previous versions are not supported; you will need to build and install LTTng packages from source for those.

Debian jessie (testing) and sid (unstable) have everything you need:

sudo apt-get install lttng-tools
sudo apt-get install lttng-modules-dkms
sudo apt-get install liblttng-ust-dev

openSUSE/RPM

openSUSE has had LTTng packages since version 12.3. To install LTTng, you first need to add an entry to your repository configuration. All LTTng repositories are available here. For example, the following command adds the LTTng repository for openSUSE 13.1:

sudo zypper addrepo http://download.opensuse.org/repositories/devel:/tools:/lttng/openSUSE_13.1/devel:tools:lttng.repo

Then, refresh the package database:

sudo zypper refresh

and install lttng-tools, lttng-modules and lttng-ust-devel:

sudo zypper install lttng-tools
sudo zypper install lttng-modules
sudo zypper install lttng-ust-devel

Arch Linux

LTTng packages are available in the AUR under the following names: lttng-tools, lttng-modules and lttng-ust.

The three LTTng packages can be installed using the following Yaourt commands:

yaourt -S lttng-tools
yaourt -S lttng-modules
yaourt -S lttng-ust

If you're living on the edge, the AUR also contains the latest Git master branch for each of those packages: lttng-tools-git, lttng-modules-git and lttng-ust-git.

Embedded distributions

Some developers may be interested in tracing the Linux kernel and user space applications running on embedded systems. LTTng is packaged by two popular embedded Linux distributions: Buildroot and OpenEmbedded/Yocto.

Buildroot

LTTng packages in Buildroot are lttng-tools, lttng-modules and lttng-libust.

To enable them, start the Buildroot configuration menu as usual:

make menuconfig

In:

Kernel: make sure Linux kernel is enabled
Toolchain: make sure the following options are enabled:

Enable large file (files > 2GB) support
Enable WCHAR support

In Target packages/Debugging, profiling and benchmark, enable lttng-modules and lttng-tools. In Target packages/Libraries/Other, enable lttng-libust.

OpenEmbedded/Yocto

LTTng recipes are available in the openembedded-core layer of OpenEmbedded:

lttng-tools

lttng-modules

lttng-ust


Using BitBake, the simplest way to include LTTng recipes in your target image is to add them to IMAGE_INSTALL_append in conf/local.conf:

IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"


If you're using Hob, click Edit image recipe once you have selected a machine and an image recipe. Then, in the All recipes tab, search for lttng and you should find and be able to include the three LTTng recipes.

Enterprise distributions (RHEL,SLES)

To install LTTng on enterprise Linux distributions (such as RHEL and SLES), please see EfficiOS Enterprise Packages.

Building from source

As previously stated, LTTng is shipped as three packages: LTTng-tools, LTTng-modules and LTTng-UST. LTTng-tools contains everything needed to control tracing sessions, while LTTng-modules is only needed for Linux kernel tracing and LTTng-UST is only needed for user space tracing.

The tarballs are available in the Download section of the LTTng website.

Please refer to the README.md files provided by each package to properly build and install them.

Tip: The aforementioned README.md files are rendered as rich text when viewed on GitHub.

If you're using Ubuntu, executing the following Bash script will install the appropriate dependencies, clone the LTTng Git repositories, build the projects, and install them. The sources will be cloned into ~/src. Your user needs to be a sudoer for the install steps to be completed.

#!/bin/bash

mkdir ~/src
cd ~/src
sudo apt-get update
sudo apt-get -y install build-essential libtool flex bison \
libpopt-dev uuid-dev libglib2.0-dev autoconf \
git libxml2-dev
git clone git://git.lttng.org/lttng-ust.git
git clone git://git.lttng.org/lttng-modules.git
git clone git://git.lttng.org/lttng-tools.git
git clone git://git.lttng.org/userspace-rcu.git
git clone git://git.efficios.com/babeltrace.git

cd userspace-rcu
./bootstrap && ./configure && make -j 4 && sudo make install
sudo ldconfig

cd ../lttng-ust
./bootstrap && ./configure && make -j 4 && sudo make install
sudo ldconfig

cd ../lttng-modules
make && sudo make modules_install
sudo depmod -a

cd ../lttng-tools
./bootstrap && ./configure && make -j 4 && sudo make install
sudo ldconfig
sudo cp extras/lttng-bash_completion /etc/bash_completion.d/lttng

cd ../babeltrace
./bootstrap && ./configure && make -j 4 && sudo make install
sudo ldconfig


Getting started with LTTng

This is a small guide to get started quickly with LTTng kernel and user space tracing. For intermediate to advanced use cases and a more thorough understanding of LTTng, see Using LTTng and Understanding LTTng.

Before reading this guide, make sure LTTng is installed. You will at least need LTTng-tools. Also install LTTng-modules for tracing the Linux kernel and LTTng-UST for tracing your own user space applications. When your traces are written and complete, the Viewing and analyzing your traces section of this chapter will help you analyze the recorded events.

Tracing the Linux kernel

Make sure LTTng-tools and LTTng-modules packages are installed.

Since you're about to trace the Linux kernel itself, let's look at the available kernel events using the lttng tool, which has a Git-like command line structure:

lttng list --kernel

Before tracing, you need to create a session:

sudo lttng create my-session


Tip: You can avoid using sudo in the previous and following commands if your user is a member of the tracing group.
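
For instance, on most distributions you could add your user to the tracing group with the standard user management tools (log out and back in for the change to take effect); this is a generic sketch, not an LTTng-specific command:

sudo usermod --append --groups tracing $USER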

my-session is the tracing session name and could be anything you like. auto will be used if omitted.

Let's now enable some events for this session:

sudo lttng enable-event --kernel sched_switch,sched_process_fork

or you might want to simply enable all available kernel events (beware that trace files will grow rapidly when doing this):

sudo lttng enable-event --kernel --all

Start tracing:

sudo lttng start

By default, traces are saved in ~/lttng-traces/name-date-time, where name is the session name.
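
If you prefer another location, the trace output path may be chosen when creating the session; for example (the path is arbitrary):

sudo lttng create my-session --output=/tmp/my-kernel-trace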

When you're done tracing:

sudo lttng stop
sudo lttng destroy

Although destroy looks scary here, it doesn't actually destroy the recorded trace files: it only destroys the tracing session.

What's next? Have a look at Viewing and analyzing your traces to view and analyze the trace you just recorded.

Tracing your own user application

The previous section helped you create a trace out of Linux kernel events. This section steps you through a simple example showing you how to trace a Hello world program written in C.

Make sure LTTng-tools and LTTng-UST packages are installed.

Tracing is just like having printf() calls at specific locations of your source code, albeit LTTng is much faster and more flexible than printf(). In the LTTng realm, tracepoint() is analogous to printf().

Unlike printf(), though, tracepoint() does not use a format string to know the types of its arguments: the formats of all tracepoints must be defined before using them. So before even writing our Hello world program, we need to define the format of our tracepoint. This is done by writing a template file, with a name usually ending with the .tp extension (for tracepoint), which the lttng-gen-tp tool (shipped with LTTng-UST) will use to generate an object file (along with a .c file) and a header to be included in our application source code.

Here's the whole flow:



The template file format is a list of tracepoint definitions and other optional definition entries which we will skip for this quickstart. Each tracepoint is defined using the TRACEPOINT_EVENT() macro. For each tracepoint, you must provide:

a provider name, which is the "scope" of this tracepoint (this usually includes the company and project names)
a tracepoint name
a list of arguments for the eventual tracepoint() call, each item being:

the argument C type
the argument name

a list of fields, which will be the actual fields of the recorded events for this tracepoint
Here's a simple tracepoint definition example with two arguments: an integer and a string:

TRACEPOINT_EVENT(
hello_world,
my_first_tracepoint,
TP_ARGS(
int, my_integer_arg,
char*, my_string_arg
),
TP_FIELDS(
ctf_string(my_string_field, my_string_arg)
ctf_integer(int, my_integer_field, my_integer_arg)
)
)


The exact syntax is well explained in the C application instrumenting guide of the Using LTTng chapter, as well as in the LTTng-UST manpage.

Save the above snippet as hello-tp.tp and run:

lttng-gen-tp hello-tp.tp

The following files will be created next to hello-tp.tp:

hello-tp.c

hello-tp.o

hello-tp.h


hello-tp.o is the compiled object file of hello-tp.c.

Now, by including hello-tp.h in your own application, you may use the tracepoint defined above by properly referring to it when calling tracepoint():

#include <stdio.h>
#include "hello-tp.h"

int main(int argc, char* argv[])
{
int x;

puts("Hello, World!\nPress Enter to continue...");

/* The following getchar() call is only placed here for the purpose
* of this demonstration, for pausing the application in order for
* you to have time to list its events. It's not needed otherwise.
*/
getchar();

/* A tracepoint() call. Arguments, as defined in hello-tp.tp:
*
*     1st: provider name (always)
*     2nd: tracepoint name (always)
*     3rd: my_integer_arg (first user-defined argument)
*     4th: my_string_arg (second user-defined argument)
*
* Notice the provider and tracepoint names are NOT strings;
* they are in fact parts of variables created by macros in
* hello-tp.h.
*/
tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");

for (x = 0; x < argc; ++x) {
tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
}

puts("Quitting now!");

tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");

return 0;
}


Save this as hello.c, next to hello-tp.tp.

Notice hello-tp.h, the header file generated by lttng-gen-tp from our template file hello-tp.tp, is included by hello.c.

You are now ready to compile the application with LTTng-UST support:

gcc -o hello hello.c hello-tp.o -llttng-ust -ldl

If you followed theTracing the Linux kernel section, thefollowing steps will look familiar.

First, run the application with a few arguments:

./hello world and beyond

You should see

Hello, World!
Press Enter to continue...


Use the lttng tool to list all available user space events:

lttng list --userspace

You should see the hello_world:my_first_tracepoint tracepoint listed under the ./hello process.

Create a tracing session:

lttng create my-userspace-session

Enable the hello_world:my_first_tracepoint tracepoint:

lttng enable-event --userspace hello_world:my_first_tracepoint

Start tracing:

lttng start

Go back to the running hello application and press Enter. All tracepoint() calls will be executed and the program will finally exit.

Stop tracing:

lttng stop

Done! You may use lttng view to list the recorded events. This command starts babeltrace in the background, if it is installed:

lttng view

This should output something like:

[18:10:27.684304496] (+?.?????????) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "hi there!", my_integer_field = 23 }
[18:10:27.684338440] (+0.000033944) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "./hello", my_integer_field = 0 }
[18:10:27.684340692] (+0.000002252) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "world", my_integer_field = 1 }
[18:10:27.684342616] (+0.000001924) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "and", my_integer_field = 2 }
[18:10:27.684343518] (+0.000000902) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "beyond", my_integer_field = 3 }
[18:10:27.684357978] (+0.000014460) hostname hello_world:my_first_tracepoint: { cpu_id = 0 }, { my_string_field = "x^2", my_integer_field = 16 }


When you're done, you may destroy the tracing session, which does not destroy the generated trace files, leaving them available for further analysis:

lttng destroy my-userspace-session

The next section presents other alternatives to view and analyze yourLTTng traces.

Viewing and analyzing your traces

This section describes how to visualize the data gathered after tracingthe Linux kernel or a user space application.

Many ways exist to read your LTTng traces:

babeltrace is a command line utility which converts trace formats; it supports the format used by LTTng, CTF, as well as a basic text output which may be grepped. The babeltrace command is part of the Babeltrace project.
Babeltrace also includes a Python binding so that you may easily open and read an LTTng trace with your own script, benefiting from the power of Python.
Trace Compass is an Eclipse plugin used to visualize and analyze various types of traces, including LTTng's. It also comes as a standalone application and can be downloaded from here.

LTTng trace files are usually recorded in the ~/lttng-traces directory. Let's now view the trace and perform a basic analysis using babeltrace.

The simplest way to list all the recorded events of a trace is to pass its path to babeltrace with no options:

babeltrace ~/lttng-traces/my-session

babeltrace will find all traces within the given path recursively and output all their events, merging them intelligently.

Listing all the system calls of a Linux kernel trace with their arguments is easy with babeltrace and grep:

babeltrace ~/lttng-traces/my-kernel-session | grep sys_

Counting events is also straightforward:

babeltrace ~/lttng-traces/my-kernel-session | grep sys_read | wc --lines

The text output of babeltrace is useful for isolating events by simple matching using grep and similar utilities. However, more elaborate filters, such as keeping only events with a field value falling within a specific range, are not trivial to write using a shell. Moreover, reductions and even the most basic computations involving multiple events are virtually impossible to implement.

Fortunately, Babeltrace ships with a Python 3 binding which makes it really easy to read the events of an LTTng trace sequentially and compute the desired information.

Here's a simple example using the Babeltrace Python binding. The following script accepts an LTTng Linux kernel trace path as its first argument and outputs the short names of the top 5 running processes on CPU 0 during the whole trace:

import sys
from collections import Counter
import babeltrace

def top5proc():
if len(sys.argv) != 2:
msg = 'Usage: python {} TRACEPATH'.format(sys.argv[0])
raise ValueError(msg)

# a trace collection holds one to many traces
col = babeltrace.TraceCollection()

# add the trace provided by the user
# (LTTng traces always have the 'ctf' format)
if col.add_trace(sys.argv[1], 'ctf') is None:
raise RuntimeError('Cannot add trace')

# this counter dict will hold execution times:
#
#   task command name -> total execution time (ns)
exec_times = Counter()

# this holds the last `sched_switch` timestamp
last_ts = None

# iterate events
for event in col.events:
# keep only `sched_switch` events
if event.name != 'sched_switch':
continue

# keep only events which happened on CPU 0
if event['cpu_id'] != 0:
continue

# event timestamp
cur_ts = event.timestamp

if last_ts is None:
# we start here
last_ts = cur_ts

# previous task command (short) name
prev_comm = event['prev_comm']

# initialize entry in our dict if not yet done
if prev_comm not in exec_times:
exec_times[prev_comm] = 0

# compute previous command execution time
diff = cur_ts - last_ts

# update execution time of this command
exec_times[prev_comm] += diff

# update last timestamp
last_ts = cur_ts

# display the top 5
for name, ns in exec_times.most_common(5):
s = ns / 1000000000
print('{:20}{} s'.format(name, s))

if __name__ == '__main__':
top5proc()


Save this script as top5proc.py and run it with Python 3, providing the path to an LTTng Linux kernel trace as the first argument:

python3 top5proc.py ~/lttng-sessions/my-session-.../kernel

Make sure the path you provide is the directory containing actual trace files (channel0_0, metadata, etc.): the babeltrace utility recurses directories, but the Python binding does not.

Here's an example of output:

swapper/0           48.607245889 s
chromium            7.192738188 s
pavucontrol         0.709894415 s
Compositor          0.660867933 s
Xorg.bin            0.616753786 s


Note that swapper/0 is the "idle" process of CPU 0 on Linux; since we weren't using the CPU that much when tracing, its first position in the list makes sense.

Understanding LTTng

If you're going to use LTTng in any serious way, it is fundamental that you become familiar with its core concepts. Technical terms like tracing sessions, domains, channels and events are used over and over in the Using LTTng chapter, and it is assumed that you understand what they mean when reading it.

LTTng, as you already know, is a toolkit. It would be wrong to call it a simple tool since it is composed of multiple interacting components. This chapter also describes those components, providing details about their respective roles and how they connect together to form the current LTTng ecosystem.

Core concepts

This section explains the various elementary concepts a user has to dealwith when using LTTng. They are:

tracing session
domain
channel
event

Tracing session

A tracing session is—like any session—a container of state. Anything that is done when tracing using LTTng happens in the scope of a tracing session. In this regard, it is analogous to a bank website's session: you can't interact online with your bank account unless you are logged into a session, except for reading a few static webpages (LTTng, too, can report some static information that does not need a created tracing session).

A tracing session holds the following attributes and objects (some ofwhich are described in the following sections):

a name
the tracing state (tracing started or stopped)
the trace data output path/URL (local path or sent over the network)
a mode (normal, snapshot or live)
the snapshot output paths/URLs (if applicable)
for each domain, a list of channels
for each channel:

a name
the channel state (enabled or disabled)
its parameters (event loss mode, sub-buffer size and count, timer periods, output type, trace file size and count, etc.)
a list of added context information
a list of events

for each event:

its state (enabled or disabled)
a list of instrumentation points (tracepoints, system calls,dynamic probes, etc.)
associated log levels
a filter expression

All this information is completely isolated between tracing sessions.

Conceptually, a tracing session is a per-user object; the Plumbing section shows how this is actually implemented. Any user may create as many concurrent tracing sessions as desired. As you can see in the list above, even the tracing state is a per-tracing session attribute, so that you may trace your target system/application in a given tracing session with a specific configuration while another one stays inactive.

The trace data generated in a tracing session may be either saved to disk, sent over the network or not saved at all (in which case snapshots may still be saved to disk or sent to a remote machine).

Domain

A tracing domain is the official term the LTTng project uses to designate a tracer category.

There are currently four known domains:

Linux kernel
user space
java.util.logging (JUL)
log4j

Different tracers expose common features in their own interfaces, but, from a user's perspective, you still need to target a specific type of tracer to perform some actions. For example, since both kernel and user space tracers support named tracepoints (probes manually inserted in source code), you need to specify which one is concerned when enabling an event because both domains could have existing events with the same name.
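
Concretely, the domain is selected with an option of the lttng tool; for example (the event and logger names below are placeholders):

lttng enable-event --kernel sched_switch
lttng enable-event --userspace my_app:my_tracepoint
lttng enable-event --jul my_java_logger
lttng enable-event --log4j my_log4j_logger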

Some features are not available in all domains. Filtering enabled events using custom expressions, for example, is currently not supported in the kernel domain, but support could be added in the future.

Channel

A channel is a set of events with specific parameters and potential added context information. Channels have unique names per domain within a tracing session. A given event is always registered to at least one channel; having an enabled event in two channels will produce a trace with this event recorded twice every time it occurs.

Channels may be individually enabled or disabled. Events occurring in a disabled channel will never make it to recorded events.

The fundamental role of a channel is to keep a shared ring buffer, whereevents are eventually recorded by the tracer and consumed by a consumerdaemon. This internal ring buffer is divided into many sub-buffers ofequal size.

Channels, when created, may be fine-tuned thanks to a few parameters,many of them related to sub-buffers. The following subsections explainwhat those parameters are and in which situations you should manuallyadjust them.

Overwrite and discard event loss modes
As previously mentioned, a channel's ring buffer is divided into many equally sized sub-buffers.

As events occur, they are serialized as trace data into a specific sub-buffer (the yellow arc in the original documentation's animation) until it is full: when this happens, the sub-buffer is marked as consumable (red) and another, empty (white) sub-buffer starts receiving the following events. The marked sub-buffer will be consumed eventually by a consumer daemon (returning to white).

In an ideal world, sub-buffers are consumed faster than they are filled, as is the case above. In the real world, however, all sub-buffers could be full at some point, leaving no space to record the following events. By design, LTTng is a non-blocking tracer: when no empty sub-buffer exists, losing events is acceptable when the alternative would be to cause substantial delays in the instrumented application's execution. LTTng privileges performance over integrity, aiming at perturbing the traced system as little as possible in order to make tracing of subtle race conditions and rare interrupt cascades possible.

When it comes to losing events because no empty sub-buffer is available, the channel's event loss mode determines what to do among the following:

Discard: drop the newest events until a sub-buffer is released.
Overwrite: clear the sub-buffer containing the oldest recorded events and start recording the newest events there. This mode is sometimes called flight recorder mode because it behaves like a flight recorder: always keep a fixed amount of the latest data.

Which mechanism you should choose depends on your context: do you prioritize the newest or the oldest events in the ring buffer?

Beware that, in overwrite mode, a whole sub-buffer is abandoned as soon as a new event doesn't find an empty sub-buffer, whereas in discard mode, only the event that doesn't fit is discarded.

Also note that a count of lost events will be incremented and saved in the trace itself when an event is lost in discard mode, whereas no information is kept when a sub-buffer gets overwritten before being committed.

There are known ways to decrease your probability of losing events. The next section shows how tuning the sub-buffers count and size can be used to virtually stop losing events.
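
As a sketch, the event loss mode is chosen per channel when the channel is created, before events are attached to it; assuming a kernel tracing session, the following creates an overwrite-mode channel and enables an event in it (the channel name is arbitrary; discard mode is the default when no mode option is given):

lttng enable-channel --kernel --overwrite my-channel
lttng enable-event --kernel --channel my-channel sched_switch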

Sub-buffers count and size
For each channel, an LTTng user may set its number of sub-buffers and their size.

Note that there is a noticeable tracer's CPU overhead introduced when switching sub-buffers (marking a full one as consumable and switching to an empty one for the following events to be recorded). Knowing this, the following list presents a few practical situations along with how to configure sub-buffers for them:

High event throughput: in general, prefer bigger sub-buffers to lower the risk of losing events. Having bigger sub-buffers will also ensure a lower sub-buffer switching frequency. The number of sub-buffers is only meaningful if the channel is in overwrite mode: in this case, if a sub-buffer overwrite happens, you will still have the other sub-buffers left unaltered.
Low event throughput: in general, prefer smaller sub-buffers since the risk of losing events is already low. Since events happen less frequently, the sub-buffer switching frequency should remain low and thus the tracer's overhead should not be a problem.
Low memory system: if your target system has a low memory limit, prefer fewer first, then smaller sub-buffers. Even if the system is limited in memory, you want to keep the sub-buffers as big as possible to avoid a high sub-buffer switching frequency.

You should know that LTTng uses CTF as its trace format, which means event data is very compact. For example, the average LTTng Linux kernel event weighs about 32 bytes. A sub-buffer size of 1 MiB is thus considered big.

The previous situations highlight the major trade-off between a few big sub-buffers and more, smaller sub-buffers: sub-buffer switching frequency vs. how much data is lost in overwrite mode. Assuming a constant event throughput and using the overwrite mode, the two following configurations have the same ring buffer total size:

2 sub-buffers of 4 MiB each lead to a very low sub-buffer switching frequency, but if a sub-buffer overwrite happens, half of the recorded events so far (4 MiB) are definitely lost.
8 sub-buffers of 1 MiB each lead to 4 times the tracer's overhead as the previous configuration, but if a sub-buffer overwrite happens, only one eighth of the events recorded so far are definitely lost.

In discard mode, the sub-buffer count parameter is pointless: use two sub-buffers and set their size according to the requirements of your situation.

Switch timer
The switch timer period is another important configurable feature of channels to ensure periodic sub-buffer flushing.

When the switch timer fires, a sub-buffer switch happens. This timer may be used to ensure that event data is consumed and committed to trace files periodically in case of a low event throughput. It's also convenient when big sub-buffers are used to cope with sporadic high event throughput, even if the throughput is normally lower.
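
For example, the switch timer period is set when the channel is created; the sketch below assumes a user space channel and a period given in microseconds (check lttng enable-channel --help on your version for the exact unit and defaults):

lttng enable-channel --userspace --switch-timer 2000000 my-channel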

Buffering schemes
In the user space tracing domain, two buffering schemes are available when creating a channel:

Per-PID buffering: keep one ring buffer per process.
Per-UID buffering: keep one ring buffer for all processes of a single user.

The per-PID buffering scheme will consume more memory than the per-UID option if more than one process is instrumented for LTTng-UST. However, per-PID buffering ensures that one process having a high event throughput won't fill all the shared sub-buffers, only its own.

The Linux kernel tracing domain only has one available buffering scheme, which is to use a single ring buffer for the whole system.
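
The buffering scheme is also selected when creating a user space channel; as a sketch (channel names are arbitrary):

lttng enable-channel --userspace --buffers-pid my-per-pid-channel
lttng enable-channel --userspace --buffers-uid my-per-uid-channel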

Event

An event, in LTTng's realm, is a term often used metonymically, having multiple definitions depending on the context:

When tracing, an event is a point in space-time. Space, in a tracing context, is the set of all executable positions of a compiled application by a logical processor. When a program is executed by a processor and some instrumentation point, or probe, is encountered, an event occurs. This event is accompanied by some contextual payload (values of specific variables at this point of execution) which may or may not be recorded.
In the context of a recorded trace file, the term event implies a recorded event.
When configuring a tracing session, enabled events refer to specific rules which could lead to the transfer of actual occurring events (1) to recorded events (2).
The whole Core concepts section focuses on the third definition. An event is always registered to one or more channels and may be enabled or disabled at will per channel. A disabled event will never lead to a recorded event, even if its channel is enabled.

An event (3) is enabled with a few conditions that must all be met when an event (1) happens in order to generate a recorded event (2):

A probe or group of probes in the traced application must be executed.
Optionally, the probe must have a log level matching a log level range specified when enabling the event.
Optionally, the occurring event must satisfy a custom expression, or filter, specified when enabling the event (see the example below).
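
Putting these conditions together, a hedged sketch (the provider, tracepoint, field and log level names are placeholders) could look like:

lttng enable-event --userspace my_app:my_tracepoint --loglevel=TRACE_INFO --filter='my_field > 100'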

The following illustration summarizes how tracing sessions, domains, channels and events are related:

This diagram also shows how events may be individually enabled/disabled (green/grey) and how a given event may be registered to more than one channel.

Plumbing

The previous section described the concepts at the heart of LTTng. This section summarizes LTTng's implementation: how those objects are managed by different applications and libraries working together to form the toolkit.

Overview

As mentioned previously, the whole LTTng suite is made of the following packages: LTTng-tools, LTTng-UST, and LTTng-modules. Together, they provide different daemons, libraries, kernel modules and command line interfaces. The following tree shows which usable component belongs to which package:

LTTng-tools:

session daemon (lttng-sessiond)
consumer daemon (lttng-consumerd)
relay daemon (lttng-relayd)
tracing control library (liblttng-ctl)
tracing control command line tool (lttng)

LTTng-UST:

user space tracing library (liblttng-ust) and its headers
preloadable user space tracing helpers (liblttng-ust-libc-wrapper, liblttng-ust-pthread-wrapper, liblttng-ust-cyg-profile, liblttng-ust-cyg-profile-fast and liblttng-ust-dl)
user space tracepoint code generator command line tool (lttng-gen-tp)
java.util.logging/log4j tracepoint providers (liblttng-ust-jul-jni and liblttng-ust-log4j-jni) and JAR file (liblttng-ust-agent.jar)

LTTng-modules:

LTTng Linux kernel tracer module
tracing ring buffer kernel modules
many LTTng probe kernel modules

The following diagram shows how the most important LTTng components interact. Plain black arrows represent trace data paths while dashed red arrows indicate control communications. The LTTng relay daemon is shown running on a remote system, although it could as well run on the target (monitored) system.



Each component is described in the following subsections.

Session daemon

At the heart of LTTng's plumbing is the session daemon, often called by its command name, lttng-sessiond.

The session daemon is responsible for managing tracing sessions and what they logically contain (channel properties, enabled/disabled events, etc.). By communicating locally with instrumented applications (using LTTng-UST) and with the LTTng Linux kernel modules (LTTng-modules), it oversees all tracing activities.

One of the many things that lttng-sessiond does is to keep track of the available event types. User space applications and libraries actively connect and register to the session daemon when they start. By contrast, lttng-sessiond seeks out and loads the appropriate LTTng kernel modules as part of its own initialization. Kernel event types are pulled by lttng-sessiond, whereas user space event types are pushed to it by the various user space tracepoint providers.

Using a specific inter-process communication protocol with Linux kernel and user space tracers, the session daemon can send channel information so that they are initialized, enable/disable specific probes based on enabled/disabled events by the user, send event filter information to LTTng tracers so that filtering actually happens at the tracer site, start/stop tracing a specific application or the Linux kernel, etc.

The session daemon is not useful without some user controlling it, because it's only a sophisticated control interchange and thus doesn't make any decision on its own. lttng-sessiond opens a local socket for controlling it, albeit the preferred way to control it is using liblttng-ctl, an installed C library hiding the communication protocol behind an easy-to-use API. The lttng tool makes use of liblttng-ctl to implement a user-friendly command line interface.

lttng-sessiond does not receive any trace data from instrumented applications; the consumer daemons are the programs responsible for collecting trace data using shared ring buffers. However, the session daemon is the one that must spawn a consumer daemon and establish a control communication with it.

Session daemons run on a per-user basis. Knowing this, multiple instances of lttng-sessiond may run simultaneously, each belonging to a different user and each operating independently of the others. Only root's session daemon, however, may control LTTng kernel modules (i.e. the kernel tracer). With that in mind, a user who has no root access on the target system cannot trace the system's kernel, but should still be able to trace their own instrumented applications.

It has to be noted that, although only root's session daemon may control the kernel tracer, the lttng-sessiond command has a --group option which may be used to specify the name of a special user group allowed to communicate with root's session daemon and thus record kernel traces. By default, this group is named tracing.

If not already running, the lttng tool, by default, automatically starts a session daemon. lttng-sessiond may also be started manually:

lttng-sessiond

This will start the session daemon in the foreground. Use

lttng-sessiond --daemonize

to start it as a true daemon.

To kill the current user's session daemon, pkill may be used:

pkill lttng-sessiond

The default SIGTERM signal will terminate it cleanly.

Several other options are available and described in lttng-sessiond's manpage or by running lttng-sessiond --help.

Consumer daemon

The consumer daemon, or lttng-consumerd, is a program sharing some ring buffers with user applications or the LTTng kernel modules to collect trace data and output it at some place (on disk or sent over the network to an LTTng relay daemon).

Consumer daemons are created by a session daemon as soon as events are enabled within a tracing session, well before tracing is activated for the latter. Entirely managed by session daemons, consumer daemons survive session destruction to be reused later, should a new tracing session be created. Consumer daemons are always owned by the same user as their session daemon. When its owner session daemon is killed, the consumer daemon also exits. This is because the consumer daemon is always the child process of a session daemon. Consumer daemons should never be started manually. For this reason, they are not installed in one of the usual locations listed in the PATH environment variable. lttng-sessiond has, however, a bunch of options to specify custom consumer daemon paths if, for some reason, a consumer daemon other than the default installed one is needed.

There are up to two running consumer daemons per user, whereas only one session daemon may run per user. This is because each process has independent bitness: if the target system runs a mixture of 32-bit and 64-bit processes, it is more efficient to have separate corresponding 32-bit and 64-bit consumer daemons. The root user is an exception: it may have up to three running consumer daemons: 32-bit and 64-bit instances for its user space applications and one more reserved for collecting kernel trace data.

As new tracing domains are added to LTTng, the development community's intent is to minimize the need for additional consumer daemon instances dedicated to them. For instance, the java.util.logging (JUL) domain events are in fact mapped to the user space domain, thus tracing this particular domain is handled by existing user space domain consumer daemons.

Relay daemon

When a tracing session is configured to send its trace data over the network, an LTTng relay daemon must be used at the other end to receive trace packets and serialize them to trace files. This setup makes it possible to trace a target system without ever committing trace data to its local storage, a feature which is useful for embedded systems, amongst others. The command implementing the relay daemon is lttng-relayd.

The basic use case of lttng-relayd is to transfer trace data received over the network to trace files on the local file system. The relay daemon must listen on two TCP ports to achieve this: one control port, used by the target session daemon, and one data port, used by the target consumer daemon. The relay and session daemons agree on common default ports when custom ones are not specified.
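
As an illustrative sketch (the host name and ports below are placeholders; LTTng's defaults are used when the options are omitted), the relay daemon could be started on the remote machine with:

lttng-relayd --control-port tcp://0.0.0.0:5342 --data-port tcp://0.0.0.0:5343

and the tracing session on the target system pointed at it when it is created:

lttng create my-session --set-url=net://remote-host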

Since the communication transport protocol for both ports is standard TCP, the relay daemon may be started either remotely or locally (on the target system).

While two instances of consumer daemons (32-bit and 64-bit) may run concurrently for a given user, lttng-relayd only needs to match its host operating system's bitness.

The other important feature of LTTng's relay daemon is the support of LTTng live. LTTng live is an application protocol to view events as they arrive. The relay daemon will still record events in trace files, but a tee may be created to inspect incoming events. Using LTTng live locally thus requires running a local relay daemon.
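
As a sketch, a live tracing session is created with the --live option of lttng create (period in microseconds), and a live-capable viewer such as babeltrace can then attach to the relay daemon; the URL below is a placeholder and the exact format is documented in the viewer's manpage:

lttng create my-live-session --live 1000000
babeltrace --input-format=lttng-live net://localhost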

Control library and command line interface

The LTTng control library, liblttng-ctl, can be used to communicate with the session daemon using a C API that hides the underlying protocol's details. liblttng-ctl is part of LTTng-tools.

liblttng-ctl may be used by including its "master" header:

#include <lttng/lttng.h>


Some objects are referred to by name (C string), such as tracing sessions, but most of them require creating a handle first using lttng_create_handle(). The best available developer documentation for liblttng-ctl is, for the moment, its installed header files. Every function/structure is thoroughly documented.
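
As a minimal, illustrative sketch of the API (error handling kept short; assuming LTTng-tools is installed), the following program lists the current user's tracing sessions and could be built with gcc -o list-sessions list-sessions.c -llttng-ctl:

#include <stdio.h>
#include <stdlib.h>

#include <lttng/lttng.h>

int main(void)
{
    struct lttng_session *sessions;
    int count, i;

    /* lttng_list_sessions() allocates the array and returns the
     * number of sessions, or a negative LTTng error code. */
    count = lttng_list_sessions(&sessions);

    if (count < 0) {
        fprintf(stderr, "Error: %s\n", lttng_strerror(count));
        return 1;
    }

    for (i = 0; i < count; i++) {
        printf("%s (output: %s)\n", sessions[i].name, sessions[i].path);
    }

    free(sessions);

    return 0;
}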

The lttng program is the de facto standard user interface to control LTTng tracing sessions. lttng uses liblttng-ctl to communicate with session daemons behind the scenes. Its manpage is exhaustive, as well as its command line help (lttng cmd --help, where cmd is the command name).

The Controlling tracing section is a feature tour of the lttng tool.

User space tracing library

The user space tracing part of LTTng is possible thanks to the user space tracing library, liblttng-ust, which is part of the LTTng-UST package.

liblttng-ust provides header files containing macros used to define tracepoints and create tracepoint providers, as well as a shared object that must be linked to individual applications to connect to and communicate with a session daemon and a consumer daemon as soon as the application starts.

The exact mechanism by which an application is registered to the session daemon is beyond the scope of this documentation. The only thing you need to know is that, since the library constructor does this job automatically, tracepoints may be safely inserted anywhere in the source code without prior manual initialization of liblttng-ust.

The collaboration between liblttng-ust and the session daemon also provides an interesting feature: user space events may be enabled before applications actually start. By doing this and starting tracing before launching the instrumented application, you make sure that even the earliest occurring events can be recorded.

The C application instrumenting guide of the Using LTTng chapter focuses on using liblttng-ust: instrumenting, building/linking and running a user application.

LTTng kernel modules

The LTTng Linux kernel modules provide everything needed to trace the Linux kernel: various probes, a ring buffer implementation for a consumer daemon to read trace data and the tracer itself.

Only in exceptional circumstances should you ever need to load the LTTng kernel modules manually: it is normally the responsibility of root's session daemon to do so. Even if you were to develop your own LTTng probe module—for tracing a custom kernel or some kernel module (this topic is covered in the Linux kernel instrumenting guide of the Using LTTng chapter)—you should use the --extra-kmod-probes option of the session daemon to append your probe to the default list. The session and consumer daemons of regular users do not interact with the LTTng kernel modules at all.

LTTng kernel modules are installed, by default, in /usr/lib/modules/release/extra, where release is the kernel release (see uname --kernel-release).

Using LTTng

Using LTTng involves two main activities: instrumenting and controlling tracing.

Instrumenting is the process of inserting probes into some source code. It can be done manually, by writing tracepoint calls at specific locations in the source code of the program to trace, or more automatically using dynamic probes (address in assembled code, symbol name, function entry/return, etc.).

It has to be noted that, as an LTTng user, you may not have to worry about the instrumentation process. Indeed, you may want to trace a program already instrumented. As an example, the Linux kernel is thoroughly instrumented, which is why you can trace it without caring about adding probes.

Controlling tracing is everything that can be done by the LTTng session daemon, which is controlled using liblttng-ctl or its command line utility, lttng: creating tracing sessions, listing tracing sessions and events, enabling/disabling events, starting/stopping the tracers, taking snapshots, etc.

This chapter is a complete user guide of both activities, with common use cases of LTTng exposed throughout the text. It is assumed that you are familiar with LTTng's concepts (events, channels, domains, tracing sessions) and that you understand the roles of its components (daemons, libraries, command line tools); if not, we invite you to read the Understanding LTTng chapter before you begin reading this one.

If you're new to LTTng, we suggest that you rather start with the Getting started small guide first, then come back here to broaden your knowledge.

If you're only interested in tracing the Linux kernel with its current instrumentation, you may skip the Instrumenting section.

Instrumenting

There are many examples of tracing and monitoring in our everyday life. You have access to real-time and historical weather reports and forecasts thanks to weather stations installed around the country. You know your possibly hospitalized friends' and family's hearts are safe thanks to electrocardiography. You make sure not to drive your car too fast and have enough fuel to reach your destination thanks to gauges visible on your dashboard.

All the previous examples have something in common: they rely on probes. Without electrodes attached to the surface of a body's skin, cardiac monitoring would be futile.

LTTng, as a tracer, is no different from the real life examples above. If you're about to trace a software system, i.e. record its history of execution, you had better have probes in the subject you're tracing: the actual software. Various ways were developed to do this. The most straightforward one is to manually place probes, called tracepoints, in the software's source code. The Linux kernel tracing domain also allows probes to be added dynamically.

If you're only interested in tracing the Linux kernel, it may very well be that your tracing needs are already appropriately covered by LTTng's built-in Linux kernel tracepoints and other probes. Or you may be in possession of a user space application which has already been instrumented. In such cases, the work will reside entirely in the design and execution of tracing sessions, allowing you to jump to Controlling tracing right now.

This section focuses on the following use cases of instrumentation:

C and C++ applications
prebuilt user space tracing helpers
Java application
Linux kernel module or the kernel itself
the /proc/lttng-logger ABI

Some advanced techniques are also presented at the very end.

C application

Instrumenting a C (or C++) application, be it an executable program or a library, implies using LTTng-UST, the user space tracing component of LTTng. For C/C++ applications, the LTTng-UST package includes a dynamically loaded library (liblttng-ust), C headers and the lttng-gen-tp command line utility.

Since C and C++ are the base languages of virtually all other programming languages (Java virtual machine, Python, Perl, PHP and Node.js interpreters, etc.), implementing user space tracing for an unsupported language is just a matter of using the LTTng-UST C API at the right places.

The usual work flow to instrument a user space C application withLTTng-UST is:

Define tracepoints (actual probes)
Write tracepoint providers
Insert tracepoints into target source code
Package (build) tracepoint providers
Build user application and link it with tracepoint providers

The steps above are discussed in greater detail in the following subsections.

Tracepoint provider
Before jumping into defining tracepoints and inserting them into the application source code, you must understand what a tracepoint provider is.

For the sake of this guide, consider the following two files:

tp.h
:

#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER my_provider

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./tp.h"

#if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
my_provider,
my_first_tracepoint,
TP_ARGS(
int, my_integer_arg,
char*, my_string_arg
),
TP_FIELDS(
ctf_string(my_string_field, my_string_arg)
ctf_integer(int, my_integer_field, my_integer_arg)
)
)

TRACEPOINT_EVENT(
my_provider,
my_other_tracepoint,
TP_ARGS(
int, my_int
),
TP_FIELDS(
ctf_integer(int, some_field, my_int)
)
)

#endif /* _TP_H */

#include <lttng/tracepoint-event.h>


tp.c
:

#define TRACEPOINT_CREATE_PROBES

#include "tp.h"


The two files above define a tracepoint provider. A tracepoint provider is some sort of namespace for tracepoint definitions. Tracepoint definitions are written above with the
TRACEPOINT_EVENT()
macro, and allow eventual
tracepoint()

calls respecting their definitions to be inserted into the user application's C source code (we explore this in a later section).

Many tracepoint definitions may be part of the same tracepoint provider and many tracepoint providers may coexist in a user space application. A tracepoint provider is packaged either:
directly into an existing user application's C source file
as an object file
as a static library
as a shared library

The two files above,
tp.h
and
tp.c
, show a typical template for writing a tracepoint provider. LTTng-UST was designed so that two tracepoint providers should not be defined in the same header file.

We will now go through the various parts of the above files and give them a meaning. As you may have noticed, the LTTng-UST API for C/C++ applications is some preprocessor sorcery. The LTTng-UST macros used in your application and those in the LTTng-UST headers are combined to produce the actual source code needed to make tracing possible using LTTng.

Let's start with the header file,
tp.h
. It begins with

#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER my_provider


TRACEPOINT_PROVIDER
defines the name of the provider to which the following tracepoint definitions will belong. It is used internally by LTTng-UST headers and must be defined. Since
TRACEPOINT_PROVIDER
could have been defined
by another header file also included by the same C source file, the best practice is to undefine it first.

Note: Names in LTTng-UST follow the C identifier syntax (starting with a letter and containing either letters, numbers or underscores); they are not C strings (not surrounded by double quotes). This is because LTTng-UST macros use those identifier-like strings to create symbols (named types and variables).

The tracepoint provider is a group of tracepoint definitions; its chosen name should reflect this. A hierarchy like Java packages is recommended, using underscores instead of dots, e.g.,
org_company_project_component
.

Next is
TRACEPOINT_INCLUDE
:

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./tp.h"


This little bit of introspection is needed by LTTng-UST to include your header at various predefined places.

Include guard follows:

#if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _TP_H


Add these preprocessor conditionals to ensure that the tracepoint event generation can include this file more than once.

The
TRACEPOINT_EVENT()
macro is defined in an LTTng-UST header file which must be included:

#include <lttng/tracepoint.h>


This will also allow the application to use the
tracepoint()
macro.

Next is a list of
TRACEPOINT_EVENT()
macro calls which create the actual tracepoint definitions. We will skip this for the moment and come back to how to use
TRACEPOINT_EVENT()
in
a later section. Just pay attention to the first argument: it's always the name of the tracepoint provider being defined in this header file.

End of include guard:

#endif /* _TP_H */


Finally, include
<lttng/tracepoint-event.h>
to expand the macros:

#include <lttng/tracepoint-event.h>


That's it for
tp.h
. Of course, this is only a header file; it must be included in some C source file to actually use it. This is the job of
tp.c
:

#define TRACEPOINT_CREATE_PROBES

#include "tp.h"


When
TRACEPOINT_CREATE_PROBES
is defined, the macros used in
tp.h
, which is included just after, will actually create the source code for LTTng-UST probes (global data structures and functions) out of your tracepoint definitions. How exactly this is done is beyond the scope of this text.
TRACEPOINT_CREATE_PROBES

is discussed further in
Building/linking tracepoint providers and the user application.

You could include other header files like
tp.h
here to create the probes of different tracepoint providers, e.g.:

#define TRACEPOINT_CREATE_PROBES

#include "tp1.h"
#include "tp2.h"


The rule is: probes of a given tracepoint provider must be created in exactly one source file. This source file could be one of your project's; it doesn't have to be on its own like
tp.c
, although a later section shows that doing so allows packaging the tracepoint providers independently and keeping them out of your application, also making it possible to reuse them between projects.

The following sections explain how to define tracepoints, how to use the
tracepoint()
macro to instrument your user space C application and how to build/link tracepoint providers and your application with LTTng-UST support.

Using
lttng-gen-tp

LTTng-UST ships with
lttng-gen-tp
, a handy command line utility for generating most of the files discussed above. It takes a template file, with a name usually ending with the
.tp
extension, containing only tracepoint definitions, and outputs a tracepoint provider (either a C source file or a precompiled object file) with its header file.

lttng-gen-tp
should suffice in static linking situations. When using it, write a template file containing a list of
TRACEPOINT_EVENT()
macro calls. The tool will find the provider names used and generate the appropriate files, which are going to look a lot like
tp.h

and
tp.c
above.
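
For example, a minimal template file (saved under a hypothetical name such as my-template.tp) could simply reuse the first tracepoint definition shown earlier; no preprocessor boilerplate is needed, since the template file contains only tracepoint definitions:

TRACEPOINT_EVENT(
    my_provider,
    my_first_tracepoint,
    TP_ARGS(
        int, my_integer_arg,
        char*, my_string_arg
    ),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)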

Just call
lttng-gen-tp
like this:

lttng-gen-tp my-template.tp

my-template.c
,
my-template.o
and
my-template.h
will be created in the same directory.

You may specify custom C flags passed to the compiler invoked by
lttng-gen-tp
using the
CFLAGS
environment variable:

CFLAGS=-I/custom/include/path lttng-gen-tp my-template.tp

For more information on
lttng-gen-tp
, see its manpage.

Defining tracepoints
As written in
Tracepoint provider, tracepoints are defined using the
TRACEPOINT_EVENT()
macro. Each tracepoint, when called using the
tracepoint()
macro in the actual application's source code, generates a specific event type with its own fields.

Let's have another look at the example above, with a few added comments:

TRACEPOINT_EVENT(
/* tracepoint provider name */
my_provider,

/* tracepoint/event name */
my_first_tracepoint,

/* list of tracepoint arguments */
TP_ARGS(
int, my_integer_arg,
char*, my_string_arg
),

/* list of fields of eventual event  */
TP_FIELDS(
ctf_string(my_string_field, my_string_arg)
ctf_integer(int, my_integer_field, my_integer_arg)
)
)


The tracepoint provider name must match the name of the tracepoint provider in which this tracepoint is defined (see Tracepoint provider). In other words, always use the same string as the value of
TRACEPOINT_PROVIDER
above.

The tracepoint name will become the event name once events are recorded by the LTTng-UST tracer. It must follow the tracepoint provider name syntax: start with a letter and contain either letters, numbers or underscores. Two tracepoints under the same provider cannot have the same name, i.e. you cannot overload a tracepoint like you would overload functions and methods in C++/Java.

Note: The concatenation of the tracepoint provider name and the tracepoint name cannot exceed 254 characters. If it does, the instrumented application will compile and run, but LTTng will issue multiple warnings and you could experience serious problems.

The list of tracepoint arguments gives this tracepoint its signature: see it like the declaration of a C function. The format of
TP_ARGS()
arguments is: C type, then argument name; repeat as needed, up to ten times. For example, if we were to replicate the signature of the C standard library's
fseek()
, the
TP_ARGS()
part would look like:

TP_ARGS(
FILE*, stream,
long int, offset,
int, origin
),


Of course, you will need to include appropriate header files before the
TRACEPOINT_EVENT()
macro calls if any argument has a complex type.

TP_ARGS()
may not be omitted, but may be empty.
TP_ARGS(void)
is also accepted.
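
As a quick sketch (the tracepoint and field names here are made up for illustration), a tracepoint taking no arguments could be defined like this:

TRACEPOINT_EVENT(
    my_provider,
    my_argless_tracepoint,
    TP_ARGS(void),
    TP_FIELDS(
        ctf_integer(int, my_constant_field, 23)
    )
)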

The list of fields is where the fun really begins. The fields defined in this list will be the fields of the events generated by the execution of this tracepoint. Each tracepoint field definition has a C argument expression which will be evaluated when the execution reaches the tracepoint. Tracepoint arguments may be used freely in those argument expressions, but they don't have to be.

There are several types of tracepoint fields available. The macros to define them are given and explained in the LTTng-UST library reference section.

Field names must follow the standard C identifier syntax: a letter, then an optional sequence of letters, numbers or underscores. Each field must have a different name.

Those
ctf_*()
macros are added to the
TP_FIELDS()
part of
TRACEPOINT_EVENT()
. Note that they are not delimited by commas.
TP_FIELDS()
may be empty, but the
TP_FIELDS(void)
form is not accepted.

The following snippet shows how argument expressions may be used in tracepoint fields and how they may refer freely to tracepoint arguments.

/* for struct stat */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

TRACEPOINT_EVENT(
my_provider,
my_tracepoint,
TP_ARGS(
int, my_int_arg,
char*, my_str_arg,
struct stat*, st
),
TP_FIELDS(
/* simple integer field with constant value */
ctf_integer(
int,                    /* field C type */
my_constant_field,      /* field name */
23 + 17                 /* argument expression */
)

/* my_int_arg tracepoint argument */
ctf_integer(
int,
my_int_arg_field,
my_int_arg
)

/* my_int_arg squared */
ctf_integer(
int,
my_int_arg_field2,
my_int_arg * my_int_arg
)

/* sum of first 4 characters of my_str_arg */
ctf_integer(
int,
sum4,
my_str_arg[0] + my_str_arg[1] +
my_str_arg[2] + my_str_arg[3]
)

/* my_str_arg as string field */
ctf_string(
my_str_arg_field,       /* field name */
my_str_arg              /* argument expression */
)

/* st_size member of st tracepoint argument, hexadecimal */
ctf_integer_hex(
off_t,                  /* field C type */
size_field,             /* field name */
st->st_size             /* argument expression */
)

/* st_size member of st tracepoint argument, as double */
ctf_float(
double,                 /* field C type */
size_dbl_field,         /* field name */
(double) st->st_size    /* argument expression */
)

/* half of my_str_arg string as text sequence */
ctf_sequence_text(
char,                   /* element C type */
half_my_str_arg_field,  /* field name */
my_str_arg,             /* argument expression */
size_t,                 /* length expression C type */
strlen(my_str_arg) / 2  /* length expression */
)
)
)


As you can see, having a custom argument expression for each field makes tracepoints very flexible for tracing a user space C application. This tracepoint definition is reused later in this guide, when actually using tracepoints in a user space application.

Using tracepoint classes
In LTTng-UST, a tracepoint class is a class of tracepoints sharing the same field types and names. A tracepoint instance is one instance of such a declared tracepoint class, with its own event name and tracepoint provider name.

What is documented in
Defining tracepoints is actually how to declare a tracepoint class and define a tracepoint instance at the same time. Without revealing the internals of LTTng-UST too much, it has to be noted that one serialization function is created for each tracepoint class. A serialization function is responsible for serializing the fields of a tracepoint into a sub-buffer when tracing. For various performance reasons, when your situation requires multiple tracepoints with different names, but with the same field layout, the best practice is to manually create a tracepoint class and instantiate as many tracepoint instances as needed. One positive effect of such a design, amongst other advantages, is that all tracepoint instances of the same tracepoint class will reuse
the same serialization function, thus reducing cache pollution.

As an example, here are three tracepoint definitions as we know them:

TRACEPOINT_EVENT(
my_app,
get_account,
TP_ARGS(
int, userid,
size_t, len
),
TP_FIELDS(
ctf_integer(int, userid, userid)
ctf_integer(size_t, len, len)
)
)

TRACEPOINT_EVENT(
my_app,
get_settings,
TP_ARGS(
int, userid,
size_t, len
),
TP_FIELDS(
ctf_integer(int, userid, userid)
ctf_integer(size_t, len, len)
)
)

TRACEPOINT_EVENT(
my_app,
get_transaction,
TP_ARGS(
int, userid,
size_t, len
),
TP_FIELDS(
ctf_integer(int, userid, userid)
ctf_integer(size_t, len, len)
)
)


In this case, three tracepoint classes are created, with one tracepoint instance for each of them:
get_account
,
get_settings
and
get_transaction
. However, they all share the same field names and types. Declaring one tracepoint class and three tracepoint instances of the latter is a better design choice:

/* the tracepoint class */
TRACEPOINT_EVENT_CLASS(
/* tracepoint provider name */
my_app,

/* tracepoint class name */
my_class,

/* arguments */
TP_ARGS(
int, userid,
size_t, len
),

/* fields */
TP_FIELDS(
ctf_integer(int, userid, userid)
ctf_integer(size_t, len, len)
)
)

/* the tracepoint instances */
TRACEPOINT_EVENT_INSTANCE(
/* tracepoint provider name */
my_app,

/* tracepoint class name */
my_class,

/* tracepoint/event name */
get_account,

/* arguments */
TP_ARGS(
int, userid,
size_t, len
)
)
TRACEPOINT_EVENT_INSTANCE(
my_app,
my_class,
get_settings,
TP_ARGS(
int, userid,
size_t, len
)
)
TRACEPOINT_EVENT_INSTANCE(
my_app,
my_class,
get_transaction,
TP_ARGS(
int, userid,
size_t, len
)
)


Of course, all those names and
TP_ARGS()
invocations are redundant, but some C preprocessor magic can solve this:

#define MY_TRACEPOINT_ARGS \
TP_ARGS( \
int, userid, \
size_t, len \
)

TRACEPOINT_EVENT_CLASS(
my_app,
my_class,
MY_TRACEPOINT_ARGS,
TP_FIELDS(
ctf_integer(int, userid, userid)
ctf_integer(size_t, len, len)
)
)

#define MY_APP_TRACEPOINT_INSTANCE(name) \
TRACEPOINT_EVENT_INSTANCE( \
my_app, \
my_class, \
name, \
MY_TRACEPOINT_ARGS \
)

MY_APP_TRACEPOINT_INSTANCE(get_account)
MY_APP_TRACEPOINT_INSTANCE(get_settings)
MY_APP_TRACEPOINT_INSTANCE(get_transaction)


Assigning log levels to tracepoints
Optionally, a log level can be assigned to a defined tracepoint. Assigning different levels of importance to tracepoints can be useful; when controlling tracing sessions, you can choose to only enable tracepoints falling into a specific log level range.

Log levels are assigned to defined tracepoints using the
TRACEPOINT_LOGLEVEL()
macro. The latter must be used after having used
TRACEPOINT_EVENT()
for a given tracepoint. The
TRACEPOINT_LOGLEVEL()
macro has the
following construct:

TRACEPOINT_LOGLEVEL(<provider name>, <tracepoint name>, <log level>)


where the first two arguments are the same as the first two arguments of
TRACEPOINT_EVENT()
and
<log level>
is one of the values given in the LTTng-UST library reference section.

As an example, let's assign a
TRACE_DEBUG_UNIT
log level to our previous tracepoint definition:

TRACEPOINT_LOGLEVEL(my_provider, my_tracepoint, TRACE_DEBUG_UNIT)


Probing the application's source code
Once tracepoints are properly defined within a tracepoint provider, they may be inserted into the user application to be instrumented using the
tracepoint()
macro. Its first argument is the tracepoint provider name and its second is the tracepoint name. The next, optional arguments are defined by the
TP_ARGS()
part of the definition of the tracepoint to use.

As an example, let us again take the following tracepoint definition:

TRACEPOINT_EVENT(
/* tracepoint provider name */
my_provider,

/* tracepoint/event name */
my_first_tracepoint,

/* list of tracepoint arguments */
TP_ARGS(
int, my_integer_arg,
char*, my_string_arg
),

/* list of fields of eventual event  */
TP_FIELDS(
ctf_string(my_string_field, my_string_arg)
ctf_integer(int, my_integer_field, my_integer_arg)
)
)


Assuming this is part of a file named
tp.h
which defines the tracepoint provider and which is included by
tp.c
, here's a complete C application calling this tracepoint (multiple times):

#define TRACEPOINT_DEFINE
#include "tp.h"

int main(int argc, char* argv[])
{
int i;

tracepoint(my_provider, my_first_tracepoint, 23, "Hello, World!");

for (i = 0; i < argc; ++i) {
tracepoint(my_provider, my_first_tracepoint, i, argv[i]);
}

return 0;
}


For each tracepoint provider,
TRACEPOINT_DEFINE
must be defined in exactly one translation unit (C source file) of the user application, before including the tracepoint provider header file. In other words, for a given tracepoint provider, you
cannot define
TRACEPOINT_DEFINE
, and then include its header file in two separate C source files of the same application.
TRACEPOINT_DEFINE
is discussed further in Building/linking
tracepoint providers and the user application.

As another example, remember this definition we wrote in a previous section (comments are stripped):

/* for struct stat */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

TRACEPOINT_EVENT(
my_provider,
my_tracepoint,
TP_ARGS(
int, my_int_arg,
char*, my_str_arg,
struct stat*, st
),
TP_FIELDS(
ctf_integer(int, my_constant_field, 23 + 17)
ctf_integer(int, my_int_arg_field, my_int_arg)
ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
my_str_arg[2] + my_str_arg[3])
ctf_string(my_str_arg_field, my_str_arg)
ctf_integer_hex(off_t, size_field, st->st_size)
ctf_float(double, size_dbl_field, (double) st->st_size)
ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
size_t, strlen(my_str_arg) / 2)
)
)


Here's an example of calling it:

#define TRACEPOINT_DEFINE
#include "tp.h"

int main(void)
{
struct stat s;

stat("/etc/fstab", &s);

tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);

return 0;
}


When viewing the trace, assuming the file size of
/etc/fstab
is 301 bytes, the event generated by the execution of this tracepoint should have the following fields, in this order:

my_constant_field           40
my_int_arg_field            23
my_int_arg_field2           529
sum4_field                  389
my_str_arg_field            "Hello, World!"
size_field                  0x12d
size_dbl_field              301.0
half_my_str_arg_field       "Hello,"


Building/linking tracepoint providers and the user application
This section explains the final step of using LTTng-UST for tracing a user space C application (besides running the application): building and linking tracepoint providers and the application itself.

As discussed above, the macros used by the user-written tracepoint provider header file are useless until actually used to create probe code (global data structures and functions) in a translation unit (C source file). This is accomplished by defining
TRACEPOINT_CREATE_PROBES

in a translation unit and then including the tracepoint provider header file. When
TRACEPOINT_CREATE_PROBES
is defined, macros used and included by the tracepoint provider header will output the actual source code needed by any application using the defined
tracepoints. Defining
TRACEPOINT_CREATE_PROBES
produces the code used to register tracepoint providers when the tracepoint provider package loads.

The other important definition is
TRACEPOINT_DEFINE
. This one creates global, per-tracepoint structures referencing the tracepoint provider data. Those structures are required by the actual functions inserted where
tracepoint()
macros
are placed and need to be defined by the instrumented application.

Both
TRACEPOINT_CREATE_PROBES
and
TRACEPOINT_DEFINE
need to be defined at some places in order to trace a user space C application using LTTng. Although explaining their exact mechanism is beyond the scope of this document, the reason they both exist separately is to allow the tracepoint providers to be packaged as a shared object (dynamically loaded library).

There are two ways to compile and link the tracepoint providers with the application: statically or dynamically. Both methods are covered in the following subsections.

Static linking
With the static linking method, compiled tracepoint providers are copiedinto the target application. There are three ways to do this:

Use one of your existing C source files to create probes.
Create probes in a separate C source file and build it as an object file to be linked with the application (more decoupled).
Create probes in a separate C source file, build it as an object file and archive it to create a static library (more decoupled, more portable).

The first approach is to define
TRACEPOINT_CREATE_PROBES
and include your tracepoint provider(s) header file(s) directly into an existing C source file. Here's an example:

#include <stdlib.h>
#include <stdio.h>
/* ... */

#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE
#include "tp.h"

/* ... */

int my_func(int a, const char* b)
{
/* ... */

tracepoint(my_provider, my_tracepoint, buf, sz, limit, &tt);

/* ... */
}

/* ... */


Again, before including a given tracepoint provider header file,
TRACEPOINT_CREATE_PROBES
and
TRACEPOINT_DEFINE
must be defined in one, and only one, translation unit. Other C source files of the same application may
include
tp.h
to use tracepoints with the
tracepoint()
macro, but must not define
TRACEPOINT_CREATE_PROBES
/
TRACEPOINT_DEFINE
again.

This translation unit may be built as an object file by making sure to add
.
to the include path:

gcc -c -I. file.c

The second approach is to isolate the tracepoint provider code into a separate object file by using a dedicated C source file to create probes:

#define TRACEPOINT_CREATE_PROBES

#include "tp.h"


TRACEPOINT_DEFINE
must be defined by a translation unit of the application. Since we're talking about static linking here, it could as well be defined directly in the file above, before
#include "tp.h"
:

#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE

#include "tp.h"


This is actually what
lttng-gen-tp
does, and is the recommended practice.

Build the tracepoint provider:

gcc -c -I. tp.c

Finally, the resulting object file may be archived to create amore portable tracepoint provider static library:

ar rc tp.a tp.o

Using a static library does have the advantage of centralising the tracepoint provider objects so they can be shared between multiple applications. This way, when the tracepoint provider is modified, the source code changes don't have to be patched into each application's source code tree. The applications need to be relinked after each change, but need not be otherwise recompiled (unless the tracepoint provider's API changes).

Regardless of which method you choose, you end up with an object file (potentially archived) containing the tracepoint providers' assembled code. To link this code with the rest of your application, you must also link with
liblttng-ust
and
libdl
:

gcc -o app tp.o other.o files.o of.o your.o app.o -llttng-ust -ldl

or

gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -ldl

If you're using a BSD system, replace
-ldl
with
-lc
:

gcc -o app tp.a other.o files.o of.o your.o app.o -llttng-ust -lc

The application can be started as usual, e.g.:

./app

The
lttng
command line tool can be used to control tracing.
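
For example, assuming the my_provider tracepoint provider defined earlier, a minimal tracing session could look like the following sketch (the details are covered in Controlling tracing):

lttng create
lttng enable-event --userspace 'my_provider:*'
lttng start
./app
lttng stop
lttng view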

Dynamic linking
The second approach to packaging the tracepoint providers is to use dynamic linking: the library and its member functions are explicitly sought, loaded and unloaded at runtime using
libdl
.

It has to be noted that, for a variety of reasons, the created shared library will be dynamically loaded, as opposed to dynamically linked. The tracepoint provider shared object is, however, linked with
liblttng-ust
, so that
liblttng-ust

is guaranteed to be loaded as soon as the tracepoint provider is. If the tracepoint provider is not loaded, since the application itself is not linked with
liblttng-ust
, the latter is not loaded at all and the tracepoint calls become inert.

The process to create the tracepoint provider shared object is pretty much the same as the static library method, except that:
since the tracepoint provider is not part of the application anymore,
TRACEPOINT_DEFINE
must be defined, for each tracepoint provider, in exactly one translation unit (C source file) of the application;
TRACEPOINT_PROBE_DYNAMIC_LINKAGE
must be defined next to
TRACEPOINT_DEFINE
.

Regarding
TRACEPOINT_DEFINE
and
TRACEPOINT_PROBE_DYNAMIC_LINKAGE
, the recommended practice is to use a separate C source file in your application to define them, and then include the tracepoint provider header files afterwards, e.g.:

#define TRACEPOINT_DEFINE
#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE

/* include the header files of one or more tracepoint providers below */
#include "tp1.h"
#include "tp2.h"
#include "tp3.h"


TRACEPOINT_PROBE_DYNAMIC_LINKAGE
makes the macros included afterwards (by including the tracepoint provider header, which itself includes LTTng-UST headers) aware that the tracepoint provider is to be loaded dynamically and not part of the application's
executable.

The tracepoint provider object file used to create the shared library is built like it is using the static library method, only with the
-fpic
option added:

gcc -c -fpic -I. tp.c

It is then linked as a shared library like this:

gcc -shared -Wl,--no-as-needed -o tp.so -llttng-ust tp.o

As previously stated, this tracepoint provider shared object isn't linked with the user application: it will be loaded manually. This is why the application is built with no mention of this tracepoint provider, but still needs
libdl
:

gcc -o app other.o files.o of.o your.o app.o -ldl

Now, to make LTTng-UST tracing available to the application, the
LD_PRELOAD
environment variable is used to preload the tracepoint provider shared library before the application actually starts:

LD_PRELOAD=/path/to/tp.so ./app


Note: It is not safe to use
dlclose()
on a tracepoint provider shared object that is being actively used for tracing, due to a lack of reference counting from LTTng-UST to the shared object.

For example, statically linking a tracepoint provider to a shared object which is to be dynamically loaded by an application (e.g., a plugin) is not safe: the shared object, which contains the tracepoint provider, could be dynamically closed (
dlclose()
)
at any time by the application.

To instrument a shared object, either:

Statically link the tracepoint provider to the application, or
Build the tracepoint provider as a shared object (following the procedure shown in this section), and preload it when tracing is needed using the
LD_PRELOAD
environment variable.

Your application will still work without this preloading, albeit without LTTng-UST tracing support:

./app

Using LTTng-UST with daemons
Some extra care is needed when using
liblttng-ust
with daemon applications that call
fork()
,
clone()
or BSD's
rfork()
without a following
exec()
family system call. The
liblttng-ust-fork
library must be preloaded for the application.

Example:

LD_PRELOAD=liblttng-ust-fork.so ./app

Or, if you're using a tracepoint provider shared library:

LD_PRELOAD="liblttng-ust-fork.so /path/to/tp.so" ./app

pkg-config
On some distributions, LTTng-UST is shipped with a pkg-config metadata file, so that you may use the
pkg-config
tool:

pkg-config --libs lttng-ust

This will return
-llttng-ust -ldl
on Linux systems.
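
This output may be passed directly to the compiler when linking; for example, reusing the object file names of the static linking example above (a sketch):

gcc -o app tp.o app.o $(pkg-config --libs lttng-ust)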

You may also check the LTTng-UST version using
pkg-config
:

pkg-config --modversion lttng-ust

For more information about pkg-config, seeits manpage.

Using
tracef()

tracef()
is a small LTTng-UST API to avoid defining your own tracepoints and tracepoint providers. The signature of
tracef()
is the same as
printf()
's.

The
tracef()
utility function was developed to make user space tracing super simple, albeit with notable disadvantages compared to custom, full-fledged tracepoint providers:

All generated events have the same provider/event names, respectively
lttng-ust-tracef
and
event
.
There's no static type checking.
The only event field you actually get, named
msg
, is a string potentially containing the values you passed to the function using your own format. This also means that you cannot use filtering with a custom expression at runtime because there are no isolated fields.
Since
tracef()
uses C standard library's
vasprintf()
function in the background to format the strings at runtime, its expected performance is lower than using custom tracepoint providers with typed fields, which do not require a conversion
to a string.

Thus,
tracef()
is useful for quick prototyping and debugging, but should not be considered for any permanent/serious application instrumentation.

To use
tracef()
, first include
<lttng/tracef.h>
in the C source file where you need to insert probes:

#include <lttng/tracef.h>


Use
tracef()
like you would use
printf()
in your source code, e.g.:

/* ... */

tracef("my message, my integer: %d", my_integer);

/* ... */


Link your application with
liblttng-ust
:

gcc -o app app.c -llttng-ust

Execute the application as usual:

./app

Voilà! Use the
lttng
command line tool to control tracing.

LTTng-UST environment variables and special compilation flags
A few special environment variables and compile flags may affect the behavior of LTTng-UST.

LTTng-UST's debugging can be activated by setting the environment variable
LTTNG_UST_DEBUG
to
1
when launching the application. It can also be enabled at compile time by defining
LTTNG_UST_DEBUG
when compiling LTTng-UST (using the
-DLTTNG_UST_DEBUG
compiler option).

The environment variable
LTTNG_UST_REGISTER_TIMEOUT
can be used to specify how long the application should wait for the session daemon's registration done command before proceeding to execute the main program. The timeout value is specified in milliseconds. 0 means don't wait. -1 means wait forever. Setting this environment variable to 0 is recommended for applications with time constraints on the
process startup time.

The default value of
LTTNG_UST_REGISTER_TIMEOUT
(when not defined) is 3000 ms.
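
For example, to launch an application with LTTng-UST debugging output enabled and without waiting for the session daemon's registration done command:

LTTNG_UST_DEBUG=1 LTTNG_UST_REGISTER_TIMEOUT=0 ./app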

The compilation definition
LTTNG_UST_DEBUG_VALGRIND
should be enabled at build time (
-DLTTNG_UST_DEBUG_VALGRIND
) to allow
liblttng-ust
to be used with Valgrind. The side effect of defining
LTTNG_UST_DEBUG_VALGRIND
is that per-CPU buffering is disabled.

C++ application

Because of C++'s cross-compatibility with the C language, C++ applications can be readily instrumented with the LTTng-UST C API.

Follow the
C application user guide above. It should be noted that, in this case, tracepoint providers should have the typical
.cpp
,
.cxx
or
.cc
extension and be built with
g++
instead of
gcc
. This is the easiest way of avoiding linking errors due to symbol name mangling incompatibilities between both languages.
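
For example, a C++ build analogous to the static linking method above could look like this (a sketch; the file names are hypothetical):

g++ -c -I. tp.cpp
g++ -c -I. app.cpp
g++ -o app tp.o app.o -llttng-ust -ldl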

Prebuilt user space tracing helpers

The LTTng-UST package provides a few helpers that one may find useful in some situations. They all work the same way: you must preload the appropriate shared object before running the user application (using the
LD_PRELOAD
environment variable).

The shared objects are normally found in
/usr/lib
.

The currently installed helpers are:

liblttng-ust-libc-wrapper.so
and
liblttng-ust-pthread-wrapper.so
: C standard library and POSIX threads tracing
liblttng-ust-cyg-profile.so
and
liblttng-ust-cyg-profile-fast.so
: function tracing
liblttng-ust-dl.so
: dynamic linker tracing

The following subsections document exactly what the helpers instrument and how to use them.

C standard library and POSIX threads tracing
liblttng-ust-libc-wrapper.so
and
liblttng-ust-pthread-wrapper.so
can add instrumentation to some C standard library and POSIX threads functions, respectively.

The following functions are traceable by
liblttng-ust-libc-wrapper.so
:

TP provider name    TP name             Instrumented function
ust_libc            malloc              malloc()
ust_libc            calloc              calloc()
ust_libc            realloc             realloc()
ust_libc            free                free()
ust_libc            memalign            memalign()
ust_libc            posix_memalign      posix_memalign()
The following functions are traceable by
liblttng-ust-pthread-wrapper.so
:

TP provider name    TP name                    Instrumented function
ust_pthread         pthread_mutex_lock_req     pthread_mutex_lock() (request time)
ust_pthread         pthread_mutex_lock_acq     pthread_mutex_lock() (acquire time)
ust_pthread         pthread_mutex_trylock      pthread_mutex_trylock()
ust_pthread         pthread_mutex_unlock       pthread_mutex_unlock()
All tracepoints have fields corresponding to the arguments of the function they instrument.

To use one or the other with any user application, independently of how the latter is built, do:

LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app

or

LD_PRELOAD=liblttng-ust-pthread-wrapper.so my-app

To use both, do:

LD_PRELOAD="liblttng-ust-libc-wrapper.so liblttng-ust-pthread-wrapper.so" my-app

When the shared object is preloaded, it effectively replaces the functions listed in the above tables by wrappers which add tracepoints and call the replaced functions.

Of course, like any other tracepoint, the ones above need to be enabled in order for LTTng-UST to generate events. This is done using the
lttng
command line tool (see Controlling tracing).
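
For example, the following sketch enables every event of the ust_libc provider in the current tracing session:

lttng enable-event --userspace 'ust_libc:*'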

Function tracing
Function tracing is the recording of which functions are entered and left during the execution of an application. Like with any LTTng event, the precise time at which this happens is also kept.

GCC and clang have an option named
-finstrument-functions
which generates instrumentation calls for entry and exit to
functions. The LTTng-UST function tracing helpers,
liblttng-ust-cyg-profile.so
and
liblttng-ust-cyg-profile-fast.so
, take advantage of this feature to add instrumentation to the two generated functions (which contain
cyg_profile

in their names, hence the shared object's name).

In order to use LTTng-UST function tracing, the translation units to instrument must be built using the
-finstrument-functions
compiler flag.
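
For example, to build one translation unit (here a hypothetical app.c) with the required instrumentation calls:

gcc -c -finstrument-functions app.c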

LTTng-UST function tracing comes in two flavors, each providing different trade-offs:
liblttng-ust-cyg-profile-fast.so
and
liblttng-ust-cyg-profile.so
.

liblttng-ust-cyg-profile-fast.so
is a lightweight variant that should only be used where it can be guaranteed that the complete event stream is recorded without any missing events. Any kind of duplicate information is left out. This version registers the following tracepoints:

TP provider name              TP name       Description/fields
lttng_ust_cyg_profile_fast    func_entry    Function entry
                                            addr: address of the called function
lttng_ust_cyg_profile_fast    func_exit     Function exit

Assuming no event is lost, having only the function addresses on entry is enough for creating a call graph (remember that a recorded event always contains the ID of the CPU that generated it). A tool like
addr2line
may
be used to convert function addresses back to source file names and line numbers.

The other helper,
liblttng-ust-cyg-profile.so
, is a more robust variant which also works for use cases where events might get discarded or not recorded from application startup. In these cases, the trace analyzer needs extra information to be able to reconstruct the program flow. This version registers the following tracepoints:

TP provider name         TP name       Description/fields
lttng_ust_cyg_profile    func_entry    Function entry
                                       addr: address of the called function
                                       call_site: call site address
lttng_ust_cyg_profile    func_exit     Function exit
                                       addr: address of the called function
                                       call_site: call site address

To use one or the other variant with any user application, assuming at least one translation unit of the latter is compiled with the
-finstrument-functions
option, do:

LD_PRELOAD=liblttng-ust-cyg-profile-fast.so my-app

or

LD_PRELOAD=liblttng-ust-cyg-profile.so my-app

It might be necessary to limit the number of source files where
-finstrument-functions
is used to prevent an excessive amount of trace data from being generated at runtime.

Tip: When using GCC, at least, you may use the
-finstrument-functions-exclude-function-list
option to avoid instrumenting entries and exits of specific symbol names.

All events generated from LTTng-UST function tracing are provided on log level
TRACE_DEBUG_FUNCTION
, which is useful to easily enable function tracing events in your tracing session using the
--loglevel-only
option of
lttng enable-event
(see Controlling tracing).
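
For example, the following sketch enables all user space events at exactly that log level:

lttng enable-event --userspace --all --loglevel-only TRACE_DEBUG_FUNCTION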

Dynamic linker tracing
This LTTng-UST helper causes all calls to
dlopen()
and
dlclose()
in the target application to be traced with LTTng.

The helper's shared object,
liblttng-ust-dl.so
, registers the following tracepoints when preloaded:

TP provider name    TP name    Description/fields
ust_baddr           push       dlopen() call
                               baddr: memory base address (where the dynamic linker placed the shared object)
                               sopath: file system path to the loaded shared object
                               size: file size of the loaded shared object
                               mtime: last modification time (seconds since Epoch) of the loaded shared object
ust_baddr           pop        dlclose() call
                               baddr: memory base address

To use this LTTng-UST helper with any user application, independently of how the latter is built, do:

LD_PRELOAD=liblttng-ust-dl.so my-app

Of course, like any other tracepoint, the ones above need to be enabled in order for LTTng-UST to generate events. This is done using the
lttng
command line tool (see Controlling tracing).

Java application

LTTng-UST provides a logging back-end for Java applications using either
java.util.logging
(JUL), or Apache log4j 1.2. This back-end is called the LTTng-UST Java agent, and is responsible for communications with an LTTng session daemon.

Note: The latest stable version of LTTng does not support Log4j 2.

From the user's point of view, once the LTTng-UST Java agent has been initialized, JUL and log4j loggers may be created and used as usual. The agent adds its own handler to the root logger, so that all loggers may generate LTTng events with no effort.

Common JUL/log4j features are supported using the
lttng
tool (see Controlling tracing):

listing all logger names
enabling/disabling events per logger name
JUL/log4j log levels

Here's an example using
java.util.logging
:

import java.util.logging.Logger;
import org.lttng.ust.agent.LTTngAgent;

public class Test
{
private static final int answer = 42;

public static void main(String[] argv) throws Exception
{
// create a logger
Logger logger = Logger.getLogger("jello");

// call this as soon as possible (before logging)
LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();

// log at will!
logger.info("some info");
logger.warning("some warning");
Thread.sleep(500);
logger.finer("finer information; the answer is " + answer);
Thread.sleep(123);
logger.severe("error!");

// not mandatory, but cleaner
lttngAgent.dispose();
}
}


Here's the same example, this time using log4j:

import org.apache.log4j.Logger;
import org.apache.log4j.BasicConfigurator;
import org.lttng.ust.agent.LTTngAgent;

public class Test
{
private static final int answer = 42;

public static void main(String[] argv) throws Exception
{
// create and configure a logger
Logger logger = Logger.getLogger(Test.class);
BasicConfigurator.configure();

// call this as soon as possible (before logging)
LTTngAgent lttngAgent = LTTngAgent.getLTTngAgent();

// log at will!
logger.info("some info");
logger.warn("some warning");
Thread.sleep(500);
logger.debug("debug information; the answer is " + answer);
Thread.sleep(123);
logger.error("error!");
logger.fatal("fatal error!");

// not mandatory, but cleaner
lttngAgent.dispose();
}
}


The LTTng-UST Java agent classes are packaged in a JAR file named
liblttng-ust-agent.jar
. It is typically located in
/usr/lib/lttng/java
. To compile the snippets above (saved as
Test.java
), do:

javac -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP Test.java

where
$LOG4JCP
is the log4j 1.2 JAR file path, if you're using log4j.

You can run the resulting compiled class like this:

java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:$LOG4JCP:. Test
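On the tracing side, the events of the jello logger from the JUL example could be recorded with a session like the following sketch (see Controlling tracing); run the Java application between lttng start and lttng stop:

lttng create
lttng enable-event --jul jello
lttng start
java -cp /usr/lib/lttng/java/liblttng-ust-agent.jar:. Test
lttng stop
lttng view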


Note: OpenJDK 7 is used for development and continuous integration, thus this version is directly supported. However, the LTTng-UST Java agent has also
been tested with OpenJDK 6.

Linux kernel

The Linux kernel can be instrumented for LTTng tracing, either its core source code or a kernel module. It has to be noted that Linux is readily traceable using LTTng since many parts of its source code are already instrumented: this is the job of the upstream LTTng-modules package. This section presents how to add LTTng instrumentation where it does not currently exist and how to instrument custom kernel modules.

All LTTng instrumentation in the Linux kernel is based on an existing infrastructure which bears the name of its main macro,
TRACE_EVENT()
. This macro is used to define tracepoints, each tracepoint having a name, usually with the
subsys_name

format,
subsys
being the subsystem name and
name
the specific event name.

Tracepoints defined with
TRACE_EVENT()
may be inserted anywhere in the Linux kernel source code, after which callbacks, called probes, may be registered to execute some action when a tracepoint is executed. This mechanism is directly used by ftrace and perf, but cannot be used as is by LTTng: an adaptation layer is added to satisfy LTTng's specific needs.

With that in mind, this documentation does not cover the
TRACE_EVENT()
format and how to use it, but it is mandatory to understand it and use it to instrument Linux for LTTng. A series of LWN articles explain
TRACE_EVENT()

in detail: part 1, part 2, and part 3. Once you master
TRACE_EVENT()
enough for your use case, continue reading this section so that you can add the LTTng adaptation layer of instrumentation.

This section first discusses the general method of instrumenting the Linux kernel for LTTng. This method is then reused for the specific case of instrumenting a kernel module.

Instrumenting the Linux kernel for LTTng
This section explains strictly how to add custom LTTng instrumentation to the Linux kernel. It does not explain how the macros actually work or the internal mechanics of the tracer.

You should have a Linux kernel source code tree to work with. Throughout this section, all file paths are relative to the root of this tree unless otherwise stated.

You will need a copy of the LTTng-modules Git repository:

git clone git://git.lttng.org/lttng-modules.git

The steps to add custom LTTng instrumentation to a Linux kernel involve defining and using the mainline
TRACE_EVENT()
tracepoints first, then writing and using the LTTng adaptation layer.

Defining/using tracepoints with mainline
TRACE_EVENT()
infrastructure

The first step is to define tracepoints using the mainline Linux
TRACE_EVENT()
macro and insert tracepoints where you want them. Your tracepoint definitions reside in a header file in
include/trace/events
. If you're adding tracepoints
to an existing subsystem, edit its appropriate header file.

As an example, the following header file (let's call it
include/trace/events/hello.h
) defines one tracepoint using
TRACE_EVENT()
:

/* subsystem name is "hello" */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM hello

#if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_HELLO_H

#include <linux/tracepoint.h>

TRACE_EVENT(
/* "hello" is the subsystem name, "world" is the event name */
hello_world,

/* tracepoint function prototype */
TP_PROTO(int foo, const char* bar),

/* arguments for this tracepoint */
TP_ARGS(foo, bar),

/* LTTng doesn't need those */
TP_STRUCT__entry(),
TP_fast_assign(),
TP_printk("", 0)
);

#endif

/* this part must be outside protection */
#include <trace/define_trace.h>


Notice that we don't use any of the last three arguments: they are left empty here because LTTng doesn't need them. You would only fill
TP_STRUCT__entry()
,
TP_fast_assign()
and
TP_printk()
if you were to also use this tracepoint
for ftrace/perf.

Once this is done, you may place calls to
trace_hello_world()
wherever you want in the Linux source code. As an example, let us place such a tracepoint in the
usb_probe_device()
static function(
drivers/usb/core/driver.c
):

/* called from driver core with dev locked */
static int usb_probe_device(struct device *dev)
{
struct usb_device_driver *udriver = to_usb_device_driver(dev->driver);
struct usb_device *udev = to_usb_device(dev);
int error = 0;

trace_hello_world(udev->devnum, udev->product);

/* ... */
}


This tracepoint should fire every time a USB device is plugged in.

At the top of
driver.c
, we need to include our actual tracepoint definition and, in this case (one place per subsystem), define
CREATE_TRACE_POINTS
, which will create our tracepoint:

/* ... */

#include "usb.h"

#define CREATE_TRACE_POINTS
#include <trace/events/hello.h>

/* ... */


Build your custom Linux kernel. In order to use LTTng, make sure the following kernel configuration options are enabled:

CONFIG_MODULES
(loadable module support)
CONFIG_KALLSYMS
(load all symbols for debugging/kksymoops)
CONFIG_HIGH_RES_TIMERS
(high resolution timer support)
CONFIG_TRACEPOINTS
(kernel tracepoint instrumentation)

Boot the custom kernel. The directory
/sys/kernel/debug/tracing/events/hello
should exist if everything went right, with a
hello_world
subdirectory.

Adding the LTTng adaptation layer
The steps to write the LTTng adaptation layer are, in your LTTng-modules copy's source code tree:

In
instrumentation/events/lttng-module
, add a header
subsys.h
for your custom subsystem
subsys
and write your tracepoint definitions using LTTng-modules macros in it. Those macros look like the mainline kernel equivalents, but they present subtle, yet important differences.
In
probes
, create the C source file of the LTTng probe kernel module for your subsystem. It should be named
lttng-probe-subsys.c
.
Edit
probes/Makefile
so that the LTTng-modules project builds your custom LTTng probe kernel module.
Build and install LTTng kernel modules.

Following our
hello_world
event example, here's the content of
instrumentation/events/lttng-module/hello.h
:

#undef TRACE_SYSTEM
#define TRACE_SYSTEM hello

#if !defined(_TRACE_HELLO_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_HELLO_H

#include <linux/tracepoint.h>

LTTNG_TRACEPOINT_EVENT(
/* format identical to mainline version for those */
hello_world,
TP_PROTO(int foo, const char* bar),
TP_ARGS(foo, bar),

/* possible differences */
TP_STRUCT__entry(
__field(int, my_int)
__field(char, char0)
__field(char, char1)
__string(product, bar)
),

/* notice the use of tp_assign()/tp_strcpy() and no semicolons */
TP_fast_assign(
tp_assign(my_int, foo)
tp_assign(char0, bar[0])
tp_assign(char1, bar[1])
tp_strcpy(product, bar)
),

/* This one is actually not used by LTTng either, but must be
* present for the moment.
*/
TP_printk("", 0)

/* no semicolon after this either */
)

#endif

/* other difference: do NOT include <trace/define_trace.h> */
#include "../../../probes/define_trace.h"


Some possible entries for
TP_STRUCT__entry()
and
TP_fast_assign()
, in the case of LTTng-modules, are shown in the LTTng-modules reference section.

The best way to learn how to use the above macros is to inspect existing LTTng tracepoint definitions in
instrumentation/events/lttng-module
header files. Compare them with the Linux kernel mainline versions in
include/trace/events
.

The next step is writing the LTTng probe kernel module C source file. This one is named
lttng-probe-subsys.c
in
probes
. You may always use the following template:

#include <linux/module.h>
#include "../lttng-tracer.h"

/* Build time verification of mismatch between mainline TRACE_EVENT()
* arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
*/
#include <trace/events/hello.h>

/* create LTTng tracepoint probes */
#define LTTNG_PACKAGE_BUILD
#define CREATE_TRACE_POINTS
#define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module

#include "../instrumentation/events/lttng-module/hello.h"

MODULE_LICENSE("GPL and additional rights");
MODULE_AUTHOR("Your name <your-email>");
MODULE_DESCRIPTION("LTTng hello probes");
MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
__stringify(LTTNG_MODULES_MINOR_VERSION) "."
__stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
LTTNG_MODULES_EXTRAVERSION);


Just replace
hello
with your subsystem name. In this example,
<trace/events/hello.h>
, which is the original mainline tracepoint definition header, is included for verification purposes: the LTTng-modules build system is able to emit an error at build time when the arguments of the mainline
TRACE_EVENT()
definitions do not match the ones of the LTTng-modules adaptation layer (
LTTNG_TRACEPOINT_EVENT()
).

Edit
probes/Makefile
and add your new kernel module object next to existing ones:

# ...

obj-m += lttng-probe-module.o
obj-m += lttng-probe-power.o

obj-m += lttng-probe-hello.o

# ...


Time to build! Point to your custom Linux kernel source tree using the
KERNELDIR
variable:

make KERNELDIR=/path/to/custom/linux

Finally, install modules:

sudo make modules_install

Tracing
The
Controlling tracing section explains how to use the
lttng
tool to create and control tracing sessions. Although the
lttng
tool will load the appropriate
known LTTng kernel modules when needed (by launching
root
's session daemon), it won't load your custom
lttng-probe-hello
module by default. You need to manually start an LTTng session daemon as
root
and use the
--extra-kmod-probes

option to append your custom probe module to the default list:

sudo pkill -u root lttng-sessiond
sudo lttng-sessiond --extra-kmod-probes=hello

The first command makes sure any existing instance is killed. If you're not interested in using the default probes, or if you only want to use a few of them, you could use
--kmod-probes
instead, which specifies an absolute list:

sudo lttng-sessiond --kmod-probes=hello,ext4,net,block,signal,sched

Confirm the custom probe module is loaded:

lsmod | grep lttng_probe_hello

The
hello_world
event should appear in the list when doing

lttng list --kernel | grep hello

You may now create an LTTng tracing session, enable the
hello_world
kernel event (and others if you wish) and start tracing:

sudo lttng create my-session
sudo lttng enable-event --kernel hello_world
sudo lttng start

Plug a few USB devices, then stop tracing and inspect the trace (if Babeltrace is installed):

sudo lttng stop
sudo lttng view

Here's a sample output:

[15:30:34.835895035] (+?.?????????) hostname hello_world: { cpu_id = 1 }, { my_int = 8, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }
[15:30:42.262781421] (+7.426886386) hostname hello_world: { cpu_id = 1 }, { my_int = 9, char0 = 80, char1 = 97, product = "Patriot Memory" }
[15:30:48.175621778] (+5.912840357) hostname hello_world: { cpu_id = 1 }, { my_int = 10, char0 = 68, char1 = 97, product = "DataTraveler 2.0" }


Two USB flash drives were used for this test.

You may change your LTTng custom probe, rebuild it and reload it at any time when not tracing. Make sure you remove the old module (either by killing the root LTTng session daemon which loaded the module in the first place, or by using
modprobe --remove
directly) before loading the updated one.
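
For example (the module name follows the lttng-probe-hello example used above):

sudo modprobe --remove lttng-probe-hello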

Instrumenting an out-of-tree Linux kernel module for
LTTng

Instrumenting a custom Linux kernel module for LTTng follows the exact same steps as adding instrumentation to the Linux kernel itself, the only difference being that your mainline tracepoint definition header doesn't reside in the mainline source tree, but in your kernel module source tree.

The only reference to this mainline header is in the LTTng custom probe's source code (
probes/lttng-probe-hello.c
in our example), for build time verification:

/* ... */

/* Build time verification of mismatch between mainline TRACE_EVENT()
* arguments and LTTng adaptation layer LTTNG_TRACEPOINT_EVENT() arguments.
*/
#include <trace/events/hello.h>

/* ... */


The preferred, flexible way to include your module's mainline tracepoint definition header is to put it in a specific directory relative to your module's root, e.g.,
tracepoints
, and include it relative to your module's root directory in the LTTng custom probe's source:

#include <tracepoints/hello.h>


You may then build LTTng-modules by adding your module's root directory as an include path to the extra C flags:

make ccflags-y=-I/path/to/kernel/module KERNELDIR=/path/to/custom/linux

Using
ccflags-y
allows you to move your kernel module to another directory and rebuild the LTTng-modules project with no change to source files.

LTTng logger ABI

The
lttng-tracer
Linux kernel module, installed by the LTTng-modules package, creates a special LTTng logger ABI file
/proc/lttng-logger
when loaded. Writing text data to this file generates an LTTng kernel domain event named
lttng_logger
.

Unlike other kernel domain events,
lttng_logger
may be enabled by any user, not only root users or members of the tracing group.

To use the LTTng logger ABI, simply write a string to
/proc/lttng-logger
:

echo -n 'Hello, World!' > /proc/lttng-logger

The
msg
field of the
lttng_logger
event contains the recordedmessage.

Note: Messages are split into chunks of 1024 bytes.

The LTTng logger ABI is a quick and easy way to trace some events from user space through the kernel tracer. However, it is much more basic than LTTng-UST: it's slower (it involves a system call round-trip to the kernel) and it only supports logging strings. The LTTng logger ABI is particularly useful for recording logs as LTTng traces from shell scripts, potentially combining them with other Linux kernel/user space events.
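
For example, the following sketch (assuming a root session daemon is running and your user is allowed to interact with it) records lttng_logger events emitted from a shell:

lttng create
lttng enable-event --kernel lttng_logger
lttng start
echo -n 'Hello, World!' > /proc/lttng-logger
lttng stop
lttng view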

Advanced techniques

This section presents some advanced techniques related to LTTng instrumentation.

Instrumenting a 32-bit application on a 64-bit system
In order to trace a 32-bit application running on a 64-bit system, LTTng must use a dedicated 32-bit consumer daemon. This section discusses how to build that daemon (which is not part of the default 64-bit LTTng build) and the LTTng 32-bit tracing libraries, and how to instrument a 32-bit application in that context.

Make sure you install all 32-bit versions of LTTng dependencies. Their names can be found in the
README.md
files of each LTTng package source. How to find and install them will vary depending on your target Linux distribution.
gcc-multilib

is a common package name for the multilib version of GCC, which you will also need.

The following packages will be built for 32-bit support on a 64-bit system: Userspace RCU, LTTng-UST and LTTng-tools.

Building 32-bit Userspace RCU
Follow this:

git clone git://git.urcu.so/urcu.git
cd urcu
./bootstrap
./configure --libdir=/usr/lib32 CFLAGS=-m32
make
sudo make install
sudo ldconfig

The
-m32
C compiler flag creates 32-bit object files and
--libdir
indicates where to install the resulting libraries.

Building 32-bit LTTng-UST
Follow this:

git clone http://git.lttng.org/lttng-ust.git
cd lttng-ust
./bootstrap
./configure --prefix=/usr \
            --libdir=/usr/lib32 \
            CFLAGS=-m32 CXXFLAGS=-m32 \
            LDFLAGS=-L/usr/lib32
make
sudo make install
sudo ldconfig

-L/usr/lib32
is required for the build to find the 32-bit versions of Userspace RCU and other dependencies.

Note: Depending on your Linux distribution, 32-bit libraries could be installed at a different location than
/usr/lib32
. For example, Debian is known to install some 32-bit libraries in
/usr/lib/i386-linux-gnu
.

In this case, make sure to set
LDFLAGS
to all the relevant 32-bit library paths, e.g.,
LDFLAGS="-L/usr/lib32 -L/usr/lib/i386-linux-gnu"
.

Note: You may add options to
./configure
if you need them, e.g., for Java and SystemTap support. Look at
./configure --help
for more information.

Building 32-bit LTTng-tools
Since the host is a 64-bit system, most 32-bit binaries and libraries of LTTng-tools are not needed; the host will use their 64-bit counterparts. The required step here is building and installing a 32-bit consumer daemon.

Follow this:

git clone http://git.lttng.org/lttng-tools.git
cd lttng-tools
./bootstrap
./configure --prefix=/usr \
            --libdir=/usr/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
            LDFLAGS=-L/usr/lib32
make
cd src/bin/lttng-consumerd
sudo make install
sudo ldconfig

The above commands build the whole LTTng-tools project as 32-bit applications, but only install the 32-bit consumer daemon.

Building 64-bit LTTng-tools
Finally, you need to build a 64-bit version of LTTng-tools which is aware of the 32-bit consumer daemon previously built and installed:

make clean./bootstrap./configure --prefix=/usr \
--with-consumerd32-libdir=/usr/lib32 \
--with-consumerd32-bin=/usr/lib32/lttng/libexec/lttng-consumerd
makesudo make installsudo ldconfig

Henceforth, the 64-bit session daemon will automatically find the32-bit consumer daemon if required.

Building an instrumented 32-bit C application
Let us reuse the Hello world example of Tracing your own user application (Getting started chapter).

The instrumentation process is unaltered.

First, a typical 64-bit build (assuming you're running a 64-bit system):

gcc -o hello64 -I. hello.c hello-tp.c -ldl -llttng-ust

Now, a 32-bit build:

gcc -o hello32 -I. -m32 hello.c hello-tp.c -L/usr/lib32 \
-ldl -llttng-ust -Wl,-rpath,/usr/lib32

The -rpath option, passed to the linker, will make the dynamic loader check for libraries in /usr/lib32 before looking in its default paths, where it should find the 32-bit version of liblttng-ust.
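To double-check that the 32-bit binary will resolve its tracing libraries from /usr/lib32 rather than the default 64-bit paths, you can inspect its dynamic dependencies; this is a hedged sketch and the exact paths vary between distributions:

ldd ./hello32 | grep lttng-ust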

Running 32-bit and 64-bit versions of an instrumented C application
Now, both 32-bit and 64-bit versions of the Hello world example above can be traced in the same tracing session. Use the lttng tool as usual to create a tracing session and start tracing:

lttng create session-3264
lttng enable-event -u -a
lttng start
./hello32
./hello64
lttng stop

Use lttng view to verify that both processes were successfully traced.

Controlling tracing

Once you're in possession of software that is properly instrumented for LTTng tracing, be it thanks to the built-in LTTng probes for the Linux kernel, a custom user application or a custom Linux kernel, all that is left is actually tracing it. As a user, you control LTTng tracing using a single command line interface: the lttng tool. This tool uses liblttng-ctl behind the scenes to connect to and communicate with session daemons. LTTng session daemons may either be started manually (lttng-sessiond) or automatically by the lttng command when needed. Trace data may be forwarded to the network and used elsewhere using an LTTng relay daemon (lttng-relayd).

The manpages of lttng, lttng-sessiond and lttng-relayd are pretty complete, thus this section is not an online copy of them (we leave that content for the Online LTTng manpages section). This section is rather a tour of LTTng features through practical examples and tips.

If not already done, make sure you understand the core concepts and how LTTng components connect together by reading the Understanding LTTng chapter; this section assumes you are familiar with them.

Creating and destroying tracing sessions

Whatever you want to do with lttng, it has to happen inside a tracing session, created beforehand. A session, in general, is a per-user container of state. A tracing session is no different; it keeps a specific state of things like:
session name
enabled/disabled channels with associated parameters
enabled/disabled events with associated log levels and filters
context information added to channels
tracing activity (started or stopped)

and more.

A single user may have many active tracing sessions. LTTng session daemons are the ultimate owners and managers of tracing sessions. For user space tracing, each user has their own session daemon. Since Linux kernel tracing requires root privileges, only root's session daemon may enable and trace kernel events. However, lttng has a --group option (which is passed to lttng-sessiond when starting it) to specify the name of a tracing group which selected users may be part of to be allowed to communicate with root's session daemon. By default, the tracing group name is tracing.
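As a hedged sketch, here is one way to set this up (the user name alice is hypothetical, and the tracing group may already exist on your system):

sudo groupadd tracing
sudo usermod -aG tracing alice
sudo lttng-sessiond --daemonize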

To create a tracing session, do:

lttng create my-session

This will create a new tracing session named my-session and make it the current one. If you don't specify any name (calling only lttng create), your tracing session will be named auto. Traces are written in ~/lttng-traces/session- followed by the tracing session's creation date/time by default, where session is the tracing session name. To save them at a different location, use the --output option:

lttng create --output /tmp/some-directory my-session

You may create as many tracing sessions as you wish:

lttng create other-session
lttng create yet-another-session

You may view all existing tracing sessions using the list command:

lttng list

The state of the current tracing session is kept in ~/.lttngrc. Each invocation of lttng reads this file to set its current tracing session name so that you don't have to specify a session name for each command. You could edit this file manually, but the preferred way to set the current tracing session is to use the set-session command:

lttng set-session other-session

Most lttng commands accept a --session option to specify the name of the target tracing session.

Any existing tracing session may be destroyed using the destroy command:

lttng destroy my-session

Providing no argument to lttng destroy will destroy the current tracing session. Destroying a tracing session will stop any tracing running within it. Destroying a tracing session frees resources acquired by the session daemon and tracer side, making sure to flush all trace data.

You can't do much with LTTng using only the create, set-session and destroy commands of lttng, but it is essential to know them in order to control LTTng tracing, which always happens within the scope of a tracing session.
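As a quick recap, here is a hedged sketch of the session lifecycle covered so far (session and directory names are arbitrary):

lttng create demo-session --output /tmp/demo-traces
lttng list
lttng set-session demo-session
lttng destroy demo-session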

Enabling and disabling events

Inside a tracing session, individual events may be enabled or disabledso that tracing them may or may not generate trace data.

We sometimes use the term event metonymically throughout this text to refer to a specific condition, or rule, that could lead, when satisfied, to an actual occurring event (a point at a specific position in source code/binary program, logical processor and time, capturing some payload) being recorded as trace data. This specific condition is composed of:

A domain (kernel, user space, java.util.logging, or log4j) (required).
One or many instrumentation points in source code or binary program (tracepoint name, address, symbol name, function name, logger name, etc.) to be executed (required).
A log level (each instrumentation point declares its own log level) or log level range to match (optional; only valid for the user space domain).
A custom user expression, or filter, that must evaluate to true when a tracepoint is executed (optional; only valid for the user space domain).

All conditions are specified using arguments passed to the enable-event command of the lttng tool.

Condition 1 is specified using either --kernel/-k (kernel), --userspace/-u (user space), --jul/-j (JUL), or --log4j/-l (log4j). Exactly one of those four arguments must be specified.

Condition 2 is specified using one of:

--tracepoint: tracepoint
--probe: dynamic probe (address, symbol name or combination of both in a binary program; only valid for the kernel domain)
--function: function entry/exit (address, symbol name or combination of both in a binary program; only valid for the kernel domain)
--syscall: system call entry/exit (only valid for the kernel domain)

When none of the above is specified, enable-event defaults to using --tracepoint.

Condition 3 is specified using one of:

--loglevel: log level range from 0 to a specific log level
--loglevel-only: specific log level

See lttng enable-event --help for the complete list of log level names.

Condition 4 is specified using the --filter option. This filter is a C-like expression, potentially reading real-time values of event fields, that has to evaluate to true for the condition to be satisfied. Event fields are read using plain identifiers while context fields must be prefixed with $ctx.. See lttng enable-event --help for all usage details.

The aforementioned arguments are combined to create and enable events. Each unique combination of arguments leads to a different enabled event. The log level and filter arguments are optional, their default values being respectively all log levels and a filter which always returns true.

Here are a few examples (you must create a tracing session first):

lttng enable-event -u --tracepoint my_app:hello_world
lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_WARNING
lttng enable-event -u --tracepoint 'my_other_app:*'
lttng enable-event -u --tracepoint my_app:foo_bar \
      --filter 'some_field <= 23 && !other_field'
lttng enable-event -k --tracepoint sched_switch
lttng enable-event -k --tracepoint gpio_value
lttng enable-event -k --function usb_probe_device usb_probe_device
lttng enable-event -k --syscall --all

The wildcard symbol, *, matches anything and may only be used at the end of the string when specifying a tracepoint. Make sure to use it between single quotes in your favorite shell to avoid undesired shell expansion.

System call events can be enabled individually, too:

lttng enable-event -k --syscall open
lttng enable-event -k --syscall read
lttng enable-event -k --syscall fork,chdir,pipe

The complete list of available system call events can be obtained using

lttng list --kernel --syscall

You can see a list of events (enabled or disabled) using

lttng list some-session

where some-session is the name of the desired tracing session.

What you're actually doing when enabling events with specific conditions is creating a whitelist of traceable events for a given channel. Thus, the following case presents redundancy:

lttng enable-event -u --tracepoint my_app:hello_you
lttng enable-event -u --tracepoint my_app:hello_you --loglevel TRACE_DEBUG

The second command, matching a log level range, is useless since the first command enables all tracepoints matching the same name, my_app:hello_you.

Disabling an event is simpler: you only need to provide the event name to the disable-event command:

lttng disable-event --userspace my_app:hello_you

This name has to match a name previously given to enable-event (it has to be listed in the output of lttng list some-session). The * wildcard is supported, as long as you also used it in a previous enable-event invocation.

Disabling an event does not add it to some blacklist: it simply removes it from its channel's whitelist. This is why you cannot disable an event which wasn't previously enabled.

A disabled event will not generate any trace data, even if all its specified conditions are met.

Events may be enabled and disabled at will, either when LTTng tracers are active or not. Events may be enabled before a user space application is even started.
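For instance, here is a hedged sketch of enabling user space events before the instrumented application is even launched (my_app and its tracepoints are hypothetical names):

lttng create pre-enable-demo
lttng enable-event -u 'my_app:*'
lttng start
./my_app
lttng stop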

Basic tracing session control

Once you have created a tracing session and enabled one or more events, you may activate the LTTng tracers for the current tracing session at any time:

lttng start

Subsequently, you may stop the tracers:

lttng stop

LTTng is very flexible: user space applications may be launched before or after the tracers are started. Events will only be recorded if they are properly enabled and if they occur while tracers are started.

A tracing session name may be passed to both the start and stop commands to start/stop tracing a session other than the current one.
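For example, assuming a tracing session named other-session exists, a hedged sketch:

lttng start other-session
lttng stop other-session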

Enabling and disabling channels

As mentioned in the Understanding LTTng chapter, enabled events are contained in a specific channel, itself contained in a specific tracing session. A channel is a group of events with tunable parameters (event loss mode, sub-buffer size, number of sub-buffers, trace file sizes and count, etc.). A given channel may only be responsible for enabled events belonging to one domain: either kernel or user space.

If you have only used the create, enable-event and start/stop commands of the lttng tool so far, one or two channels were automatically created for you (one for the kernel domain and/or one for the user space domain). The default channels are both named channel0; channels from different domains may have the same name.

The current channels of a given tracing session can be viewed with

lttng list some-session

where some-session is the name of the desired tracing session.

To create and enable a channel, use the enable-channel command:

lttng enable-channel --kernel my-channel

This will create a kernel domain channel named my-channel with default parameters in the current tracing session.

Note: Because of a current limitation, all channels must be created prior to beginning tracing in a given tracing session, i.e. before the first time you do lttng start.

Since a channel is automatically created by enable-event only for the specified domain, you cannot, for example, enable a kernel domain event, start tracing and then enable a user space domain event, because no user space channel exists yet and it's too late to create one.

For this reason, make sure to configure your channels properly before starting the tracers for the first time!
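For example, here is a hedged sketch of preparing channels for both domains up front, before the first lttng start (channel names are arbitrary and my_app is a hypothetical instrumented application):

lttng create channels-demo
lttng enable-channel --kernel k-channel
lttng enable-channel --userspace u-channel
lttng enable-event --kernel --channel k-channel sched_switch
lttng enable-event --userspace --channel u-channel 'my_app:*'
lttng start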

Here's another example:

lttng enable-channel --userspace --session other-session --overwrite \
--tracefile-size 1048576 1mib-channel

This will create a user space domain channel named 1mib-channel in the tracing session named other-session that loses the oldest events by overwriting previously recorded events with new ones (instead of the default mode of discarding the newest events) and saves trace files with a maximum size of 1 MiB each.

Note that channels may also be created using the --channel option of the enable-event command when the provided channel name doesn't exist for the specified domain:

lttng enable-event --kernel --channel some-channel sched_switch

If no kernel domain channel named some-channel existed before calling the above command, it would be created with default parameters.

You may enable the same event in two different channels:

lttng enable-event --userspace --channel my-channel app:tp
lttng enable-event --userspace --channel other-channel app:tp

If both channels are enabled, an occurring app:tp event will generate two recorded events, one for each channel.

Disabling a channel is done with the disable-channel command:

lttng disable-channel --kernel some-channel

The state of a channel precedes the individual states of events within it: events belonging to a disabled channel, even if they are enabled, won't be recorded.

Fine-tuning channels
There are various parameters that may be fine-tuned with the enable-channel command. They are well documented in the manpage of lttng and in the Channel section of the Understanding LTTng chapter. For basic tracing needs, their default values should be just fine, but here are a few examples to break the ice.

As the frequency of recorded events increases (either because the event throughput is actually higher or because you enabled more events than usual), event loss might be experienced. Since LTTng never waits, by design, for sub-buffer space availability (it is a non-blocking tracer), when a sub-buffer is full and no empty sub-buffers are left, there are two possible outcomes: either the new events that do not fit are rejected, or they start replacing the oldest recorded events. The choice of which algorithm to use is a per-channel parameter, the default being to discard the newest events until there is some space left. If your situation always needs the latest events at the expense of writing over the oldest ones, create a channel with the --overwrite option:

lttng enable-channel --kernel --overwrite my-channel

When an event is lost, it means no space was available in any sub-buffer to accommodate it. Thus, if you want to cope with sporadic high event throughput situations and avoid losing events, you need to allocate more room for storing them in memory. This can be done by either increasing the size of sub-buffers or by adding sub-buffers. The following example creates a user space domain channel with 16 sub-buffers of 512 kiB each:

lttng enable-channel --userspace --num-subbuf 16 --subbuf-size 512k big-channel

Both values need to be powers of two, otherwise they are rounded up to the next power of two.

Two other interesting parameters of enable-channel are --tracefile-size and --tracefile-count, which respectively limit the size of each trace file and their count for a given channel. When the number of written trace files reaches its limit for a given channel-CPU pair, the next trace file will overwrite the very first one. The following example creates a kernel domain channel with a maximum of three trace files of 1 MiB each:

lttng enable-channel --kernel --tracefile-size 1M --tracefile-count 3 my-channel

An efficient way to make sure lots of events are generated is enabling all kernel events in this channel and starting the tracer:

lttng enable-event --kernel --all --channel my-channel
lttng start

After a few seconds, look at the trace files in your tracing session output directory. For two CPUs, it should look like:

my-channel_0_0    my-channel_1_0
my-channel_0_1    my-channel_1_1
my-channel_0_2    my-channel_1_2


Amongst the files above, you might see one in each group with a size lower than 1 MiB: they are the files currently being written.

Since all those small files are valid LTTng trace files, LTTng trace viewers may read them. It is the viewer's responsibility to properly merge the streams so as to present an ordered list to the user. Babeltrace merges LTTng trace files correctly and is fast at doing it.
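For example, a hedged sketch of reading such a trace with Babeltrace; the output directory name is hypothetical, as yours will contain the actual creation date/time:

babeltrace ~/lttng-traces/my-session-20150805-144400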

Adding some context to channels

If you read all the sections of Controlling tracing so far, you should be able to create tracing sessions, create and enable channels and events within them, and start/stop the LTTng tracers. Event fields recorded in trace files provide important information about occurring events, but sometimes external context may help you solve a problem faster. This section discusses how to add context information to events of a specific channel using the lttng tool.

There are various available context values which can accompany events recorded by LTTng, for example:

process information:
    identifier (PID)
    name
    priority
    scheduling priority (niceness)
    thread identifier (TID)
the hostname of the system on which the event occurred
plenty of performance counters using perf:
    CPU cycles, stalled cycles, idle cycles, etc.
    cache misses
    branch instructions, misses, loads, etc.
    CPU faults
    etc.

The full list is available in the output of lttng add-context --help. Some of them are reserved for a specific domain (kernel or user space) while others are available for both.

To add context information to one or all channels of a given tracing session, use the add-context command:

lttng add-context --userspace --type vpid --type perf:thread:cpu-cycles

The above example adds the virtual process identifier and per-thread CPU cycles count values to all recorded user space domain events of the current tracing session. Use the --channel option to select a specific channel:

lttng add-context --kernel --channel my-channel --type tid

adds the thread identifier value to all recorded kernel domain events in the channel my-channel of the current tracing session.

Beware that context information cannot be removed from channels once it's added for a given tracing session.

Saving and loading tracing session configurations

Configuring a tracing session may be long: creating and enabling channels with specific parameters, enabling kernel and user space domain events with specific log levels and filters, adding context to some channels, etc. If you're going to use LTTng to solve real-world problems, chances are you're going to have to record events using the same tracing session setup over and over, modifying a few variables each time in your instrumented program or environment. To avoid constant tracing session reconfiguration, the lttng tool is able to save and load tracing session configurations to/from XML files.

To save a given tracing session configuration, do:

lttng save my-session

where my-session is the name of the tracing session to save. Tracing session configurations are saved to ~/.lttng/sessions by default; use the --output-path option to change this destination directory.

All configuration parameters are saved:

tracing session name
trace data output path
channels with their state and all their parameters
context information added to channels
events with their state, log level and filter
tracing activity (started or stopped)

To load a tracing session, simply do:

lttng load my-session

or, if you used a custom path:

lttng load --input-path /path/to/my-session.lttng

Your saved tracing session will be restored as if you just configured it manually.

Sending trace data over the network

The possibility of sending trace data over the network comes as a built-in feature of LTTng-tools. For this to be possible, an LTTng relay daemon must be executed and listening on the machine where trace data is to be received, and the user must create a tracing session using appropriate options to forward trace data to the remote relay daemon.

The relay daemon listens on two different TCP ports: one for control information and the other for actual trace data.

Starting the relay daemon on the remote machine is as easy as:

lttng-relayd

This will make it listen on its default ports: 5342 for control and 5343 for trace data. The --control-port and --data-port options may be used to specify different ports.

Traces written by lttng-relayd are written to ~/lttng-traces/hostname/session by default, where hostname is the host name of the traced (monitored) system and session is the tracing session name. Use the --output option to write trace data outside ~/lttng-traces.

On the sending side, a tracing session must be created using the lttng tool with the --set-url option to connect to the distant relay daemon:

lttng create my-session --set-url net://distant-host

The URL format is described in the output of lttng create --help. The above example will use the default ports; the --ctrl-url and --data-url options may be used to set the control and data URLs individually.
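As a hedged sketch, assuming the relay daemon was started with the custom ports shown earlier (check lttng create --help for the exact URL syntax):

lttng create my-session --ctrl-url tcp://distant-host:5400 --data-url tcp://distant-host:5401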

Once this basic setup is completed and the connection is established, you may use the lttng tool on the target machine as usual; everything you do will be transparently forwarded to the remote machine if needed. For example, a parameter changing the maximum size of trace files will have an effect on the distant relay daemon actually writing the trace.

Viewing events as they arrive

We have seen how trace files may be produced by LTTng out of generated application and Linux kernel events. We have seen that those trace files may be either recorded locally by consumer daemons or remotely using a relay daemon. And we have seen that the maximum size and count of trace files is configurable for each channel. With all those features, it's still not possible to read a trace file as it is being written because it could be incomplete and appear corrupted to the viewer. There is a way to view events as they arrive, however: using LTTng live.

LTTng live is implemented, in LTTng, solely on the relay daemon side. As trace data is sent over the network to a relay daemon by a (possibly remote) consumer daemon, a tee may be created: trace data will be recorded to trace files as well as being transmitted to a connected live viewer.

In order to use this feature, a tracing session must be created in live mode on the target system:

lttng create --live

An optional parameter may be passed to --live to set the interval of time (in microseconds) between flushes to the network (1 second is the default):

lttng create --live 100000

will flush every 100 ms.

If no network output is specified to the create command, a local relay daemon will be spawned. In this very common case, viewing a live trace is easy: enable events and start tracing as usual, then use lttng view to start the default live viewer:

lttng view

The correct arguments will be passed to the live viewer so that it may connect to the local relay daemon and start reading live events.

You may also wish to use a live viewer not running on the target system. In this case, you should specify a network output when using the create command (--set-url or --ctrl-url/--data-url options). A distant LTTng relay daemon should also be started to receive control and trace data. By default, lttng-relayd listens on 127.0.0.1:5344 for an LTTng live connection. Otherwise, the desired URL may be specified using its --live-port option.

The babeltrace viewer supports LTTng live as one of its input formats. babeltrace is the default viewer when using lttng view. To use it manually, first list the active tracing sessions by doing the following (assuming the relay daemon to connect to runs on the same host):

babeltrace --input-format lttng-live net://localhost

Then, choose a tracing session and start viewing events as they arrive using LTTng live, e.g.:

babeltrace --input-format lttng-live net://localhost/host/hostname/my-session

Taking a snapshot

The normal behavior of LTTng is to record trace data as trace files. This is ideal for keeping a long history of events that occurred on the target system and applications, but may be too much data in some situations. For example, you may wish to trace your application continuously until some critical situation happens, in which case you would only need the latest few recorded events to perform the desired analysis, not multi-gigabyte trace files.

LTTng has an interesting feature called snapshots. When creating a tracing session in snapshot mode, no trace files are written; the tracers' sub-buffers are constantly overwriting the oldest recorded events with the newest. At any time, either when the tracers are started or stopped, you may take a snapshot of those sub-buffers.

There is no difference between the format of a normal trace file and the format of a snapshot: viewers of LTTng traces will also support LTTng snapshots. By default, snapshots are written to disk, but they may also be sent over the network.

To create a tracing session in snapshot mode, do:

lttng create --snapshot my-snapshot-session

Next, enable channels, events and add context to channels as usual. Once a tracing session is created in snapshot mode, channels will be forced to use the overwrite mode (--overwrite option of the enable-channel command; also called flight recorder mode) and have an mmap() channel type (--output mmap).

Start tracing. When you're ready to take a snapshot, do:

lttng snapshot record --name my-snapshot

This will record a snapshot named my-snapshot of all channels of all domains of the current tracing session. By default, snapshot files are recorded in the path returned by lttng snapshot list-output. You may change this path or decide to send snapshots over the network using either:

an output path/URL specified when creating the tracing session (lttng create)
an added snapshot output path/URL using lttng snapshot add-output
an output path/URL provided directly to the lttng snapshot record command

Method 3 overrides method 2, which overrides method 1. When specifying a URL, a relay daemon must be listening on some machine (see Sending trace data over the network).
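For instance, a hedged sketch of adding a network snapshot output and then recording to it (collector-host is a hypothetical machine running lttng-relayd):

lttng snapshot add-output net://collector-host
lttng snapshot record --name my-snapshot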

If you need to make absolutely sure that the output file won't be larger than a certain limit, you can set a maximum snapshot size when taking it with the --max-size option:

lttng snapshot record --name my-snapshot --max-size 2M

Older recorded events will be discarded in order to respect this maximum size.

Machine interface

The lttng tool aims at providing command output that is as human-readable as possible. While this output is easy for a human being to parse, machines will have a hard time.

This is why the lttng tool provides the general --mi option, which must specify a machine interface output format. As of the latest LTTng stable release, only the xml format is supported. A schema definition (XSD) is made available to ease the integration with external tools as much as possible.

The --mi option can be used in conjunction with all lttng commands. Here are some examples:

lttng --mi xml create some-session
lttng --mi xml list some-session
lttng --mi xml list --kernel
lttng --mi xml enable-event --kernel --syscall open
lttng --mi xml start

Reference

This chapter presents various references for LTTng packages, such as links to online manpages, tables needed by the rest of the text, descriptions of library functions, etc.

Online LTTng manpages

LTTng packages currently install the following manpages, available online using the links below:

LTTng-tools

lttng

lttng-sessiond

lttng-relayd


LTTng-UST

lttng-gen-tp

lttng-ust

lttng-ust-cyg-profile

lttng-ust-dl


LTTng-UST

This section presents references of the LTTng-UST package.

LTTng-UST library (liblttng-ust)

The LTTng-UST library, or liblttng-ust, is the main shared object against which user applications are linked to make LTTng user space tracing possible.

The C application guide shows the complete process to instrument, build and run a C/C++ application using LTTng-UST, while this section contains a few important tables.

Tracepoint fields macros (for TP_FIELDS())

The available macros to define tracepoint fields, which should be listed within TP_FIELDS() in TRACEPOINT_EVENT(), are:

ctf_integer(t, n, e)
ctf_integer_nowrite(t, n, e)
    Standard integer, displayed in base 10
    t: integer C type (int, long, size_t, etc.); n: field name; e: argument expression

ctf_integer_hex(t, n, e)
    Standard integer, displayed in base 16
    t: integer C type; n: field name; e: argument expression

ctf_integer_network(t, n, e)
    Integer in network byte order (big endian), displayed in base 10
    t: integer C type; n: field name; e: argument expression

ctf_integer_network_hex(t, n, e)
    Integer in network byte order, displayed in base 16
    t: integer C type; n: field name; e: argument expression

ctf_float(t, n, e)
ctf_float_nowrite(t, n, e)
    Floating point number
    t: floating point number C type (float, double); n: field name; e: argument expression

ctf_string(n, e)
ctf_string_nowrite(n, e)
    Null-terminated string; undefined behavior if e is NULL
    n: field name; e: argument expression

ctf_array(t, n, e, s)
ctf_array_nowrite(t, n, e, s)
    Statically-sized array of integers
    t: array element C type; n: field name; e: argument expression; s: number of elements

ctf_array_text(t, n, e, s)
ctf_array_nowrite_text(t, n, e, s)
    Statically-sized array, printed as text; no need to be null-terminated
    t: array element C type (always char); n: field name; e: argument expression; s: number of elements

ctf_sequence(t, n, e, T, E)
ctf_sequence_nowrite(t, n, e, T, E)
    Dynamically-sized array of integers; the type of E needs to be unsigned
    t: sequence element C type; n: field name; e: argument expression; T: length expression C type; E: length expression

ctf_sequence_text(t, n, e, T, E)
ctf_sequence_text_nowrite(t, n, e, T, E)
    Dynamically-sized array, displayed as text; no need to be null-terminated; undefined behavior if e is NULL
    t: sequence element C type (always char); n: field name; e: argument expression; T: length expression C type; E: length expression

The _nowrite versions omit themselves from the session trace, but are otherwise identical. This means the _nowrite fields won't be written in the recorded trace. Their primary purpose is to make some of the event context available to the event filters without having to commit the data to sub-buffers.

Tracepoint log levels (for TRACEPOINT_LOGLEVEL())

The following table shows the available log level values for the TRACEPOINT_LOGLEVEL() macro:

Enum label             Value  Description
TRACE_EMERG            0      System is unusable
TRACE_ALERT            1      Action must be taken immediately
TRACE_CRIT             2      Critical conditions
TRACE_ERR              3      Error conditions
TRACE_WARNING          4      Warning conditions
TRACE_NOTICE           5      Normal, but significant, condition
TRACE_INFO             6      Informational message
TRACE_DEBUG_SYSTEM     7      Debug information with system-level scope (set of programs)
TRACE_DEBUG_PROGRAM    8      Debug information with program-level scope (set of processes)
TRACE_DEBUG_PROCESS    9      Debug information with process-level scope (set of modules)
TRACE_DEBUG_MODULE     10     Debug information with module (executable/library) scope (set of units)
TRACE_DEBUG_UNIT       11     Debug information with compilation unit scope (set of functions)
TRACE_DEBUG_FUNCTION   12     Debug information with function-level scope
TRACE_DEBUG_LINE       13     Debug information with line-level scope (TRACEPOINT_EVENT default)
TRACE_DEBUG            14     Debug-level message

Higher log level numbers imply more verbosity (expect higher tracing throughput). Log levels 0 through 6 and log level 14 match syslog level semantics. Log levels 7 through 13 offer more fine-grained selection of debug information.

LTTng-modules

This section presents references of the LTTng-modules package.

Tracepoint fields macros (for TP_STRUCT__entry())

This table describes possible entries for the TP_STRUCT__entry() part of LTTNG_TRACEPOINT_EVENT():

__field(t, n)
    Standard integer, displayed in base 10
    t: integer C type (int, unsigned char, size_t, etc.); n: field name

__field_hex(t, n)
    Standard integer, displayed in base 16
    t: integer C type; n: field name

__field_oct(t, n)
    Standard integer, displayed in base 8
    t: integer C type; n: field name

__field_network(t, n)
    Integer in network byte order (big endian), displayed in base 10
    t: integer C type; n: field name

__field_network_hex(t, n)
    Integer in network byte order (big endian), displayed in base 16
    t: integer C type; n: field name

__array(t, n, s)
    Statically-sized array, elements displayed in base 10
    t: array element C type; n: field name; s: number of elements

__array_hex(t, n, s)
    Statically-sized array, elements displayed in base 16
    t: array element C type; n: field name; s: number of elements

__array_text(t, n, s)
    Statically-sized array, displayed as text
    t: array element C type (always char); n: field name; s: number of elements

__dynamic_array(t, n, s)
    Dynamically-sized array, displayed in base 10
    t: array element C type; n: field name; s: length C expression

__dynamic_array_hex(t, n, s)
    Dynamically-sized array, displayed in base 16
    t: array element C type; n: field name; s: length C expression

__dynamic_array_text(t, n, s)
    Dynamically-sized array, displayed as text
    t: array element C type (always char); n: field name; s: length C expression

__string(n, s)
    Null-terminated string; undefined behavior if s is NULL
    n: field name; s: string source (pointer)

The above macros should cover the majority of cases. For advanced items, see probes/lttng-events.h.

Tracepoint assignment macros (for TP_fast_assign())

This table describes possible entries for the TP_fast_assign() part of LTTNG_TRACEPOINT_EVENT():

tp_assign(d, s)
    Assignment of C expression s to tracepoint field d
    d: name of destination tracepoint field; s: source C expression (may refer to tracepoint arguments)

tp_memcpy(d, s, l)
    Memory copy of l bytes from s to tracepoint field d (use with array fields)
    d: name of destination tracepoint field; s: source C expression (may refer to tracepoint arguments); l: number of bytes to copy

tp_memcpy_from_user(d, s, l)
    Memory copy of l bytes from user space s to tracepoint field d (use with array fields)
    d: name of destination tracepoint field; s: source C expression (may refer to tracepoint arguments); l: number of bytes to copy

tp_memcpy_dyn(d, s)
    Memory copy of a dynamically-sized array from s to tracepoint field d; the number of bytes is known from the field's length expression (use with dynamically-sized array fields)
    d: name of destination tracepoint field; s: source C expression (may refer to tracepoint arguments)

tp_strcpy(d, s)
    String copy of s to tracepoint field d (use with string fields)
    d: name of destination tracepoint field; s: source C expression (may refer to tracepoint arguments)
