# Work Tracker

A library to monitor threads and requests. It provides advanced logging capabilities and protects the application's JVMs from too many requests and from long-running requests that would eventually turn into zombies.
## Features

- Logs contextual thread metadata to Elasticsearch for web requests.
- Checks that the number of threads does not exceed the maximum number of resources that the JVM permits (e.g. database connections, memory resources, etc.). `RequestBouncer` handles those checks and allows new threads to proceed only if they are within the connection limits.
- Kills threads that take too long to respond, a.k.a. Zombies.
- Adds exception names to the logs of faulty requests for better bug tracking; see `RootCauseTurboFilter`.
- Provides contextual thread metadata for background tasks; see `MdcExecutor`.
## Setup

- Java Web Projects: See Readme
- Spring Projects: See Readme
- Spring Boot Projects: See Readme
- Other Projects: See Readme
## Common Configuration and Usage

The following instructions apply to all of work-tracker's supported modules.
### Whitelisting Jobs

If you expect a job to take longer than 5 minutes (e.g. uploading or downloading a file), you may want to whitelist that job. Use `Work#setMaxTime(long)` to update the time limit for Zombie detection.
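As a sketch, raising the limit could look like the following. `WorkStub` is a stand-in defined here so the example is self-contained; in real code you would call `setMaxTime` on the library's `Work` instance for the current job:

```java
import java.util.concurrent.TimeUnit;

// Stand-in for work-tracker's Work class (hypothetical stub; the real class
// exposes setMaxTime(long) to control the Zombie-detection threshold).
class WorkStub {
    private long maxTimeMillis;
    void setMaxTime(long millis) { this.maxTimeMillis = millis; }
    long getMaxTime() { return maxTimeMillis; }
}

public class WhitelistSketch {
    public static void main(String[] args) {
        WorkStub work = new WorkStub();
        // Allow a file upload/download job up to 30 minutes before it is
        // flagged as a Zombie (instead of the default 5 minutes).
        work.setMaxTime(TimeUnit.MINUTES.toMillis(30));
        System.out.println(work.getMaxTime()); // 1800000
    }
}
```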
### Logback features specific to this library

#### RootCauseTurboFilter

This turbo filter adds the class names of the root cause and the cause to the MDC when an exception occurs. Those class names remain until the request ends, providing an easy way to trace a faulty request from the point of the exception to the end of that request.
```groovy
turboFilter(RootCauseTurboFilter)
```
Optional configuration for the field names:

```groovy
//...
turboFilter(RootCauseTurboFilter) {
    causeFieldName = "cause_field_name" // Optional, if you want some other value for causeFieldName
    rootCauseFieldName = "root_cause_field_name" // Optional, if you want some other value for rootCauseFieldName
}
//...
```
#### MdcThreadNameJsonProvider

This JSON provider is necessary if `ZombieDetector` is used. It logs the thread name of the actual zombie work instead of the `ZombieLogger` class, making it easier to find out which work was a zombie.

Configuration:
```groovy
encoder(LoggingEventCompositeJsonEncoder) {
    providers(LoggingEventJsonProviders) {
        //...
        mdc(MdcJsonProvider) {
            excludeMdcKeyName = 'thread_name' // Avoid the need to overwrite by threadName()
        }
        threadName(MdcThreadNameJsonProvider)
        //...
    }
}
```
See example
## Logging Utilities

### Using Logger to log information
```java
private Logger logger = LoggerFactory.getLogger(CurrentClass.class);
//...
logger.info("This is an info log"); // log some information
logger.warn("This is a warn log"); // log a warning
try {
    //...
} catch (Exception e) {
    logger.error("This is an error log", e); // log an error
}
```
### Using putInContext to add context to logs

`putInContext` is a wrapper for the MDC (Mapped Diagnostic Context). It puts the key-value pair in the MDC and in the current work payload, making sure that the `ZombieDetector` has all the work metadata to log to Elasticsearch in case you need to track down and debug Zombies in your application.

If you want some context variables to persist for a given request across multiple logs, use `putInContext` to add those context variables. For example:
```java
// Use the OutstandingWork to put the values to the MDC for the current work
outstandingWork.putInContext("some_id", "some value");
```
#### Retrieving the OutstandingWork in the Java Module (i.e. using HttpServlet)
```java
// Get the OutstandingWork from the Servlet Context
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    OutstandingWork<?> outstandingWork = (OutstandingWork<?>) request
        .getServletContext().getAttribute(OUTSTANDING_ATTR);
    //... add your values to the context using putInContext
}
```
To make life simpler, you can also have a utility class to put the context in the outstandingWork. See example, and its initialization
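One possible shape for such a utility class is sketched below. Everything here is hypothetical: `OutstandingWorkStub` stands in for the library's `OutstandingWork` (backed by a plain map so the sketch is self-contained), and `ContextHelper` simply centralizes the key name and skips null values:

```java
import java.util.HashMap;
import java.util.Map;

// Stub standing in for work-tracker's OutstandingWork (hypothetical; the
// real class provides putInContext(String, String) for the current work).
class OutstandingWorkStub {
    private final Map<String, String> context = new HashMap<>();
    void putInContext(String key, String value) { context.put(key, value); }
    Map<String, String> getContext() { return context; }
}

// Hypothetical utility class: centralizes context key names and guards
// against null values before delegating to putInContext.
public class ContextHelper {
    static void putUserId(OutstandingWorkStub work, String userId) {
        if (work != null && userId != null) {
            work.putInContext("user_id", userId);
        }
    }

    public static void main(String[] args) {
        OutstandingWorkStub work = new OutstandingWorkStub();
        ContextHelper.putUserId(work, "u-123");
        System.out.println(work.getContext()); // {user_id=u-123}
    }
}
```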
#### Retrieving the OutstandingWork in the Spring Module
```java
// Using field injection
@Autowired
private OutstandingWork<? extends SpringWork> outstandingWork;
//...

// OR using constructor injection (use either one, but not both)
private OutstandingWork<? extends SpringWork> outstandingWork;
//...
public SomeClass(@Autowired OutstandingWork<? extends SpringWork> outstandingWork) {
    this.outstandingWork = outstandingWork;
}
```
Now, each of your subsequent logs will have those values in the MDC, unless you clear the MDC (and this library automatically clears the MDC at the end of each request to avoid having stale context).
### Using Structured Arguments for individual log context

If some context variables are specific to a single log (e.g. the sender id for a certain message, elapsed time, etc.), use `StructuredArguments.keyValue()` to add those context variables. They will not persist across logs but will be available for that one log only. For example:
```java
logger.info("{} sent a message", keyValue("sender_id", "some sender value"));
```
`keyValue` will turn the message into `"sender_id=some sender value sent a message"`, while also creating an index in Elasticsearch for `sender_id`, making it easier to search for a particular sender id in Kibana.

Note: Make sure that the key for MDC and Structured Arguments is in snake_case (and lower case), as suggested by Elasticsearch's naming conventions. These conventions make it easy to search for those keys in Kibana, as the keys will be optimized by Elasticsearch.
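Since key casing is easy to get wrong, a small hypothetical helper (not part of work-tracker) could normalize keys before they reach the MDC or `StructuredArguments`:

```java
// Hypothetical helper (not part of work-tracker): converts a camelCase key
// to the snake_case form suggested by Elasticsearch's naming conventions.
public class KeyNames {
    static String toSnakeCase(String key) {
        // Insert an underscore between a lowercase letter or digit and the
        // uppercase letter that follows it, then lowercase everything.
        return key.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(toSnakeCase("senderId"));      // sender_id
        System.out.println(toSnakeCase("elapsedTimeMs")); // elapsed_time_ms
    }
}
```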
## Contributing to this library

Please see the Contribution Guidelines.

### Running tests

```shell
mvn clean verify
```
### Running example projects

Don't know how to start? Have a look at these examples.
### Bump Version For Release

Run the following bash command and commit the change:

```shell
bash build/bump_version.sh MAJOR|MINOR|PATCH
```

Example:

```shell
bash build/bump_version.sh MINOR
```