log-platform
Full EFK platform for logs and monitoring. EFK stands for Elasticsearch, Fluentd & Kibana.
More on logs unification
On tracing using OpenTracing & Jaeger.
Note : OpenTelemetry will eventually replace OpenTracing, but it is not yet ready and stable. Also wait for the implementation to be fully stable before migrating to the latest version of Jaeger.
Guidelines
Structured logs
Logs used to be long chains of words and events, requiring a human to read and interpret them.
With growing log volume, logs need structure so that machines can help crunch and organize the data, making it easy to identify & aggregate.
By definition, each event's data model depends on your business (what you try to achieve), but below is a set of technical fields that every log should carry to provide context & foster deeper analysis.
Execution context location
Allows tagging every log sent to EFK with the following information :
| Field name | Definition | Example | Default value |
|---|---|---|---|
| REGION | Physical location | US_East_A, CN_SHA, .. | SINGLE |
| ZONE | Logical location | ZONE_A, ZONE_25, .. | SINGLE |
| MACHINE_ID | Specific virtualized ID (like a Docker ID) | 239411039fee, 8b9a6863c720, .. | UNKNOWN |
| SERVICE_NAME | Business component name | UserGatewayService, .. | UNKNOWN |
| VERSION_TAG | Specific version or tag | service-a:0.0.1-SNAPSHOT | UNKNOWN |
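To illustrate, the context fields above could be resolved from -D system properties or environment variables, falling back to the defaults in the table. A minimal pure-Java sketch (the class and method names here are hypothetical, not part of the library) :

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ContextFields {
    /** Resolve a field from -D system property, then ENV variable, then default. */
    static String resolve(String name, String defaultValue) {
        String value = System.getProperty(name);
        if (value == null) value = System.getenv(name);
        return (value != null) ? value : defaultValue;
    }

    /** Build the execution-context map using the default values from the table above. */
    public static Map<String, String> executionContext() {
        Map<String, String> ctx = new LinkedHashMap<>();
        ctx.put("REGION", resolve("REGION", "SINGLE"));
        ctx.put("ZONE", resolve("ZONE", "SINGLE"));
        ctx.put("MACHINE_ID", resolve("MACHINE_ID", "UNKNOWN"));
        ctx.put("SERVICE_NAME", resolve("SERVICE_NAME", "UNKNOWN"));
        ctx.put("VERSION_TAG", resolve("VERSION_TAG", "UNKNOWN"));
        return ctx;
    }

    public static void main(String[] args) {
        System.out.println(executionContext());
    }
}
```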
Distributed tracing
| Field name | Definition | Example | Default value |
|---|---|---|---|
| TRACE_ID | Unique ID per request | 558907019132e7f8, .. | [NULL] |
- Trace ID : 558907019132e7f8, ..
- Specific Keys (logs) : TXN-123567, PERSIST-67890, ..
- Business Data (logs) : username, ..
StructuredLogger
Logs
Allows creating new dimensions in Elasticsearch. Initialize the logger like an SLF4J LOGGER :
final static StructuredLogger STRUCTURED_LOGGER = StructuredLogger.create("usage");
Then use it for logging String or Integer values :
STRUCTURED_LOGGER.info(entry("key1", "value1"), entry("key2", "value2"));
STRUCTURED_LOGGER.info(entry("key1", 123), entry("key2", 456));
Gives a JSON log :
{"key1":"value1","key2":"value2"}
{"key1":123,"key2":456}
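Conceptually, each entry(key, value) pair is a key/value tuple that gets serialized into one flat JSON object per log line. A self-contained sketch of that behavior (the entry/toJson helpers here are illustrative, not the library's actual implementation, and skip full JSON escaping) :

```java
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.Map;

public class StructuredEntrySketch {
    /** Mimics entry(key, value): a simple immutable key/value pair. */
    public static Map.Entry<String, Object> entry(String key, Object value) {
        return new SimpleImmutableEntry<>(key, value);
    }

    /** Serialize the pairs into a flat JSON object; numbers unquoted, everything else quoted. */
    @SafeVarargs
    public static String toJson(Map.Entry<String, Object>... entries) {
        StringBuilder sb = new StringBuilder("{");
        for (int i = 0; i < entries.length; i++) {
            if (i > 0) sb.append(',');
            sb.append('"').append(entries[i].getKey()).append("\":");
            Object value = entries[i].getValue();
            if (value instanceof Number) sb.append(value);
            else sb.append('"').append(value).append('"');
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        System.out.println(toJson(entry("key1", "value1"), entry("key2", "value2")));
        System.out.println(toJson(entry("key1", 123), entry("key2", 456)));
    }
}
```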
RpcLogger
A generic implementation for logging RPC calls :
- RESTful
- GraphQL
- etc..
GraphQL sample
Logging API errors :
rpcLogger.warn(client(),
method("query"),
uri("/HeroNameAndFriends"),
statusCode("123"),
errorMessage("The invitation has expired, please request a new one")
);
Gives a log with the following fields :
- kind : client | server
- method : query | mutation | subscription
- uri : String
- response_code
{
"kind":"client",
"method":"query",
"uri":"/HeroNameAndFriends",
"response_code":"123",
"error_message":"The invitation has expired, please request a new one"
}
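The fields above can be modeled as a small, constrained structure. A hypothetical sketch (not the library's API) that builds the flat map behind such an RPC log line :

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RpcLogEntry {
    /** "kind" is constrained to client | server, as in the field list above. */
    enum Kind { CLIENT, SERVER }

    /** Build the flat map behind one RPC log line; field names follow the list above. */
    public static Map<String, String> build(Kind kind, String method, String uri,
                                            String responseCode, String errorMessage) {
        Map<String, String> entry = new LinkedHashMap<>();
        entry.put("kind", kind.name().toLowerCase());
        entry.put("method", method);
        entry.put("uri", uri);
        entry.put("response_code", responseCode);
        if (errorMessage != null) entry.put("error_message", errorMessage);
        return entry;
    }

    public static void main(String[] args) {
        System.out.println(build(Kind.CLIENT, "query", "/HeroNameAndFriends",
                "123", "The invitation has expired, please request a new one"));
    }
}
```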
Full guide for Structured Logging
Adoption
Import using :
<dependency>
<groupId>com.github.frtu.logs</groupId>
<artifactId>logger-core</artifactId>
<version>${frtu-logger.version}</version>
</dependency>
Configure logback
- Choose any template from
logger-core/src/main/resources/templates/
- Copy into your
src/main/resources/
folder
fluentd in logback-spring.xml
When using logback-spring.xml, you can override any logback variable with Spring properties using :
<springProperty scope="context" name="SERVICE_NAME" source="application.name"/>
In your property file, just configure fluentd
- tag.label
- region
- zone
- etc.
fluentd.tag=tag
fluentd.label=label
logging.region=localhost
logging.zone=zone
logging.path=target/
file appender in logback-spring.xml
For Production, to avoid message loss, it is recommended to use a log file + fluentd tail (instead of streaming logs) to allow local buffering.
Define the log file location with system env variables :
- $LOG_PATH/$SERVICE_NAME.log
- or LOG_FILE_LOCATION
In your application properties or yaml :
logging.path=target/
Also configure RollingFileAppender using :
<property name="LOG_FILE_MAX_SIZE" value="${LOG_FILE_MAX_SIZE:-5MB}"/>
<property name="LOG_FILE_MAX_HISTORY" value="${LOG_FILE_MAX_HISTORY:-15}"/>
<property name="LOG_FILE_MAX_TOTAL_SIZE" value="${LOG_FILE_MAX_TOTAL_SIZE:-100MB}"/>
Log forwarder
Enablement
Import logback configuration from templates folder for :
- Standalone application : logback.xml
- Spring-Boot application (including profiles) : logback-spring.xml
For troubleshooting, add the import to flush fluentd config into log :
@ComponentScan(basePackages = {"com.github.frtu.logs.infra.fluentd", "..."})
Usage
Just log with logback, and activate the FLUENT appender on Staging or Production.
a) Core Tracer API
Enablement
If you only need the Jaeger io.opentracing.Tracer, just add :
@ComponentScan(basePackages = {"com.github.frtu.logs.tracing.core", "..."})
Usage
You can create a single Span structure :
Span span = tracer.buildSpan("say-hello1").start();
LOGGER.info("hello1");
span.finish();
OR a node from a graph using Scope :
try (Scope scope = tracer.buildSpan("say-hello2").startActive(true)) {
LOGGER.info("hello2");
}
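The Scope pattern above relies on try-with-resources to finish the span automatically when the block exits. A minimal pure-Java sketch of that idea, with a hypothetical Span class standing in for the OpenTracing one :

```java
public class SpanSketch {
    /** A minimal span: records its operation name and wall-clock duration; AutoCloseable so try-with-resources finishes it. */
    static class Span implements AutoCloseable {
        final String operation;
        final long startNanos = System.nanoTime();
        long durationNanos = -1;

        Span(String operation) { this.operation = operation; }

        /** Equivalent of span.finish(): freeze the duration. */
        @Override
        public void close() { durationNanos = System.nanoTime() - startNanos; }
    }

    public static void main(String[] args) {
        // The span is finished automatically at the end of the block.
        try (Span span = new Span("say-hello2")) {
            System.out.println("hello2 inside span " + span.operation);
        }
    }
}
```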
- See sample-microservices/service-a or ChangeList
- Or more at opentracing.io - span
b) @ExecutionSpan AOP
Enablement
If you want to use @ExecutionSpan to mark a method to create Span, add :
@ComponentScan(basePackages = {"com.github.frtu.logs.tracing", "..."})
And add Spring AOP :
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-aop</artifactId>
</dependency>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjweaver</artifactId>
</dependency>
OR spring-boot AOP :
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
Basic usage
Just annotate with @ExecutionSpan all the methods that should create a Span in the DAG :
@ExecutionSpan
public String method() { ... }
You can optionally add a Spring property to get a full classname trace :
trace.full.classname=true
See sample-microservices/service-b or ChangeList
Tag & Log enrichment
To add Tag use :
@ExecutionSpan({
@Tag(tagName = "key1", tagValue = "value1"),
@Tag(tagName = "key2", tagValue = "value2")
})
public void method() {}
To add Log use :
@ExecutionSpan
public String method(@ToLog("paramName") String param) { ... }
Manually add Span.log
Use Spring @Autowired to get an instance of com.github.frtu.logs.tracing.core.TraceHelper :
@Autowired
private TraceHelper traceHelper;
void method() {
traceHelper.addLog("log1", "value1");
}
Context passing
Dev local
When starting a standalone Main class, also add the following VM options :
-DREGION=FR -DZONE=A -DSERVICE_NAME=service-a -DMACHINE_ID=982d2ff1686a -DVERSION_TAG=service-a:0.0.1-SNAPSHOT
Also add Jaeger Configuration for :
- HTTP :
-DJAEGER_ENDPOINT=http://localhost:14268/api/traces
- Agent :
-DJAEGER_AGENT_HOST=localhost -DJAEGER_AGENT_PORT=6831
Inside container & docker-compose
Go to the folder /sample-microservices/ and run docker-compose up
Metrics
Adoption
Import the JAR
<dependency>
<groupId>com.github.frtu.logs</groupId>
<artifactId>logger-metrics</artifactId>
<version>${frtu-logger.version}</version>
</dependency>
Spring Annotation
Import Spring Configuration :
@Import({MetricsConfig.class, ...})
Spring Properties
# =================================
# Metrics related configurations
# =================================
# https://www.callicoder.com/spring-boot-actuator/
management.endpoints.web.exposure.include=*
management.endpoint.health.show-details=always
management.endpoint.metrics.enabled=true
management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true
Custom measurement
This library provides a class to abstract away direct use of Counter & Timer :
- com.github.frtu.metrics.micrometer.model.Measurement
final Iterable<Tag> tags = ...;
final Measurement measurement = new Measurement(registry, operationName);
measurement.setOperationDescription(operationDescription);
measurement.setTags(tags);
try (MeasurementHandle handle = new MeasurementHandle(measurement)) {
return joinPoint.proceed();
} catch (Throwable ex) {
throw MeasurementHandle.flagError(ex);
}
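The MeasurementHandle pattern can be sketched in pure Java: the handle times the block via try-with-resources and records success or error on close. Class names mirror the ones above, but this is an illustrative sketch under those assumptions, not the library's implementation :

```java
import java.util.concurrent.atomic.AtomicLong;

public class MeasurementSketch {
    /** Counts successes/errors and accumulates elapsed time for one operation name. */
    static class Measurement {
        final String operationName;
        final AtomicLong successCount = new AtomicLong();
        final AtomicLong errorCount = new AtomicLong();
        final AtomicLong totalNanos = new AtomicLong();

        Measurement(String operationName) { this.operationName = operationName; }
    }

    /** Try-with-resources handle: times the block, flags errors before rethrow. */
    static class MeasurementHandle implements AutoCloseable {
        final Measurement measurement;
        final long startNanos = System.nanoTime();
        boolean error;

        MeasurementHandle(Measurement measurement) { this.measurement = measurement; }

        /** Mark this execution as failed, then hand the throwable back for rethrow. */
        Throwable flagError(Throwable ex) {
            error = true;
            return ex;
        }

        @Override
        public void close() {
            measurement.totalNanos.addAndGet(System.nanoTime() - startNanos);
            (error ? measurement.errorCount : measurement.successCount).incrementAndGet();
        }
    }

    public static void main(String[] args) {
        Measurement measurement = new Measurement("load.user");
        try (MeasurementHandle handle = new MeasurementHandle(measurement)) {
            // business call measured here
        }
        System.out.println(measurement.operationName + " success=" + measurement.successCount.get());
    }
}
```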
Infrastructure
More details at GuidelineMonitoring.md
Tools & Tips
Runtime changing the level
Dynamically change a spring-boot application's log LEVEL :
- Using Actuator => org.springframework.boot:spring-boot-starter-actuator
management.endpoints.web.exposure.include=loggers
management.endpoint.loggers.enabled=true
You can also use the shell scripts at bash-fwk/lib-dev-spring
Operation tools
Check Tools module.
Infrastructure
Details for development & production environments
With Docker Compose (dev local)
- EFK Docker Compose : Using Elastic Search & Kibana OSS (Open source under Apache 2.0 license)
URLs
Monitoring
- Grafana : http://localhost:3000/
- Prometheus : http://localhost:9090/
- Prometheus Targets : http://localhost:9090/targets
Distributed Tracing :
- Jaeger : http://localhost:16686/
Logging :
- Kibana : http://localhost:5601/
Tools :
- Spring Admin : http://localhost:8888/
With K8S (production)
Using EFK
Log sources
Simple HTTP source (test)
- Send data using http://localhost:9880/myapp.access?json={"event":"data"}
From Docker instances
Log access from Httpd or Apache
Java log library
fluentd provides a dedicated Java logger, but for better integration through SLF4J it is recommended to use an adapter to logback.
See also
- Get familiar with the concepts with Observability 3 ways: Logging, Metrics & Tracing by Adrian Cole
- opentelemetry-beyond-getting-started