Why Logging & Monitoring?
Imagine driving a car with no dashboard - no speedometer, no fuel gauge, no warning lights. You'd have no idea how fast you're going or when you're about to run out of gas until it's too late.
Logging and monitoring are your application's dashboard. They tell you what's happening inside, help you find problems, and alert you before things break.
Logging
Records what happened - like a flight recorder. "User John logged in at 10:15", "Payment failed for order #123".
Monitoring
Shows what's happening NOW - like a dashboard. CPU usage, memory, response times, error rates.
Alerting
Notifies you when something's wrong - like warning lights. "Error rate above 5%", "Memory usage critical".
Debugging
Helps find the root cause - like a detective. Trace a request through your system to find where it failed.
Logging with SLF4J and Logback
SLF4J (Simple Logging Facade for Java) is the standard logging API. Logback is the most popular implementation. Spring Boot uses both by default.
Basic Logging
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class UserService {

    // Create a logger for this class
    private static final Logger logger = LoggerFactory.getLogger(UserService.class);

    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    public User createUser(String email, String name) {
        logger.info("Creating user with email: {}", email);
        try {
            User user = new User(email, name);
            userRepository.save(user);
            logger.info("User created successfully: id={}", user.getId());
            return user;
        } catch (Exception e) {
            logger.error("Failed to create user: email={}", email, e);
            throw e;
        }
    }

    public User findUser(Long id) {
        logger.debug("Looking up user with id: {}", id);
        User user = userRepository.findById(id).orElse(null);
        if (user == null) {
            logger.warn("User not found: id={}", id);
        }
        return user;
    }
}
Log Levels (From Most to Least Verbose)
logger.trace("Very detailed info for tracing"); // Rarely used
logger.debug("Debugging information"); // Development only
logger.info("Normal operation info"); // Default for production
logger.warn("Something unexpected happened"); // Potential problems
logger.error("Something failed", exception); // Errors that need attention
When to Use Each Level
- TRACE - Method entry/exit, loop iterations (rarely needed)
- DEBUG - Variable values, internal state (development)
- INFO - User actions, business events (production)
- WARN - Recoverable problems, deprecated usage
- ERROR - Failed operations, exceptions
Logback Configuration
Create src/main/resources/logback-spring.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <!-- Console output -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- File output with daily rotation -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/application.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/application.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- JSON format for log aggregation (requires the logstash-logback-encoder dependency
         shown later; add an <appender-ref ref="JSON"/> to the root logger to activate it) -->
    <appender name="JSON" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/application.json</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/application.%d{yyyy-MM-dd}.json</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <!-- Set log levels for packages -->
    <logger name="com.myapp" level="DEBUG"/>
    <logger name="org.springframework" level="INFO"/>
    <logger name="org.hibernate.SQL" level="DEBUG"/>

    <!-- Root level -->
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
Structured Logging
Instead of plain text logs, use structured data that's easy to search and analyze.
// BAD: Hard to parse and search
logger.info("User john@email.com placed order #123 for $99.99");
// GOOD: Structured with MDC (Mapped Diagnostic Context)
import org.slf4j.MDC;

public class OrderService {

    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(User user, Order order) {
        // Add context that appears in every log message
        MDC.put("userId", user.getId().toString());
        MDC.put("orderId", order.getId().toString());
        MDC.put("requestId", UUID.randomUUID().toString());
        try {
            logger.info("Order placed: amount={}, items={}",
                    order.getTotal(), order.getItems().size());
            processPayment(order);
            sendConfirmation(order);
            logger.info("Order completed successfully");
        } finally {
            MDC.clear();  // Always clean up!
        }
    }
}
// Update the logback pattern to include the MDC values, e.g.:
// %d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} [%X{requestId}] [%X{userId}] [%X{orderId}] - %msg%n
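If you'd rather not manage the cleanup yourself, SLF4J also provides MDC.putCloseable, which removes the entry automatically via try-with-resources. A minimal sketch (the requestId key is just an example):

// putCloseable() removes the entry when the block exits,
// so the context can't leak to other requests handled by the same thread
try (MDC.MDCCloseable ignored = MDC.putCloseable("requestId", UUID.randomUUID().toString())) {
    logger.info("Order placed");  // this log line carries the requestId
    // ... business logic ...
}  // requestId is removed from the MDC here, even if an exception is thrown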
Request Tracing with Filters
@Component
public class RequestLoggingFilter extends OncePerRequestFilter {

    private static final Logger logger = LoggerFactory.getLogger(RequestLoggingFilter.class);

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        String requestId = UUID.randomUUID().toString().substring(0, 8);
        long startTime = System.currentTimeMillis();

        MDC.put("requestId", requestId);
        MDC.put("method", request.getMethod());
        MDC.put("path", request.getRequestURI());

        try {
            logger.info("Request started");
            filterChain.doFilter(request, response);
        } finally {
            long duration = System.currentTimeMillis() - startTime;
            MDC.put("status", String.valueOf(response.getStatus()));
            MDC.put("duration", duration + "ms");
            logger.info("Request completed");
            MDC.clear();
        }
    }
}
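When requests flow through multiple services, it is common to reuse a correlation id sent by the caller instead of always generating a fresh one, and to echo it back in the response. A sketch of that variation for the filter above (the X-Request-Id header name and the fallback logic are assumptions, not part of the original filter):

// Inside doFilterInternal: honor an incoming X-Request-Id header if present
String incoming = request.getHeader("X-Request-Id");
String requestId = (incoming != null && !incoming.isBlank())
        ? incoming
        : UUID.randomUUID().toString().substring(0, 8);
response.setHeader("X-Request-Id", requestId);  // lets the caller correlate logs across services
MDC.put("requestId", requestId);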
Spring Boot Actuator
Actuator provides production-ready monitoring endpoints out of the box.
Setup
<!-- pom.xml -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
# application.properties
management.endpoints.web.exposure.include=health,info,metrics,prometheus,loggers
management.endpoint.health.show-details=when_authorized
management.endpoint.health.probes.enabled=true

# Custom info
info.app.name=My Application
info.app.version=@project.version@
info.app.description=My awesome application
Available Endpoints
GET /actuator/health # Application health status
GET /actuator/info # Application information
GET /actuator/metrics # All available metrics
GET /actuator/prometheus # Prometheus format metrics
GET /actuator/loggers # View/change log levels
POST /actuator/loggers/com.myapp # Change log level at runtime
# Example: Change log level without restart
curl -X POST http://localhost:8080/actuator/loggers/com.myapp \
-H "Content-Type: application/json" \
-d '{"configuredLevel": "DEBUG"}'
Custom Health Indicators
@Component
public class DatabaseHealthIndicator implements HealthIndicator {

    @Autowired
    private DataSource dataSource;

    @Override
    public Health health() {
        try (Connection conn = dataSource.getConnection()) {
            if (conn.isValid(1)) {
                return Health.up()
                        .withDetail("database", "PostgreSQL")
                        .withDetail("status", "Connected")
                        .build();
            }
        } catch (SQLException e) {
            return Health.down()
                    .withDetail("error", e.getMessage())
                    .build();
        }
        return Health.down().build();
    }
}
// Response:
// {
//   "status": "UP",
//   "components": {
//     "database": {
//       "status": "UP",
//       "details": {
//         "database": "PostgreSQL",
//         "status": "Connected"
//       }
//     }
//   }
// }
Metrics with Prometheus & Grafana
Prometheus collects and stores metrics; Grafana visualizes them in dashboards.
Setup Micrometer (Prometheus Integration)
<!-- pom.xml -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
Custom Metrics
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

@Service
public class OrderService {

    private final Counter ordersCounter;
    private final Counter failedOrdersCounter;
    private final Timer orderProcessingTimer;

    public OrderService(MeterRegistry registry) {
        // Count total orders
        this.ordersCounter = Counter.builder("orders.total")
                .description("Total number of orders")
                .register(registry);

        // Count failed orders
        this.failedOrdersCounter = Counter.builder("orders.failed")
                .description("Number of failed orders")
                .register(registry);

        // Time order processing
        this.orderProcessingTimer = Timer.builder("orders.processing.time")
                .description("Time to process orders")
                .register(registry);
    }

    public Order processOrder(OrderRequest request) {
        return orderProcessingTimer.record(() -> {
            try {
                Order order = createOrder(request);
                ordersCounter.increment();
                return order;
            } catch (Exception e) {
                failedOrdersCounter.increment();
                throw e;
            }
        });
    }
}
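Counters and timers cover most needs; for values that move up and down (queue depth, active sessions), Micrometer also offers gauges. A minimal sketch, assuming a hypothetical in-memory queue of pending orders:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;

@Service
public class OrderQueueMetrics {

    // Hypothetical queue of orders waiting to be processed
    private final Queue<Order> pendingOrders = new ConcurrentLinkedQueue<>();

    public OrderQueueMetrics(MeterRegistry registry) {
        // The gauge samples the current queue size each time metrics are scraped
        Gauge.builder("orders.pending", pendingOrders, Queue::size)
                .description("Orders waiting to be processed")
                .register(registry);
    }
}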
Prometheus Configuration
# prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'spring-boot-app'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['localhost:8080']

# Run Prometheus with Docker:
# docker run -p 9090:9090 -v ./prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
Key Metrics to Monitor
# JVM Metrics (automatic)
jvm_memory_used_bytes
jvm_gc_pause_seconds
jvm_threads_live_threads

# HTTP Metrics (automatic)
http_server_requests_seconds_count
http_server_requests_seconds_sum
http_server_requests_seconds_max

# Database Metrics
hikaricp_connections_active
hikaricp_connections_pending

# Custom Business Metrics
orders_total
orders_failed
orders_processing_time_seconds
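To sanity-check custom metrics (for example in an integration test), you can read them back from the MeterRegistry. A small sketch, assuming a registry injected as in the OrderService above and the orders.total counter it registers:

// Look up a registered meter by name and read its current value
double totalOrders = registry.get("orders.total").counter().count();
logger.info("Orders recorded so far: {}", totalOrders);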
Centralized Logging with ELK Stack
ELK (Elasticsearch, Logstash, Kibana) collects logs from all your services in one place.
Your Apps → Logstash (collects & processes) → Elasticsearch (stores & indexes) → Kibana (visualizes & searches)
Logback Configuration for ELK
<!-- pom.xml -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.4</version>
</dependency>

<!-- logback-spring.xml -->
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>logstash:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <customFields>{"app":"myapp","env":"production"}</customFields>
    </encoder>
</appender>

<root level="INFO">
    <appender-ref ref="CONSOLE"/>
    <appender-ref ref="LOGSTASH"/>
</root>
Logging Best Practices
1. Don't Log Sensitive Data
// BAD: Logging passwords, credit cards
logger.info("User login: email={}, password={}", email, password);
// GOOD: Never log sensitive information
logger.info("User login attempt: email={}", email);
2. Use Parameterized Logging
// BAD: String concatenation (always evaluates)
logger.debug("User data: " + user.toString());
// GOOD: Parameters (only evaluated if level is enabled)
logger.debug("User data: {}", user);
3. Log Actionable Information
// BAD: Not helpful
logger.error("Error occurred");
// GOOD: Actionable
logger.error("Failed to send email to {} after {} retries: {}",
email, retryCount, exception.getMessage(), exception);
4. Include Context
// Always include enough context to debug
logger.info("Order processed: orderId={}, userId={}, amount={}, items={}",
order.getId(), user.getId(), order.getTotal(), order.getItems().size());
5. Don't Over-Log
// BAD: Logging in tight loops
for (Item item : items) {
    logger.info("Processing item: {}", item);  // 10,000 log entries!
}
// GOOD: Log summary
logger.info("Processing {} items", items.size());
// ... process items ...
logger.info("Processed {} items successfully", successCount);