Friday, November 27, 2020

Oracle Table Partitioning / Local scope vs Global scope

Partitioning is important for the performance of critical, data-heavy applications, and it improves the overall maintainability and manageability of both the database and the application. With partitioning, database administrators can split large database objects into smaller pieces using different partition types, and application queries can target a specific partition rather than the entire object. This approach helps build high-performance, terabyte-scale applications.



Figure 1 - Non partitioned table vs partitioned table

Global scope partitioning

The main disadvantage of this approach is that when an old or unnecessary partition is removed, the global index covering the table becomes unusable and has to be rebuilt, which typically requires a maintenance window (system downtime).


Local scope partitioning

The main advantage of this approach is that when an old or unnecessary partition is removed (for example with ALTER TABLE ... DROP PARTITION), only that partition and its local index segment are dropped; nothing else has to be rebuilt and no system downtime is required. The trade-off is that we have to create the partitions ourselves, either manually or automatically as shown below.


Ex : Local scope partitioning ( Manual approach )

Note : This table has a primary key called 'LOG_ID', and the partitions are created on the ARRIVAL_DATETIME column so that each month's data is stored in its own partition.


CREATE TABLE "TBL_MESSAGE" (
"LOG_ID" VARCHAR2(100 BYTE), 
"ARRIVAL_DATETIME" TIMESTAMP (6), 
"PAYMENT_DATE" TIMESTAMP (6), 
"TRANSACTION_REFERRENCE" VARCHAR2(100 BYTE), 
"PAYMENT_AMOUNT" VARCHAR2(100 BYTE), 
"TX_STATUS_DESCRIPTION" VARCHAR2(2000 BYTE), 
PRIMARY KEY ("LOG_ID")
  )
   PARTITION BY RANGE (ARRIVAL_DATETIME)(
PARTITION message_01122020 VALUES LESS THAN (TO_DATE('01/12/2020', 'DD/MM/YYYY')) TABLESPACE users,
PARTITION message_01012021 VALUES LESS THAN (TO_DATE('01/01/2021', 'DD/MM/YYYY')) TABLESPACE users,
PARTITION message_01022021 VALUES LESS THAN (TO_DATE('01/02/2021', 'DD/MM/YYYY')) TABLESPACE users,
PARTITION message_01032021 VALUES LESS THAN (TO_DATE('01/03/2021', 'DD/MM/YYYY')) TABLESPACE users,
PARTITION message_01042021 VALUES LESS THAN (TO_DATE('01/04/2021', 'DD/MM/YYYY')) TABLESPACE users,
PARTITION message_01052021 VALUES LESS THAN (TO_DATE('01/05/2021', 'DD/MM/YYYY')) TABLESPACE users,
PARTITION message_01062021 VALUES LESS THAN (TO_DATE('01/06/2021', 'DD/MM/YYYY')) TABLESPACE users);

   

Note: We also create a local (partitioned) index on a specific table column. The indexed column should be chosen based on the most frequent database queries; here it is the partition key, ARRIVAL_DATETIME.

CREATE INDEX index_tbl_message ON TBL_MESSAGE (ARRIVAL_DATETIME) LOCAL (

PARTITION message_01122020 TABLESPACE users,

PARTITION message_01012021 TABLESPACE users,

PARTITION message_01022021 TABLESPACE users,

PARTITION message_01032021 TABLESPACE users,

PARTITION message_01042021 TABLESPACE users,

PARTITION message_01052021 TABLESPACE users,

PARTITION message_01062021 TABLESPACE users);



Ex : Local scope partitioning ( Automatic approach )

Note : In this approach new partitions are created automatically for each month, based on the INTERVAL clause.

CREATE TABLE "TBL_MESSAGE_AUTO" (
"LOG_ID" VARCHAR2(100 BYTE), 
"ARRIVAL_DATETIME" TIMESTAMP (6), 
"PAYMENT_DATE" TIMESTAMP (6), 
"TRANSACTION_REFERRENCE" VARCHAR2(100 BYTE), 
"PAYMENT_AMOUNT" VARCHAR2(100 BYTE), 
"TX_STATUS_DESCRIPTION" VARCHAR2(2000 BYTE), 
PRIMARY KEY ("LOG_ID")
  )
PARTITION BY RANGE (ARRIVAL_DATETIME)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
   PARTITION message_01122020 VALUES LESS THAN (TO_DATE('01-12-2020', 'DD-MM-YYYY')),
   PARTITION message_01012021 VALUES LESS THAN (TO_DATE('01-01-2021', 'DD-MM-YYYY'))
);

CREATE INDEX index_tbl_message_auto ON TBL_MESSAGE_AUTO (ARRIVAL_DATETIME) LOCAL (
PARTITION message_01122020 TABLESPACE users,
PARTITION message_01012021 TABLESPACE users);

Sunday, November 15, 2020

SSH/SCP/SFTP/RSYNC with/without Jump servers





Direct SSH from the terminal : connects directly to the remote server's terminal.

ssh -l <username> <destination_ip>

Ex: ssh -l johnremote 172.33.78.198

It will ask for the password

Secure File transfer directly with SCP

scp <your-file> <destination-server-user>@<destination-server-ip>:<destination-path>

Ex : scp test.txt johnremote@172.33.78.198:/home

It will ask for the password


Secure File transfer directly with SFTP

in progress....

Secure File transfer directly with RSYNC

in progress....


SSH via a jump server from the terminal : connects to the remote server's terminal through the jump host.

ssh  -J  <jump-server-username>@<jump-server-ip>  -i  <ssh-key-file>  <destination-server-user>@<destination-server-ip>

Ex: ssh  -J  johnjump@172.45.68.167  -i  john-private-key  johnremote@172.33.78.199

SSH-Key-File generation

Your administrator will ask you to share your SSH public key file, so you have to generate a public/private key pair using the below command.

Note : The encryption algorithm and key size are usually specified by the administrator; remember the passphrase you enter during key generation.

ssh-keygen -t rsa -b 4096


Secure File transfer with jump server using SCP

scp -o 'ProxyJump <jump-server-username>@<jump-server-ip>' <file-to-be-trasfer> <destination-server-user>@<destination-server-ip>:<destination-server-path>

Ex: scp -o 'ProxyJump john@172.45.68.167' test.txt johnremote@172.33.78.199:/home


Secure File transfer with jump server using SFTP

in progress....

Secure File transfer with jump server using RSYNC

in progress....

Thursday, October 8, 2020

IBM AIX CPU and Memory monitoring

 Command to monitor CPU every second ( one sample per second, 3600 samples )

Command: sudo sar 1 3600

AIX mymachine 1 6 00006EFAD400    10/09/20

System configuration: lcpu=8  mode=Capped 

12:18:23    %usr    %sys    %wio   %idle   physc

12:18:24       2       1       0      97    3.99

12:18:25       1       0       0      99    4.00

12:18:26       0       0       0      99    4.01

12:18:27       0       0       0      99    4.03

12:18:28       0       0       0      99    3.99

12:18:29       0       0       0      99    4.01

12:18:30       1       0       0      99    4.00

12:18:31       0       0       0      99    4.01



 Command to monitor memory every second ( one sample per second, 3600 samples )

Command : vmstat 1 3600

System Configuration: lcpu=8 mem=15808MB

kthr    memory              page              faults        cpu    
----- ----------- ------------------------ ------------ -----------
 r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
 1  0 1885399 1576810   0   0   0   0    0   0 239 2466 1267  1  0 99  0
 3  0 1885399 1576806   0   0   0   0    0   0 263 1953 1245  1  0 99  0

Wednesday, April 22, 2020

Containerizing your apps 3 - Multiple service integration with docker compose

The previous post explained how to run a simple REST service in the Docker environment. Deploying a single service is not the real purpose of using Docker; together with Docker Compose it can run and scale groups of containers, giving auto-scaling and fault-tolerance properties that are hard to achieve in monolithic models. The diagram below shows Spring Boot based microservice components: a Spring Cloud Gateway, a Eureka service, a config service, and an app service. Docker exposes port 8090 for external connectivity, so clients send requests to that port.


Cloud components in the below diagram

1. Docker - container platform
2. Host - the machine on which Docker is installed
3. Eureka service - keeps information about the deployed services, including IPs, ports and scaling information
4. Config service - connects to the configuration repositories
5. App service - holds the custom logic that serves requests coming from the cloud gateway
6. Spring Cloud Gateway - receives requests from the client app on the host, routes them, and load-balances them across the app-service instances. This service is exposed to the host system on port 8090


                                       
                                         Figure 1 - Scaled microservice with API Gateway

In order to set up and deploy the above solution, we need a Docker Compose file.


What is Docker Compose?

1. It is a YAML file
2. It groups the Dockerfiles of each service
3. It declares service dependencies ( e.g. service A depends on the Eureka service and the config service )
4. It controls the startup order ( service A starts after the Eureka and config services )
5. It defines an internal network ( how the services are linked )
6. It defines scaling and the port ranges of each service


List of files needed to set up the above services

Spring Boot JAR files 

1. config-service.jar

Code

@SpringBootApplication
@EnableConfigServer
public class ConfigApplication {

public static void main(String[] args) {

SpringApplication.run(ConfigApplication.class, args);
}

}

Major Dependencies

spring-boot-starter-web
spring-boot-starter-actuator
spring-cloud-config-server


2. eureka-service.jar

Code

@SpringBootApplication
@EnableEurekaServer
public class EurekaApp {

public static void main(String[] args) {

SpringApplication.run(EurekaApp.class, args);
}

}

Major Dependencies

spring-boot-starter-web
spring-boot-starter-actuator
spring-cloud-netflix-eureka-server



3. app-service.jar

Code ( the main class plus a REST endpoint that returns the node's port, so we can verify load balancing )

@SpringBootApplication
@EnableDiscoveryClient
@EnableAutoConfiguration
public class AppService {

public static void main(String[] args) {

SpringApplication.run(AppService.class, args);
}

}

@RestController
public class RestService{

// the Environment is injected so the endpoint can report which port this instance runs on
@Autowired
private Environment env;

@GetMapping(value = "/testAPI", produces = "application/json")
public ResponseEntity testAPI(@RequestParam Map receiptParams,HttpServletRequest request) {
CommonResponse response = new CommonResponse();
response.setStatusCode("1");
response.setStatusDesc("Success. Request received. Node details : port="+env.getProperty("local.server.port"));
ResponseEntity responseEntity = new ResponseEntity<>(response, HttpStatus.OK);
return responseEntity;
}

}

@Data
public class CommonResponse {

private String statusCode;
private String statusDesc;

}


Major Dependencies

spring-boot-starter-web
spring-cloud-starter-netflix-eureka-client
spring-boot-starter-actuator
spring-cloud-starter-config


4. spring-gateway-client-service.jar


Code ( this defines the router and load balancer; the lb:// scheme load-balances across the app-service instances registered in Eureka, as referenced in the Docker Compose file )

@SpringBootApplication
@RestController
@EnableDiscoveryClient
@EnableAutoConfiguration
public class Application {

public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}

@Bean
public RouteLocator myRoutes(RouteLocatorBuilder builder) {
return builder.routes().route(p -> p.path("/testAPI/**").filters(f -> f.stripPrefix(1)).uri("lb://app-service/")).build();

}

}

Major Dependencies

spring-cloud-starter-gateway
spring-cloud-starter-netflix-eureka-client
spring-cloud-starter-config


Docker Files

1. Dockerfile.configservice

FROM openjdk:13-alpine
EXPOSE 8888
ADD config-service.jar ./config-service.jar
ENTRYPOINT ["java","-jar","-Duser.timezone=GMT+0530","config-service.jar"]

2. Dockerfile.eurekaservice

FROM openjdk:13-alpine
EXPOSE 8080
RUN apk --no-cache add netcat-openbsd
COPY check-entrypoint-eureka.sh ./check-entrypoint-eureka.sh
ADD eureka-service.jar ./eureka-service.jar
RUN chmod 755 check-entrypoint-eureka.sh

We include check-entrypoint-eureka.sh, which checks whether the config service is up and running; only then is the Eureka service started.

3. Dockerfile.appservice

FROM openjdk:13-alpine
EXPOSE 8080
RUN apk --no-cache add netcat-openbsd
COPY check-entrypoint-app-service.sh ./check-entrypoint-app-service.sh
ADD app-service.jar ./app-service.jar
RUN chmod 755 check-entrypoint-app-service.sh

We include check-entrypoint-app-service.sh, which checks whether the config service and the Eureka service are up and running; only then is the app service started.

4. Dockerfile.gatewayclient

FROM openjdk:13-alpine
EXPOSE 8090
RUN apk --no-cache add netcat-openbsd
COPY check-entrypoint-gatewayclient.sh ./check-entrypoint-gatewayclient.sh
ADD spring-gateway-client-service.jar ./spring-gateway-client-service.jar
RUN chmod 755 check-entrypoint-gatewayclient.sh

We include check-entrypoint-gatewayclient.sh, which checks whether the config service, the Eureka service and the app services are up and running; only then is the gateway client started.


Linux script files ( these check that the dependent services are up and running )

These scripts run inside the containers themselves. Because the containers share the same Compose network, each one can reach the other services by service name and port.


check-entrypoint-eureka.sh

#!/bin/sh
while ! nc -z config-service 8888 ; do
    echo "<---------------- Eureka service is waiting for Config service ---------------->"
    sleep 2
done
java -jar eureka-service.jar


check-entrypoint-app-service.sh

#!/bin/sh
while ! nc -z config-service 8888 ; do
    echo "<---------------- App service is waiting for Config service ---------------->"
    sleep 2
done
while ! nc -z eureka-service 8761 ; do
    echo "<---------------- App service is waiting for Eureka service ---------------->"
    sleep 2
done
java -jar app-service.jar


check-entrypoint-gatewayclient.sh

#!/bin/sh
while ! nc -z config-service 8888 ; do
    echo "<---------------- Gateway client is waiting for Config service ---------------->"
    sleep 2
done
while ! nc -z eureka-service 8761 ; do
    echo "<---------------- Gateway client is waiting for Eureka service ---------------->"
    sleep 2
done
while ! nc -z app-service 8080 ; do
    echo "<---------------- Gateway client is waiting for App service ---------------->"
    sleep 2
done

java -jar spring-gateway-client-service.jar



Docker Compose YML


version: '2'
services:
    config-service:
        container_name: config-service
        build:
            context: .
            dockerfile: Dockerfile.configservice
        image: config-service:latest
        expose:
            - 8888
        ports:
            - 8888:8888
        networks:
            - spring-cloud-network
        volumes:
            - spring-cloud-config-repo:/var/lib/spring-cloud/config-repo
        logging:
            driver: json-file
    eureka-service:
        container_name: eureka-service
        build:
            context: .
            dockerfile: Dockerfile.eurekaservice
        image: eureka-service:latest
        expose:
            - 8761
        ports:
            - 8761:8761
        networks:
            - spring-cloud-network
        links:
            - config-service:config-service
        depends_on:
            - config-service
        command: './check-entrypoint-eureka.sh'
        logging:
            driver: json-file
    app-service:
        build:
            context: .
            dockerfile: Dockerfile.appservice
        image: app-service:latest

        ports:
            - "8080"
        networks:
            - spring-cloud-network
        links:
            - config-service:config-service
            - eureka-service:eureka-service
        depends_on:
            - config-service
            - eureka-service
        command: './check-entrypoint-app-service.sh'
        logging:
            driver: json-file

    spring-gateway-client-service:
        build:
            context: .
            dockerfile: Dockerfile.gatewayclient
        image: spring-gateway-client-service:latest

        expose:
            - 8090
        ports:
            - 8090:8090
        networks:
            - spring-cloud-network
        links:
            - eureka-service:eureka-service
            - app-service:app-service
        depends_on:
            - eureka-service
            - app-service
        command: './check-entrypoint-gatewayclient.sh'
        logging:
            driver: json-file
networks:
    spring-cloud-network:
        driver: bridge
volumes:
    spring-cloud-config-repo:
        external: true



How to build the images with Docker Compose

command: sudo docker-compose build

output :
 
Building config-service
Step 1/4 : FROM openjdk:13-alpine
 ---> c4b0433a01ac
Step 2/4 : EXPOSE 8888
 ---> Using cache
 ---> 326abacc404e
Step 3/4 : ADD config-service.jar ./config-service.jar
 ---> Using cache
 ---> 3db14ff99098
Step 4/4 : ENTRYPOINT ["java","-jar","-Duser.timezone=GMT+0530","config-service.jar"]
 ---> Using cache
 ---> c566948166a0
Successfully built c566948166a0
Successfully tagged config-service:latest
Building eureka-service
Step 1/6 : FROM openjdk:13-alpine
 ---> c4b0433a01ac
Step 2/6 : EXPOSE 8080
 ---> Using cache
 ---> e638ee32b31e
Step 3/6 : RUN apk --no-cache add netcat-openbsd
 ---> Using cache
 ---> 879e12e2b629
Step 4/6 : COPY check-entrypoint-eureka.sh ./check-entrypoint-eureka.sh
 ---> Using cache
 ---> dfcd25b502ec
Step 5/6 : ADD eureka-service.jar ./eureka-service.jar
 ---> Using cache
 ---> 44c826f61f35
Step 6/6 : RUN chmod 755 check-entrypoint-eureka.sh
 ---> Using cache
 ---> 660062ba9463
Successfully built 660062ba9463
Successfully tagged eureka-service:latest
Building app-service
Step 1/6 : FROM openjdk:13-alpine
 ---> c4b0433a01ac
Step 2/6 : EXPOSE 8080
 ---> Using cache
 ---> e638ee32b31e
Step 3/6 : RUN apk --no-cache add netcat-openbsd
 ---> Using cache
 ---> 879e12e2b629
Step 4/6 : COPY check-entrypoint-app-service.sh ./check-entrypoint-app-service.sh
 ---> Using cache
 ---> 638c51bfdba2
Step 5/6 : ADD app-service.jar ./app-service.jar
 ---> Using cache
 ---> 1f9710fefe1e
Step 6/6 : RUN chmod 755 check-entrypoint-app-service.sh
 ---> Using cache
 ---> 0557299d4fcf
Successfully built 0557299d4fcf
Successfully tagged app-service:latest
Building spring-gateway-client-service
Step 1/6 : FROM openjdk:13-alpine
 ---> c4b0433a01ac
Step 2/6 : EXPOSE 8090
 ---> Using cache
 ---> eab248ba6443
Step 3/6 : RUN apk --no-cache add netcat-openbsd
 ---> Using cache
 ---> 72051b97ed40
Step 4/6 : COPY check-entrypoint-gatewayclient.sh ./check-entrypoint-gatewayclient.sh
 ---> Using cache
 ---> 00dfde72cadf
Step 5/6 : ADD spring-gateway-client-service.jar ./spring-gateway-client-service.jar
 ---> Using cache
 ---> 2fb78b235394
Step 6/6 : RUN chmod 755 check-entrypoint-gatewayclient.sh
 ---> Using cache
 ---> c7126edc4109
Successfully built c7126edc4109
Successfully tagged spring-gateway-client-service:latest


How to start the services, scaling app-service to four instances

command : sudo docker-compose up -d --scale app-service=4 --remove-orphans

output: ( the services start in dependency order; the four scaled app-service nodes can be seen starting as tmp_app-service_1 to _4 )

Starting config-service ...
Starting config-service ... done
Starting eureka-service ...
Starting eureka-service ... done
Starting tmp_app-service_1 ...
Starting tmp_app-service_2 ...
Starting tmp_app-service_3 ...
Starting tmp_app-service_4 ...
Starting tmp_app-service_1 ... done
Starting tmp_app-service_2 ... done
Starting tmp_app-service_3 ... done
Starting tmp_app-service_4 ... done
Starting tmp_spring-gateway-client-service_1 ...
Starting tmp_spring-gateway-client-service_1 ... done


Eureka service registration

( Screenshot: the Eureka dashboard listing the registered APP-SERVICE instances )

You can see that four app-service nodes are up and running. They all report port 8080, but Docker distinguishes them because each runs in its own container with its own container ID.


Sending multiple requests

command:

curl http://localhost:8090/testAPI/get/
curl http://localhost:8090/testAPI/get/
curl http://localhost:8090/testAPI/get/
curl http://localhost:8090/testAPI/get/
curl http://localhost:8090/testAPI/get/
curl http://localhost:8090/testAPI/get/

outputs : ( Load balancing works )

Success. Request received. Node details : port=8080
Success. Request received. Node details : port=8081
Success. Request received. Node details : port=8082
Success. Request received. Node details : port=8083
Success. Request received. Node details : port=8080
Success. Request received. Node details : port=8081

How to stop services 


command: sudo docker-compose down

output : ( shutdown starts with the service that nothing else depends on; config-service is a dependency of all the other modules, so it is stopped last )

Stopping tmp_spring-gateway-client-service_1 ... done
Stopping tmp_app-service_4                   ... done
Stopping tmp_app-service_3                   ... done
Stopping tmp_app-service_2                   ... done
Stopping tmp_app-service_1                   ... done
Stopping eureka-service                      ... done
Stopping config-service                      ... done
Removing tmp_spring-gateway-client-service_1 ... done
Removing tmp_app-service_4                   ... done
Removing tmp_app-service_3                   ... done
Removing tmp_app-service_2                   ... done
Removing tmp_app-service_1                   ... done
Removing eureka-service                      ... done
Removing config-service                      ... done
Removing network tmp_spring-cloud-network

Tuesday, April 7, 2020

Containerizing your apps 2 - Important docker commands with sample rest service


This post explains how to run a simple REST service in the Docker environment. Deploying a single service is not the real purpose of using Docker, which together with orchestration tooling provides auto-scaling and fault tolerance that monolithic models cannot easily offer, but the commands below are important for local development.


1. Install docker

Remove previous docker versions : - sudo apt-get remove docker docker-engine docker.io
Update your Linux :- sudo apt-get update
Install docker :- sudo apt install docker.io
Start docker service :- sudo systemctl start docker
Enable docker at startup :- sudo systemctl enable docker
Check docker version :- sudo docker version
Add user group permission :- sudo usermod -aG docker $USER
Reboot your Linux :- sudo reboot

2. Create a jar file that exposes a REST API ( ex: testing.jar )

Create your own Java application that exposes a simple REST service

3. Create a file named Dockerfile in the directory where the jar file is located

4. Add the below content to the Dockerfile

FROM openjdk:10
EXPOSE 8888
ADD testing.jar ./testing.jar
ENTRYPOINT ["java","-jar","-Duser.timezone=GMT+0530","testing.jar"]

5. Commands to build and run the image
  1. sudo docker build --tag=testimage .
  2. sudo docker run -p 8888:8888 testimage &
  3. open a browser and access your REST service at http://localhost:port/yourservice
6. List Docker images


sudo docker image ls

7. List Docker containers ( a container is a runnable, isolated instance of an image )


sudo docker container ls

8. Open a shell inside a Docker container ( a small Linux console )

sudo docker exec -it container_id  /bin/sh

9. Stop a Docker container ( your REST service will then stop )

sudo docker stop container_id

10. Remove a Docker container

sudo docker rm container_name

11. Remove a Docker image

sudo docker image rm -f image_id


Wednesday, April 1, 2020

Difference between each Java Version


Java Thread dump, Memory dump and Garbage Collector dump Analysis

When a web server, app server or application runs much slower than usual, we have to investigate thread dumps and memory dumps as well as garbage collector (GC) logs. Threads run concurrently and share computer resources, so a delay in a single thread gives no direct notice to system users until overall response times degrade.

Java Thread life-cycle and Thread Status

( Figure: Java thread life-cycle and thread states )


Steps to analyze Java thread dumps


1. Find the JVM used by your app server / web server or application


2. Find the Java process IDs with the below command

     ps -ef | grep java

    Ex : dev64     8393  1641  6 14:04 pts/0    00:00:42 /common/Software/jdk-11


3. Get the thread dump using the below command. It writes the dump to a file called "threaddump"

    /bin/jstack 8393 > threaddump

4. Upload the dump file to one of the available online analyzers. 

    Ex : https://fastthread.io/   ,    https://jstack.review/
   

5. It will display the content of the thread dump, similar to the samples below


"http-bio-8180-exec-11425" daemon prio=10 tid=0x00007f89e00a7800 nid=0x3536 runnable [0x00007f89b5294000]

   java.lang.Thread.State: RUNNABLE


"http-bio-8443-exec-39" daemon prio=10 tid=0x00007fd680035000 nid=0x78b6 waiting for monitor entry [0x00007fd6c2994000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:678)
at com.testapp.getUserFromTbl(UserDao.java:90)

"http-bio-8443-exec-10" daemon prio=10 tid=0x00007fd6a0021800 nid=0x3702 waiting on condition [0x00007fd6c2a99000]

   java.lang.Thread.State: TIMED_WAITING 


 Check the below items in the thread dump analysis

 
 5.1. Check blocked and deadlocked threads. You can find the actual method involved in the slowness of the application, similar to the sample below


"http-bio-8443-exec-39" daemon prio=10 tid=0x00007fd680035000 nid=0x78b6 waiting for monitor entry [0x00007fd6c2994000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:678)
at com.testapp.getUserFromTbl(UserDao.java:90)


In this thread dump, there is an issue related to line 90 of UserDao.java. Check that Java class for possible improvements.



5.2. Find the blocked and deadlocked thread names and search for them in your application log. Here it is http-bio-8443-exec-39


5.3. Add start and end logs to the method getUserFromTbl and print the total time taken, to get more insight into the issue ( see the timing sketch after this list )


5.4. Take another thread dump and check the Waiting and Timed Waiting thread counts. If these counts keep increasing over time, they contribute to system slowness.


5.5. Now you are able to find the places that cause the system slowness. 
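
For point 5.3, a minimal sketch of the kind of start/end timing log that can be added around the suspect method. The method name getUserFromTbl comes from the thread dump above; the body here (a simple sleep) is only a stand-in for the real JDBC call.

public class MethodTimingDemo {

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        System.out.println("getUserFromTbl started");
        Thread.sleep(250); // stand-in for the real database query
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("getUserFromTbl finished in " + elapsed + " ms");
    }
}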


6. Think about possible improvements, similar to the approaches below.


      6.1. Check the CPU and memory utilization of the Java process. If required, increase memory and CPU.


               ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head


      6.2 Integrate your source code with code-quality tools and apply the recommended improvements. Ex: SonarQube.


      6.3 Upgrade system libraries and the JDK version, and tune your application/web server thread pools


      6.4 Introduce timeouts for integrations ( Ex: DB, APIs ); see the sketch after this list


      6.5. Move from a monolithic model to a microservice model.
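
For point 6.4, a minimal sketch of one way to add a timeout to the database integration seen in the thread dump, assuming Spring's JdbcTemplate (spring-jdbc) is used; the data source is assumed to be provided elsewhere, for example by a connection pool bean.

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class TimeoutConfig {

    // dataSource is assumed to be supplied by the application's connection pool configuration
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        JdbcTemplate template = new JdbcTemplate(dataSource);
        template.setQueryTimeout(5); // cancel queries that run longer than 5 seconds
        return template;
    }
}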




Steps to analyze Java memory dumps



1. Find process id of your application

    ps -ef | grep java

    Ex : dev64     8393  1641  6 14:04 pts/0    00:00:42 /common/Software/jdk-11


2. Get the memory dump with the below command. It writes the dump to a file called dump.hprof in the current directory


     jmap -dump:live,format=b,file=dump.hprof 8393


3. Install a Java memory analyzer or use an online memory dump analyzer

 
 Ex : Eclipse memory analyzer , https://heaphero.io/

4. Upload the dump and check the below reports in the analysis


    1. Object count per class ( histogram )

        Ex: If there are very many String objects from repeated concatenation, you can often fix the issue by using StringBuilder

    2. Memory utilization per class; you can sort by memory usage


    3. Duplicated classes loaded by multiple class loaders.


    4. Memory leaks. They occur when the garbage collector cannot reclaim objects that are no longer needed because they are still referenced. You can avoid memory leaks with the below good practices


        4.1. Reduce static variables, including static collections, which live for the full lifetime of the application

        4.2. Close resources after use; you can use a try-with-resources block ( see the sketch after this list )
        4.3. Properly override the equals() and hashCode() methods to avoid duplicate objects
        4.4. Reduce non-static inner classes
        4.5. Implement AutoCloseable and avoid finalizers in resource-handling code
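
A minimal sketch of point 4.2: with try-with-resources the stream is closed automatically even if an exception is thrown, so the file handle cannot leak. The file name is an illustrative assumption.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TryWithResourcesDemo {

    public static void main(String[] args) throws IOException {
        // the reader is closed automatically when the try block exits, normally or with an exception
        try (BufferedReader reader = Files.newBufferedReader(Path.of("application.log"))) {
            System.out.println(reader.readLine());
        }
    }
}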
     


Steps to analyze Java GC ( Garbage Collector ) dumps


   

In Java there are two memory areas, the stack and the heap; both reside in RAM. The table below compares them.

Stack                                               | Heap
Used for static memory allocation                   | Used for dynamic memory allocation
Holds temporary variables created by methods        | Holds objects used by the entire application
Each thread maintains its own stack variables       | Elements can be accessed globally
Managed in LIFO order                               | Managed by the JVM garbage collector
Allocation, de-allocation and access are very fast  | Comparatively slow
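
A minimal sketch illustrating the difference (class and variable names are illustrative):

public class StackVsHeapDemo {

    public static void main(String[] args) {
        int count = 3;                          // primitive local variable, lives on the stack
        StringBuilder sb = new StringBuilder(); // the reference is on the stack, the object on the heap
        for (int i = 0; i < count; i++) {       // i is a stack variable, discarded after the loop
            sb.append("line-").append(i);       // appended characters grow the heap-allocated buffer
        }
        System.out.println(sb);                 // when main() returns its stack frame is popped;
                                                // the heap object becomes eligible for garbage collection
    }
}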




GC analysis is mainly concerned with heap memory. The heap has three basic areas: the young generation, the old generation and the permanent generation (metaspace in newer JVMs). New objects are created in the young generation; when it fills up, the JVM runs a first-level (minor) collection that removes unreferenced, unused objects. Objects that survive several collections are promoted to the old generation, on which the JVM runs second-level (major) collections. The permanent generation holds class metadata that the JVM needs for its own operation. 



 


Steps to obtain Java GC dump


1. Add the below JVM parameters to write GC logs to a file located at /home/gc.log


     -XX:+PrintGCDetails -verbose:gc -Xloggc:/home/gc.log

   
     Note : parameter names may differ depending on the JVM version ( for example, newer JVMs use -Xlog:gc )

2. Upload gc.log to a freely available tool or view it in a text viewer

      Ex : https://gceasy.io/

3. You can find the below information


    3.1. Memory allocated to the young generation, the old generation and JVM metadata

    3.2. Maximum usage of each memory type
    3.3. GC pause times ( application threads are stopped while the GC runs )
    3.4. Any memory leaks ( these appear when the GC cannot reclaim objects that are no longer needed but are still referenced )
 
4. GC health can be improved with the below configuration and coding best practices 

        4.1. Reduce static variables, including static collections, which live for the full lifetime of the application

        4.2. Close resources after use; you can use a try-with-resources block
        4.3. Properly override the equals() and hashCode() methods to avoid duplicate objects
        4.4. Reduce non-static inner classes
        4.5. Implement AutoCloseable and avoid finalizers in resource-handling code
        4.6. After the above improvements, allocate enough stack and heap space in the JVM options
                 -Xss sets the stack size per thread
                 -Xms and -Xmx set the initial and maximum heap sizes

                  Note : JVM parameter names may vary depending on your JVM version


        4.7. StackOverflowError and OutOfMemoryError occur when the JVM has insufficient stack and heap space respectively; they can also indicate that further code improvements are needed.