Enable Spring Framework Class Logging in a Web Application

To enable logging for Spring framework classes, we just need to configure web.xml and add a log4j.xml file under WEB-INF. This can help us debug the Spring classes.

web.xml:

<context-param>
<param-name>log4jConfigLocation</param-name>
<param-value>/WEB-INF/log4j.xml</param-value>
</context-param>

<listener>
<listener-class>org.springframework.web.util.Log4jConfigListener</listener-class>
</listener>

WEB-INF/log4j.xml:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="false">
<appender name="STDOUT" class="org.apache.log4j.ConsoleAppender">
<param name="Threshold" value="debug" />
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern"
value="%d{HH:mm:ss} %p [%t]:%c{3}.%M()%L - %m%n" />
</layout>
</appender>

<appender name="springAppender" class="org.apache.log4j.RollingFileAppender">
<param name="file" value="C:/tomcatLogs/webApp/spring-details.log" />
<param name="append" value="true" />
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern"
value="%d{MM/dd/yyyy HH:mm:ss}  [%t]:%c{5}.%M()%L %m%n" />
</layout>
</appender>

<!-- <category name="org.springframework">
<priority value="debug" />
</category>

<category name="org.springframework.beans">
<priority value="debug" />
</category> -->

<category name="org.springframework.security">
<priority value="debug" />
</category>

<!-- <category
name="org.springframework.beans.CachedIntrospectionResults">
<priority value="debug" />
</category>

<category name="org.springframework.jdbc.core">
<priority value="debug" />
</category>

<category
name="org.springframework.transaction.support.TransactionSynchronizationManager">
<priority value="debug" />
</category> -->

<root>
<priority value="debug" />
<appender-ref ref="springAppender" />
<appender-ref ref="STDOUT"/>
</root>
</log4j:configuration>
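
With this in place, Spring's own classes log through the appenders above. Application classes can hook into the same configuration; a minimal sketch (the class name here is just for illustration):

import org.apache.log4j.Logger;

public class AccountService {
// the logger name (the class name) decides which category/root settings apply
private static final Logger log = Logger.getLogger(AccountService.class);

public void createAccount() {
log.debug("Creating an account...");
}
}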

Spring 4.0.3 Remote Invocation Using Spring HTTP Invoker with Security

There are numerous ways of invoking remote methods in Java, such as RMI, web services, and EJB. Following is an example of invoking remote methods using the Spring HTTP invoker. You need a basic understanding of how Spring works to follow this article.

Server Side: the server-side code can be deployed on any application server.

Configuration:

Let us consider an example where we want to invoke the methods of the class "ProvisioningServiceImpl" remotely.
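
The service interface itself is not listed here; assume something minimal like the following (the method name is inferred from the client call shown later):

// ProvisioningService.java
package com.ravisha.spring.remote.httpinvoker;

public interface ProvisioningService {
String provision(String accountId);
}

// ProvisioningServiceImpl.java
package com.ravisha.spring.remote.httpinvoker;

public class ProvisioningServiceImpl implements ProvisioningService {
public String provision(String accountId) {
// placeholder body; a real implementation would provision the account
return "provisioned: " + accountId;
}
}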

httpinvoker-servlet.xml:

<bean id="provisioningService"
class="com.ravisha.spring.remote.httpinvoker.ProvisioningServiceImpl" />

<bean name="/provisioningService"
class="org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter">
<property name="service" ref="provisioningService" />
<property name="serviceInterface" value="com.ravisha.spring.remote.httpinvoker.ProvisioningService"/>
</bean>

web.xml:

<servlet>
<servlet-name>httpinvoker</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
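
The servlet-mapping is not shown above; assuming the DispatcherServlet should receive the exported service URL used by the client below, a mapping along these lines is needed (the URL pattern here is an assumption):

<!-- assumed mapping: must cover the /provisioningService path called by the client -->
<servlet-mapping>
<servlet-name>httpinvoker</servlet-name>
<url-pattern>/*</url-pattern>
</servlet-mapping>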

Client Side: any standalone Java code can act as the client.

Configuration:

beans.xml:

<bean id="provisioningService" class="org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean">
<property name="serviceUrl" value="http://localhost:8081/SpringRemoteServer4.0.3/provisioningService"/>
<property name="serviceInterface" value="com.ravisha.spring.remote.httpinvoker.ProvisioningService"/>
</bean>

Code:

ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
ProvisioningService provisioningService = (ProvisioningService) context.getBean("provisioningService");
String status = provisioningService.provision("account1");

For adding authentication, you need to add the following.

web.xml: add the following filter and mapping

<filter>
<filter-name>springSecurityFilterChain</filter-name>
<filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
<filter-name>springSecurityFilterChain</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>

httpinvoker-servlet.xml: add the security configuration

<security:http authentication-manager-ref="authenticationManager">
<security:http-basic/>
<security:csrf disabled="true"/>
<security:intercept-url pattern="/provisioningService" access="hasRole('ROLE_USER')"/>
</security:http>

<security:authentication-manager alias="authenticationManager">
<security:authentication-provider>
<security:user-service id="uds">
<security:user name="test" password="test"
authorities="ROLE_USER" />
</security:user-service>
</security:authentication-provider>
</security:authentication-manager>

Client Side Bean: the same proxy definition, now with the security-aware request executor plugged in.

<bean id="provisioningService" class="org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean">
<property name="serviceUrl" value="http://localhost:8081/SpringRemoteServer4.0.3/provisioningService"/>
<property name="serviceInterface" value="com.ravisha.spring.remote.httpinvoker.ProvisioningService"/>
<property name="httpInvokerRequestExecutor">
<bean class="org.springframework.security.remoting.httpinvoker.AuthenticationSimpleHttpInvokerRequestExecutor" />
</property>
</bean>
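
AuthenticationSimpleHttpInvokerRequestExecutor picks up the credentials from the client's SecurityContextHolder and sends them as an HTTP Basic header, so the client code has to populate the security context before calling the proxy. A minimal sketch using the user defined above:

// uses org.springframework.security.core.context.SecurityContextHolder and
// org.springframework.security.authentication.UsernamePasswordAuthenticationToken
SecurityContextHolder.getContext().setAuthentication(
new UsernamePasswordAuthenticationToken("test", "test"));

ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
ProvisioningService provisioningService = (ProvisioningService) context.getBean("provisioningService");
String status = provisioningService.provision("account1");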

 

Apart from the above, you need the respective jars on the classpath: spring-web (which contains the HTTP invoker classes) and the Spring Security core, config, web, and remoting jars for the authentication pieces.

 

Apache Kafka Hello World

Apache Kafka is an open-source message broker from Apache, written in Scala. As per Apache, a single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients, and each broker can handle terabytes of messages without performance impact.

To quickly get started, this is how the Kafka messaging service works, and the following are the major components (the commands given below ship with the Kafka binaries; kafka_2.11-0.9.0.0 was installed for the same).

1) ZooKeeper: a service required by Kafka for maintaining all the required configuration information and for providing distributed synchronization.
bin/zookeeper-server-start.sh config/zookeeper.properties  (to start the ZooKeeper service)

2) Kafka Server (the message broker)
bin/kafka-server-start.sh config/server.properties  (to start the Kafka broker)

If we want a multi-broker cluster, we can just make a copy of server.properties, edit the broker id (a unique name for each node in the cluster) and the port, and pass the new file to the other instance; a sketch is shown below. In a multi-node environment, one node acts as the leader and is responsible for all read and write operations for a given partition, and the rest of the nodes act as followers holding replicas of the leader node. In terms of fault tolerance, if the leader node goes down, one of the followers becomes the leader and is ready for the next set of write and read operations.
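
For example, a second broker's copy might differ from config/server.properties only in entries like these (values are illustrative):

# config/server-1.properties (a copy of config/server.properties)
broker.id=1
port=9093
log.dirs=/tmp/kafka-logs-1

bin/kafka-server-start.sh config/server-1.properties  (to start the second broker)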

3) Topic:
Kafka maintains messages in categories called topics, and each topic maintains data in terms of partitions.
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test  (creates a topic named test; a topic can be created using the Kafka API as well, as sketched below)
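
For the programmatic route, the 0.9 client ships kafka.admin.AdminUtils; a minimal sketch (the ZooKeeper address and timeouts are illustrative, and ZooKeeper security is assumed to be off):

import java.util.Properties;

import kafka.admin.AdminUtils;
import kafka.utils.ZkUtils;

public class CreateTopic {
public static void main(String[] args) {
// connect to ZooKeeper; session and connection timeouts are in ms
ZkUtils zkUtils = ZkUtils.apply("localhost:2181", 30000, 30000, false);
// topic "test" with 1 partition, replication factor 1, no extra config
AdminUtils.createTopic(zkUtils, "test", 1, 1, new Properties());
zkUtils.close();
}
}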

4) Consumer:
Each message published to a topic is delivered to one consumer instance within each subscribing consumer group.
Consumer instances can be in separate processes or on separate machines.

Apart from the above, Kafka also provides connectors for reading from and writing to external systems.

Sample Producer and consumer:

Producer:

package com.ravisha.kafka.poc.example;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class TestProducer {
public static void main(String[] args) {
long events = 5; // Long.parseLong(args[0]);
Properties props = new Properties();
// broker list: host:port of the Kafka broker(s)
props.put("metadata.broker.list", "slc08fha:9092");
props.put("serializer.class", "kafka.serializer.StringEncoder");
// props.put("partitioner.class", "example.producer.SimplePartitioner");
// wait for the leader to acknowledge each write
props.put("request.required.acks", "1");

ProducerConfig config = new ProducerConfig(props);

Producer<String, String> producer = new Producer<String, String>(config);

for (long nEvents = 0; nEvents < events; nEvents++) {
String ip = "slc08fha"; // used as the message key
String msg = "this is from java code";
// publish to the topic "test"
KeyedMessage<String, String> data = new KeyedMessage<String, String>("test", ip, msg);
producer.send(data);
}
producer.close();
}
}
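
To verify that the producer is actually publishing, the console consumer shipped with the binaries can be pointed at the topic:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning  (prints every message in the topic test)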

 

Consumer:

package com.ravisha.kafka.poc.example;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class TestConsumer {

private final ConsumerConnector consumer;
private final String topicName;

public TestConsumer(String zooKeeper, String groupID, String topicName) {
consumer = kafka.consumer.Consumer.createJavaConsumerConnector(createConsumerConfig(zooKeeper, groupID));
this.topicName = topicName;
}

private static ConsumerConfig createConsumerConfig(String a_zookeeper, String a_groupId) {
Properties props = new Properties();
props.put("zookeeper.connect", a_zookeeper);
props.put("group.id", a_groupId);
props.put("zookeeper.session.timeout.ms", "400");
props.put("zookeeper.sync.time.ms", "200");
props.put("auto.commit.interval.ms", "1000");

return new ConsumerConfig(props);
}

public void consume() {
// ask for one stream (thread) for the topic
Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
topicCountMap.put(topicName, new Integer(1));
Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topicName);
for (final KafkaStream<byte[], byte[]> stream : streams) {
ConsumerIterator<byte[], byte[]> it = stream.iterator();
// blocks waiting for new messages
while (it.hasNext())
System.out.println(new String(it.next().message()));
}
}

public static void main(String[] args) {
String zooKeeper = "slc08fha:2181";
String groupId = "group3";
String topic = "test";
TestConsumer example = new TestConsumer(zooKeeper, groupId, topic);
example.consume();
}
}

Thread Pool with the Executors Framework

  • Single Thread Executor: a thread pool with only one thread, so all submitted tasks are executed sequentially. Method: Executors.newSingleThreadExecutor()
  • Cached Thread Pool: a thread pool that creates as many threads as it needs to execute tasks in parallel. Old, available threads are reused for new tasks. If a thread is not used for 60 seconds, it is terminated and removed from the pool. Method: Executors.newCachedThreadPool()
  • Fixed Thread Pool: a thread pool with a fixed number of threads. If a thread is not available for a task, the task is put in a queue, waiting for another task to end. Method: Executors.newFixedThreadPool()
  • Scheduled Thread Pool: a thread pool made to schedule future tasks. Method: Executors.newScheduledThreadPool()
  • Single Thread Scheduled Pool: a thread pool with only one thread to schedule future tasks. Method: Executors.newSingleThreadScheduledExecutor()

Program to understand the above:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
* Following class is called a Job/Worker/Task
*
* @author ravisha
*
*/
class WorkerThread implements Runnable {
public void run() {
System.out.println("This is a simple thread executor..");
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}

public class TestThreads {
public static void main(String[] args) {
/**
* Testing the behavior with Single tasks, Ideally Executors are not
* required for this kind of requirement.
*/
       testAllExecutors();
/**
* Testing the behavior with multiple tasks, Executors/Thread Pools are
* best suited for this kind of requirement.
*/
       testNewCachedThreadPool();
       testNewfixedThreadPool();

}

private static void testAllExecutors() {

// Creating only one Task/Worker/Job
Runnable runnable = new WorkerThread();
// Behavior is the same when you have a single task to do,
// irrespective of the executor used
ExecutorService[] executorServiceArray = {
Executors.newSingleThreadExecutor(),
Executors.newSingleThreadScheduledExecutor(),
Executors.newCachedThreadPool(),
Executors.newFixedThreadPool(20),
Executors.newScheduledThreadPool(10000) };

for (ExecutorService executorService : executorServiceArray) {
executorService.execute(runnable);
executorService.shutdown();
}

}

private static void testNewfixedThreadPool() {
// Creating multiple Tasks/Workers/Jobs.
// There are 10 tasks, but we use a fixed thread pool with 5 threads,
// so at any point of time only five tasks will be executing.
Runnable[] workers = new WorkerThread[10];
ExecutorService executorService = Executors.newFixedThreadPool(5);

for (int i = 0; i < workers.length; i++) {
workers[i] = new WorkerThread();
executorService.execute(workers[i]);
}
executorService.shutdown();
}

private static void testNewCachedThreadPool() {

// Creating multiple Tasks/Workers/Jobs.
// There are 10 tasks, and we don't want any limit on the number of
// threads used to execute them, so go for a cached thread pool.
Runnable[] workers = new WorkerThread[10];
ExecutorService executorService = Executors.newCachedThreadPool();
for (int i = 0; i < workers.length; i++) {
workers[i] = new WorkerThread();
executorService.execute(workers[i]);
}
executorService.shutdown();
}

}
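
The program above only submits tasks for immediate execution, so the scheduled pools never actually schedule anything. A minimal sketch of that side of the API, reusing WorkerThread (the delays chosen are arbitrary):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TestScheduledThreads {
public static void main(String[] args) throws InterruptedException {
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
// run one worker once, 2 seconds from now
scheduler.schedule(new WorkerThread(), 2, TimeUnit.SECONDS);
// run another worker every 3 seconds, starting immediately
scheduler.scheduleAtFixedRate(new WorkerThread(), 0, 3, TimeUnit.SECONDS);
Thread.sleep(10000); // let a few runs happen before shutting down
scheduler.shutdown();
}
}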