Friday, January 9, 2015

Apache Kafka JAVA tutorial #3: Once and only once delivery

In Apache Kafka introduction, I provided an architectural overview of this internet-scale messaging broker. In JAVA tutorial 1, we learnt how to send and receive messages using the high level consumer API. In JAVA tutorial 2, we examined partition leaders and metadata using the lower level Simple consumer API.

A key requirement of many real world messaging applications is that a message should be delivered once and only once to a consumer. If you have used traditional JMS based message brokers, this is generally supported out of the box, with no additional work from the application programmer. But Kafka has a distributed architecture where the messages in a topic are partitioned for scalability and replicated for fault tolerance, and hence the application programmer has to do a little more to ensure once and only once delivery.

Some key features of the Simple Consumer API are:
  • To fetch a message, you need to know the partition and partition leader.
  • You can read messages in the partition several times.
  • You can read from the first message in the partition or from a known offset.
  • With each read, you are returned an offset where the next read can happen.
  • You can implement once and only once read, by storing the offsets with the message that was just read, thereby making the read transactional. In the event of a crash, you can recover because you know what message was last read and where the next one should be read.
  • Not covered in this tutorial, but the API lets you determine how many partitions there are for a topic and who the leader for each partition is. While fetching messages, you connect to the leader. Should a leader go down, you need to fail over by determining who the new leader is, connecting to it and continuing to consume messages.
For this tutorial you will need

(1) Apache Kafka 0.8.1
(2) Apache Zookeeper
(3) JDK 7 or higher. An IDE of your choice is optional
(4) Apache Maven
(5) Source code for this sample from https://github.com/mdkhanga/my-blog-code if you want to look at working code

In this tutorial, we will
(1) start a Kafka broker
(2) create a topic with 1 partition
(3) Send messages to the topic
(4) Write a consumer using the Simple API to fetch messages.
(5) Crash the consumer and restart it (several times). Each time, you will see that it reads the next message after the last one that was read.

Since we are focusing on reading messages from a particular offset in a partition, we will keep other things simple by limiting ourselves to 1 broker and 1 partition.

Step 1: Start the broker


bin/kafka-server-start.sh config/server1.properties

For the purposes of this tutorial, one broker is sufficient as we are reading from just one partition.

Step 2: Create the topic

bin/kafka-topics.sh --create --zookeeper localhost:2181 --partitions 1 --topic atopic

Again for the purposes of this tutorial we just need 1 partition.

Step 3: Send messages to the topic

Run the producer we wrote in tutorial 1 to send say 1000 messages to this topic.
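
If you do not have the tutorial 1 producer handy, a minimal sketch using the Kafka 0.8 producer API is shown below. The property values and message text here are my assumptions, not necessarily what is in the repo; the actual producer code is in the GitHub project.

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
props.put("metadata.broker.list", "localhost:9092");            // the broker started in step 1
props.put("serializer.class", "kafka.serializer.StringEncoder");
Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
for (int i = 1; i <= 1000; i++) {
    producer.send(new KeyedMessage<String, String>("atopic", "This is message " + i));
}
producer.close();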

Step 4: Write a consumer using SimpleConsumer API

The complete code is in the file KafkaOnceAndOnlyOnceRead.java.

Create a file to store the next read offset. 

static {
    try {
      // file used to persist the offset of the next message to read
      readoffset = new RandomAccessFile("readoffset", "rw");
    } catch (Exception e) {
      System.out.println(e);
    }
}

Create a SimpleConsumer.

SimpleConsumer consumer = new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, clientname);

If there is an offset stored in the file, we will read from that offset. Otherwise, we read from the beginning of the partition -- EarliestTime.

long offset_in_partition = 0;
try {
    offset_in_partition = readoffset.readLong();
} catch (EOFException ef) {
    // no saved offset yet: start from the earliest message in the partition
    offset_in_partition = getOffset(consumer, topic, partition, kafka.api.OffsetRequest.EarliestTime(), clientname);
}
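
The getOffset helper used above issues an OffsetRequest to the broker. A sketch of what it typically looks like, following the standard SimpleConsumer example pattern (the exact code is in KafkaOnceAndOnlyOnceRead.java in the repo):

import java.util.HashMap;
import java.util.Map;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;

public static long getOffset(SimpleConsumer consumer, String topic, int partition,
                             long whichTime, String clientName) {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
            new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    // ask for 1 offset at or before whichTime (EarliestTime or LatestTime)
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
    OffsetResponse response = consumer.getOffsetsBefore(request);
    if (response.hasError()) {
        System.out.println("Error fetching offset: " + response.errorCode(topic, partition));
        return 0;
    }
    return response.offsets(topic, partition)[0];
}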


The rest of the code is in a

while (true) {

}

loop. We will keep reading messages or sleep if there are none.

Within the loop, we create a request and fetch messages from the offset.

FetchRequest req = new FetchRequestBuilder()
          .clientId(clientname)
          .addFetch(topic, partition, offset_in_partition, 100000).build();
FetchResponse fetchResponse = consumer.fetch(req);


Read messages from the response.

for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(topic, partition)) {
        long currentOffset = messageAndOffset.offset();
        if (currentOffset < offset_in_partition) {
          continue;
        }
        offset_in_partition = messageAndOffset.nextOffset();
        ByteBuffer payload = messageAndOffset.message().payload();

        byte[] bytes = new byte[payload.limit()];
        payload.get(bytes);
        System.out.println(String.valueOf(messageAndOffset.offset()) + ": " + new String(bytes, "UTF-8"));
        readoffset.seek(0);
        readoffset.writeLong(offset_in_partition);
        numRead++;
        messages++ ;

        if (messages == 10) {
          System.out.println("Pretend a crash happened") ;
          System.exit(0);
        }
  }


For each message that we read, we check that its offset is not less than the offset we want to read from; if it is, we skip the message. Because Kafka fetches messages in batches, the response can include messages that were already read. For each valid message, we print it and write the next read offset to the file. If the consumer crashes, on restart it will start reading from the last saved offset.
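
When a fetch returns no messages, the loop should back off briefly before asking the broker again, rather than spinning. A minimal sketch of that part of the while loop (the sleep interval is my choice; numRead is the counter incremented above):

// at the bottom of the while(true) loop
if (numRead == 0) {
    try {
        Thread.sleep(1000);   // nothing new in the partition: wait a bit before the next fetch
    } catch (InterruptedException ie) {
        // ignore and fetch again
    }
}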

For demo purposes, the code exits after 10 messages. If you run this program several times, you will see that it starts reading exactly from where it last stopped. You can change that value and experiment.

Step 5: Run the consumer several times.

mvn exec:java -Dexec.mainClass="com.mj.KafkaOnceAndOnlyOnceRead"

210: 04092014 This is message 211
211: 04092014 This is message 212
212: 04092014 This is message 213
213: 04092014 This is message 214
214: 04092014 This is message 215
215: 04092014 This is message 216
216: 04092014 This is message 217
217: 04092014 This is message 218
218: 04092014 This is message 219
219: 04092014 This is message 220


Run it again:

mvn exec:java -Dexec.mainClass="com.mj.KafkaOnceAndOnlyOnceRead"

220: 04092014 This is message 221
221: 04092014 This is message 222
222: 04092014 This is message 223
223: 04092014 This is message 224
224: 04092014 This is message 225
225: 04092014 This is message 226
226: 04092014 This is message 227
227: 04092014 This is message 228
228: 04092014 This is message 229
229: 04092014 This is message 230


In summary, it is possible to implement once and only once delivery of messages in Kafka by storing the read offset.

Related Blogs:

Apache Kafka Introduction
Apache Kafka JAVA tutorial #1
Apache Kafka JAVA tutorial #2 
Apache Kafka 0.8.2 New Producer API  

Thursday, November 20, 2014

Apache Kafka Java tutorial #2

In the blog Kafka introduction, I provided an overview of the features of Apache Kafka, an internet scale messaging broker. In Kafka tutorial #1, I provided a simple Java programming example for sending and receiving messages using the high level consumer API. Kafka also provides a Simple consumer API that gives the programmer greater control over reading messages and partitions. Simple is a misnomer: this is a complicated API. SimpleConsumer connects directly to the leader of a partition and is able to fetch messages from an offset. Knowing the leader for a partition is a preliminary step for this. And if the leader goes down, you can recover and connect to the new leader.

In this tutorial, we will use the "Simple" API to find the lead broker for a topic partition.

To recap some Kafka concepts
  • A Kafka installation is a cluster of brokers
  • Messages are sent to and received from topics
  • Topics are partitioned across brokers
  • For each partition there is 1 leader broker and 1 or more replicas
  • Ordering of messages is maintained only within a partition
Read positions within a topic are managed at the partition level, and you need to know the leader for that partition.

For this tutorial you will need

(1) Apache Kafka 0.8.1
(2) Apache Zookeeper
(3) JDK 7 or higher. An IDE of your choice is optional
(4) Apache Maven
(5) Source code for this sample from https://github.com/mdkhanga/my-blog-code if you want to look at working code

In this tutorial, we will
(1) create a 3 node kafka cluster
(2) create a topic with 12  partitions
(3) Write code to determine the leader of the partition
(4) Run the code to determine the leaders of each partition.
(5) Kill one broker and run again to determine the new leaders

Note that the kafka-topics --describe command lets you do the same. But we are doing it programmatically for the sake of learning and because it is useful in some use cases.

Step 1 : Create a cluster

Follow the instructions in tutorial 1 to create a 3 node cluster.

Step 2 : Create a topic with 12 partitions

/usr/local/kafka/bin$ kafka-topics.sh --create --zookeeper host1:2181 --replication-factor 2 --partitions 12 --topic mjtopic

Step 3 : Write code to determine the leader for each partition

We use the SimpleConsumer API.
PartitionLeader.java

import java.util.Collections;
import java.util.List;

import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;


SimpleConsumer consumer = new SimpleConsumer("localhost", 9092,
        100000, 64 * 1024, "leaderLookup");
List<String> topics = Collections.singletonList("mjtopic");
TopicMetadataRequest req = new TopicMetadataRequest(topics);
TopicMetadataResponse resp = consumer.send(req);

List<TopicMetadata> metaData = resp.topicsMetadata();
int[] leaders = new int[12];

for (TopicMetadata item : metaData) {
    for (PartitionMetadata part : item.partitionsMetadata()) {
        // record the id of the leader broker for this partition
        leaders[part.partitionId()] = part.leader().id();
    }
}
for (int j = 0; j < 12; j++) {
    System.out.println("Leader for partition " + j + " " + leaders[j]);
}

SimpleConsumer can connect to any broker that is online. We construct a TopicMetadataRequest with the topic we are interested in and send it to the broker with the consumer.send call. A TopicMetadata is returned, which contains a set of PartitionMetadata (one for each partition). Each PartitionMetadata has the leader and replicas for that partition.
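
Once you know the leader, you would typically open a SimpleConsumer against that broker to fetch messages, as we do in tutorial #3. A minimal sketch, assuming part is the PartitionMetadata for the partition you care about:

// connect to the leader broker of this partition to fetch messages
kafka.cluster.Broker leader = part.leader();
SimpleConsumer fetchConsumer = new SimpleConsumer(leader.host(), leader.port(),
        100000, 64 * 1024, "fetchClient");
// ... build and send a FetchRequest against fetchConsumer, as shown in tutorial #3 ...
fetchConsumer.close();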

Step 4 : Run the code 

Leader for partition 0 1
Leader for partition 1 2
Leader for partition 2 3
Leader for partition 3 1
Leader for partition 4 2
Leader for partition 5 3
Leader for partition 6 1
Leader for partition 7 2
Leader for partition 8 3
Leader for partition 9 1
Leader for partition 10 2
Leader for partition 11 3


Step 5 : Kill node 3 and run the code again

Leader for partition 0 1
Leader for partition 1 2
Leader for partition 2 1
Leader for partition 3 1
Leader for partition 4 2
Leader for partition 5 1
Leader for partition 6 1
Leader for partition 7 2
Leader for partition 8 1
Leader for partition 9 1
Leader for partition 10 2
Leader for partition 11 1


You can see that broker 1 has assumed leadership of broker 3's partitions.

In summary, one of the things you can use the SimpleConsumer API for is examining topic partition metadata. We will use this code in future tutorials to determine the leader of a partition.

Related blogs:

Apache Kafka Introduction
Apache Kafka JAVA tutorial #1
Apache Kafka JAVA tutorial #3 
Apache Kafka 0.8.2 New Producer API 

Friday, October 10, 2014

ServletContainerInitializer : Discovering classes in your Web Application

In my blog on java.util.ServiceLoader, we discussed how it can be used to discover third party implementations of your interfaces. This can be useful if your application is a container that executes code written by developers. In this blog, we discuss dynamic discovery and registration for Servlets.

All Java Web developers are already familiar with the javax.servlet.ServletContextListener interface. If you want to do initialization when the application starts or clean up when it is destroyed, you implement the contextInitialized and contextDestroyed methods of this interface.

In the Servlet 3.0 specification, they added a couple of interesting features that help with dynamicity and are particularly useful to developers of libraries or containers.

(1) javax.servlet.ServletContainerInitializer is another interface that can notify your code of application start.

Library or container developers typically provide an implementation of this interface. The implementation should be annotated with the HandlesTypes annotation. When the application starts, the Servlet container calls the onStartup method of this interface, passing in as a parameter a set of all classes that implement, extend or are annotated with the type(s) declared in the HandlesTypes annotation.

(2) The specification also adds a number of methods to dynamically register Servlets, filters and listeners. You will recall that previously, if you needed to add a new Servlet to your application, you needed to modify web.xml.
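
For example, a ServletContextListener declared in web.xml can now register a servlet programmatically in contextInitialized. A minimal sketch (MyAppListener, ReportServlet and the "/reports" mapping are hypothetical names, not part of this example's code):

import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.ServletRegistration;

public class MyAppListener implements ServletContextListener {
    public void contextInitialized(ServletContextEvent sce) {
        ServletContext ctx = sce.getServletContext();
        // register a servlet and map it to a URL, with no servlet entry in web.xml
        ServletRegistration.Dynamic reg = ctx.addServlet("reports", ReportServlet.class);
        reg.addMapping("/reports");
    }
    public void contextDestroyed(ServletContextEvent sce) { }
}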

Combining (1) and (2), it should be possible to dynamically discover and add Servlets to a web application. This is a powerful feature that allows you to make the web application modular and spread development across teams without build dependencies. Note that this technique can be used to discover any interface, class or annotation. I am killing 2 birds with one stone by using this to discover servlets.

In the rest of the blog, we will build a simple web app, that illustrates the above concepts. For this tutorial you will need

(1) JDK 7.x or higher
(2) Apache Tomcat or any Servlet container
(3) Apache Maven

In this example we will

(1) Implement a ServletContainerInitializer called WebContainerInitializer and package it in a jar, containerlib.jar.
(2) To make the example interesting, create a new annotation @MyServlet, which acts like the @WebServlet annotation in the servlet specification. WebContainerInitializer will handle types that are annotated with @MyServlet.
(3) Write a simple web app that has a Servlet annotated with @MyServlet and has containerlib.jar in its lib directory. There are no entries in web.xml.
(4) When the app starts, the servlet is discovered and registered. You can go to a browser and invoke it.

Before we proceed any further, you may download the code from my github repository, so you can look at the code as I explain. The code for this example is in the dynamicservlets directory.

Step 0: Get the code

git clone https://github.com/mdkhanga/my-blog-code.git

dynamicservlets has 2 subdirectories: containerlib and dynamichello.

The containerlib project has the MyServlet annotation and the WebContainerInitializer which implements ServletContainerInitializer.

DynamicHello is a web application that uses containerlib jar.

Step 1: The MyServlet annotation
MyServlet.java
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface MyServlet {   
    String path() ;
}

The annotation applies to classes and is used as
@MyServlet(path = "/someuri")

Step 2: A Demo servlet
HelloWorldServlet.java
@MyServlet(path = "/greeting")
public class HelloWorldServlet extends HttpServlet {
     

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        PrintWriter p = response.getWriter() ;
        p.write(" hello world ");
        p.close();
    }
   
}


This is a simple hello servlet that we discover and register. Nothing needs to be added to web.xml.

Step 3: WebContainerInitializer
WebContainerInitializer.java
This is the implementation of ServletContainerInitializer.

@HandlesTypes({MyServlet.class})
public class WebContainerInitializer implements ServletContainerInitializer {

    public void onStartup(Set<Class<?>> classes, ServletContext ctx)
            throws ServletException {

        for (Class<?> c : classes) {
            // read the path from the @MyServlet annotation and register the servlet
            MyServlet ann = (MyServlet) c.getAnnotation(MyServlet.class);
            ServletRegistration.Dynamic d = ctx.addServlet("hello", c);
            d.addMapping(ann.path());
        }
    }
}

The implementation needs to be in a separate jar and included in the lib directory of the application war. WebContainerInitializer is annotated with @HandlesTypes, which takes MyServlet.class as a parameter. When the application starts, the servlet container finds all classes that are annotated with MyServlet and passes them to the onStartup method. In the onStartup method, we go through each class found by the container, get the value of the path attribute from the annotation and register the servlet.

To make this work, we need one more thing: in the META-INF/services directory of the jar, a file named javax.servlet.ServletContainerInitializer that contains the single line com.mj.WebContainerInitializer. If you are wondering why this is required, please see my blog on java.util.ServiceLoader.

Step 4: Build and run the app

To build,
cd containerlib
mvn clean install
cd dynamichello
mvn clean install

This builds dynamichello/target/dynamichello.war that can be deployed to tomcat or any servlet container.
When the application starts, you will see the following messages in the log

Initializing container app .....
Found ...com.mj.servlets.HelloWorldServlet
path = /greeting


Point your browser to http://localhost:8080/hello/greeting.

The servlet will respond with a hello message.

In summary, this technique can be used to dynamically discover classes during application startup. This is typically used to implement libraries or containers such as a JAX-RS implementation. It allows implementations to be provided by different developers, with no hard wiring.

Saturday, September 20, 2014

Discovering third party API/SPI implementations using java.util.ServiceLoader

One interface, many implementations is a very well known object oriented programming paradigm. If you write the implementations yourself, then you know what those implementations are and you can write a factory class or method that creates and returns the right implementation. You might also make this config driven and inject the correct implementation based on configuration.

What if third parties are providing implementations of your interface? If you know those implementations in advance, then you could do the same as in the case above. But one downside is that a code change is required to add or use new implementations or to remove them. You could come up with a configuration file where implementations are listed, and your code uses the list to determine what is available. The downside is that the configuration has to be updated by you, and this is a non-standard approach, in that every API developer could come up with his own format for the configuration. Fortunately, Java has a solution.

In JDK6, they introduced java.util.ServiceLoader, a class for discovering and loading classes.

It has a static load method that can be used to create a ServiceLoader that will find and load all of a particular Type.

public static <T> ServiceLoader<T> load(Class<T> service)

You would use it as
ServiceLoader<SortProvider> sl = ServiceLoader.load(SortProvider.class) ;
This creates a ServiceLoader that can find and load every SortProvider in the classpath.

The iterator method returns an Iterator over the implementations found; they are loaded lazily.
Iterator<SortProvider> its = sl.iterator() ;

You can iterate over what is found and store it in a Map or somewhere else in memory.
while (its.hasNext()) {
            SortProvider sp = its.next() ;
            log("Found provider " + sp.getProviderName()) ;
            sMap.put(sp.getProviderName(),sp) ;
}

How does ServiceLoader know where to look ?
  • Implementors package their implementation in a jar
  • jar should have a META-INF/services directory
  • services directory should have a file whose name is the fully qualified name of the Type
  • file has a list of fully qualified name of implementations of type
  • jar is installed to the classpath
I have a complete API/SPI example for a Sort interface below that you can download at https://github.com/mdkhanga/my-blog-code. This sample is in the msort directory. You should download the code first, so that you can look at the code while reading the text below. This example illustrates how ServiceLoader is used to discover implementations from third party service providers. The Sort interface can be used for sorting data. Service providers can provide implementations of various sort algorithms. In the example,

1. com.mj.msort.Sort is the main Sort API. It has 2 sort methods: one for arrays and one for collections. 2 implementations are provided - bubblesort and mergesort. But anybody can write additional implementations.
 
2. com.mj.msort.spi.SortProvider is the SPI. Third party implementors of Sort must also implement the SortProvider interface. The SPI provides another layer of encapsulation: we don't want to know the implementation details, we just want an instance of the implementation.

3. SPI providers need to implement Sort and SortProvider.

4. com.mj.msort.SortServices is a class that can discover and load SPI implementations and make them available to API users. It uses java.util.ServiceLoader to load SortProviders. Hence SortProvider implementations also need to be packaged as required by java.util.ServiceLoader for them to be discovered.

This is the class that brings everything together. It uses ServiceLoader to find all implementations of SortProviders and stores them in a Map. It has a getSort method that programmers can call to get a specific implementation or whatever is there.
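
A sketch of what the lookup half of SortServices might look like, assuming SortProvider exposes a getSort() factory method in addition to getProviderName() (the actual signatures are in the msort code):

public static Sort getSort(String providerName) {
    SortProvider sp = sMap.get(providerName);
    if (sp == null && !sMap.isEmpty()) {
        // no exact match: fall back to whatever provider was discovered
        sp = sMap.values().iterator().next();
    }
    return (sp != null) ? sp.getSort() : null;
}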

5.  Sample Usage

Sort s = SortServices.getSort(...
s.sort(...

In summary, ServiceLoader is a powerful mechanism to find and load classes of a type. It can be used to build highly extensible and dynamic services. As an additional exercise, you can create your own implementation of SortProvider in your own jar and SortServices will find it as long as it is on the classpath.
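
For that additional exercise, such a third-party provider might look roughly like this (QuickSortProvider and QuickSort are hypothetical names, and getSort() is the assumed factory method from the sketch above):

// packaged in its own jar, together with the file
// META-INF/services/com.mj.msort.spi.SortProvider containing the line:
//   com.example.QuickSortProvider
public class QuickSortProvider implements SortProvider {
    public String getProviderName() { return "quicksort"; }
    public Sort getSort() { return new QuickSort(); }   // QuickSort implements com.mj.msort.Sort
}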

Tuesday, August 26, 2014

Android programming tutorial

Android is an open source Linux based operating system for mobile devices like smart phones, tablets and other devices. The purpose of this blog is to introduce a developer to Android development. There are already many tutorials for Android. So why another? Mobile development is fun and easy. But despite lots of documentation from Google and several blogs, the initial startup for a new developer is not easy. There is substantial trial and error even for the experienced programmer before you get comfortable with the development process.

In the rest of the blog I will
  • Describe some android application concepts
  • Describe what SDKs and tools you need to download
  • Develop a very simple android application.
This blog will be most useful when used in conjunction with the official Android developers documentation. There are new terms like Activity or Layout that I describe only briefly. You should read more about them in the official documentation.

Concepts

  • Android applications are mostly developed in JAVA.
  • Android development is like any other event driven UI development: lay out UI elements on the screen and write code to handle events like the user tapping a button or a menu option.
  • An activity is a single screen of an application that a user interacts with. 
  • An application may have many activities. Each activity has a layout that describes how the user interface widgets are laid out on the screen.
  • Activities communicate by sending Intents to each other. For example, if by clicking a button, a particular screen needs to replace the current one, the current activity will send an intent to the one that needs to come to the foreground.
  • Android SDK supports all the UI elements like text boxes, buttons, lists, menus, action bar etc. that are necessary to build a UI.
  • The layouts determine how the UI elements are positioned on the screen with respect to each other. With LinearLayout, the UI elements are positioned one after the other. With RelativeLayout, the UI elements are positioned relative to one another.
  • Additionally, there are APIs
    • to store data to a file or to a local SQLite relational database.
    • to phone other devices.
    • to send text messages to other devices.
    • to send messages to other applications.
  • Using HTTP, REST or other general purpose client libraries, you can make requests to remote servers.
  • Most of the time, any JAVA library that you can use in any JAVA application is usable in Android (of course, sometimes there are issues such as supported JDK versions).
Required Tools
  • JAVA SDK  
  • Android Studio
    • This has the Android SDK and an IntelliJ based IDE.
    • You could also use the eclipse ADT or just the plain SDK with command line.
    • For this tutorial I have used Android studio 0.8.2.
  • Optional - A mobile device
    • Android SDK has emulators that you can run the app on. But they are slow.
    • Running on a real device gives more satisfaction. I used a Nexus 7. 
  • Optional - Download the source code for the tutorial below from https://sites.google.com/site/khangaonkar/home/android
In the rest of the blog we will work through a very simple tutorial to develop an android application.

Tutorial

Step 1: Download the android SDK

Download the android SDK from http://developer.android.com/sdk/installing/index.html. The SDK is available in 3 flavors: eclipse ADT, Android Studio (IntelliJ) and command line. For this tutorial, I used Android Studio because that seems to be the recommended direction from Google. But (except on MacOS) eclipse works fine as well.

Step 2 : Create a new project

Start Android Studio
Select File > New Project
Enter Application name and click next
Accept default for form factors and click next
Select the default blank activity and hit next
Select the defaults for the new activity and hit finish

You should see a project as shown below

Step 3: Create an emulator
An emulator lets you test your application on a variety of devices without actually having the device. Let us create a Nexus 7 emulator.

Click Tools > Android > AVD Manager
Click create and enter the information as shown below

Click Ok
Select the created device and hit Start
This will take a couple of minutes. The emulators are slow. Eventually you will see the emulator window.

In the main project, in the lower window, you should see that the emulator is detected.

Caution: Emulators are very slow and take a lot of time to start. The first time I install a new version of Android Studio or eclipse ADT, the emulators almost never work. It takes a little bit of trial and error to get them going.

Step 4 : Run the application

Click Run > Run App
When prompted, Select the emulator
The default app shows hello world on the screen.

Step 5: Review generated files

Under Greeting/app/src/main/java is the class com.mj.greeting.MyActivity. This is the main class that represents the logic around what is shown on the screen.
Line 17 is setContentView(R.layout.activity_my);
This line sets the layout that is displayed on the screen. The layout is defined in the xml file Greeting/apps/src/main/res/layout/activity_my.xml. The layout manager and any UI elements like edit boxes, buttons etc. and their properties are defined here. In this case, a RelativeLayout surrounds a TextView whose default value is Hello World.
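
For reference, the generated activity looks roughly like the sketch below; depending on the project template, the generated class may extend an ActionBar-aware base class instead of Activity, and it also contains menu handling code that is omitted here.

import android.app.Activity;
import android.os.Bundle;

public class MyActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // inflate res/layout/activity_my.xml and show it on the screen
        setContentView(R.layout.activity_my);
    }
}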

Step 6: Add some new code
Let us add an edittext box and a button to the UI. The user can type a message in the edittext box and then click the button. On clicking, the message replaces what is displayed in the textview.

In the file Greeting/apps/src/main/res/layout/activity_my.xml

add an android:id to the relativelayout
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/main"


add an android:id to the textview
        android:id="@+id/textview"
        android:text="@string/hello_world"


The ids will let us reference these widgets in code.

Add an edittext box
<EditText
        android:id="@+id/edittext"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/textview"
        android:ems="10"
        android:layout_marginTop="10dp"
        android:text="greeting" android:inputType="text" />


and a button
<Button
        android:id="@+id/button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@+id/edittext"
        android:layout_marginTop="10dp"
        android:text="Update Greeting"
        android:onClick="onClick"/>


The onClick attribute references the method that is called when the user clicks the button, so we need to add an onClick method implementation.

To the class com.mj.greeting.MyActivity add the method
public void onClick(View v) {
        View main = this.findViewById(R.id.main) ; // get a reference to the current view
        EditText edit = (EditText) main.findViewById(R.id.edittext) ; // get a reference to the edittext
        TextView tv= (TextView) main.findViewById(R.id.textview) ; // get the textview
        tv.setText(edit.getText()); // get the text entered in edittext and put it in the textview
    }


Run the application

Step 7: Run on a real device

So far we have been running the application on an emulator. It is much more fun to run on a real device. Enable USB debugging on your device. On the Nexus 7, USB debugging is enabled by selecting the option in Settings/Developer Options.

Connect it to your development machine with a USB cable and do Run > Run App.

The application will be installed and run on the device.

In summary, getting started with mobile development is simple and fun once you get comfortable with the concepts and tools.

Saturday, July 26, 2014

Distributed Systems : Consensus Protocols

Modern software systems scale by partitioning the data and distributing it across several machines. Systems are made highly available by replicating data across multiple machines. When multiple systems are involved in managing state, they need to agree when a particular piece of data needs to change.

You are familiar with the concept of a transaction in a relational database. A transaction is a unit of work (like an insert or update or some combination of multiple statements) that as a whole can be committed or aborted. What if the work involves updating multiple databases that are on different machines? To ensure consistent state across the system, all the databases should agree on what to do, whether to commit or abort the state change.

Modern distributed NoSql databases have a similar but slightly different problem. If you have a single server and set a value v=8 on the server, there is no doubt what the value of v is. Any client that connects to the server reads the value as 8. What if you had a cluster of 3 servers? Would a client connecting to one of the servers see the value as 8? Consensus is required to ensure all servers agree on what the value of v is.

Consider systems like Apache Zookeeper or Apache Cassandra. To ensure high availability, clients can connect to any node in the cluster and read or write data. To ensure consistency in the cluster, some consensus is required among the nodes when state changes.

In the rest of this blog we briefly cover some distributed protocols, starting with two phase commit, which users of relational databases are very familiar with. We will then talk about Paxos, ZAB and Raft. Paxos became popular because it was used by Google for its distributed systems. ZAB is used by Zookeeper, which is an important component of the Hadoop ecosystem. These protocols are hard to understand and no attempt is made to go into detail. The purpose is to introduce readers to some of the consensus concepts that are important in distributed systems.

1. Two phase commit

Used in databases to ensure all participants in distributed updates either commit or abort the changes.
One node called the co-ordinator originates the transaction.

1.1 Co-ordinator sends a prepare message to all participants.
1.2 Each participant replies with a yes if it can commit its part of the transaction or No otherwise.
1.3 If the co-ordinator receives a yes from all participants, it sends a commit message to the participants. Otherwise it sends an abort message.
1.4 If the participant receives a commit message, it commits its change. If it receives an abort message, it aborts the change. In both cases, it sends an acknowledgement back to the co-ordinator.
1.5 Transaction is complete when the coordinator receives all acknowledgments.

One limitation of this protocol is that if the co-ordinator crashes, the participants do not know whether to commit or abort the transaction, as they do not know how the other participants responded.
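
To make the message flow concrete, here is a toy sketch of the coordinator's decision logic. Participant is a hypothetical interface; acknowledgements, timeouts, logging and crash recovery are omitted, so this is purely illustrative.

import java.util.List;

// Hypothetical participant in a distributed transaction.
interface Participant {
    boolean prepare();   // vote yes (true) or no (false)
    void commit();
    void abort();
}

class Coordinator {
    boolean runTransaction(List<Participant> participants) {
        // Phase 1: ask every participant to prepare and collect the votes
        boolean allYes = true;
        for (Participant p : participants) {
            if (!p.prepare()) {
                allYes = false;
            }
        }
        // Phase 2: commit only if everyone voted yes, otherwise abort everywhere
        for (Participant p : participants) {
            if (allYes) {
                p.commit();
            } else {
                p.abort();
            }
        }
        return allYes;
    }
}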

2. Three phase commit

The protocol attempts to let the participants make progress even if the co-ordinator fails.

2.1 Co-ordinator sends a prepare message to all participants.
2.2  Each participant replies with a yes if it can commit its part of the transaction or No otherwise.
2.3 If the co-ordinator receives yes from all participants, it sends a pre-commit message to all participants.
2.4 When the co-ordinator receives an acknowledgment from a majority of participants, it sends a commit message to all participants.

If the co-ordinator fails, the participants can communicate with each other and determine whether to commit or abort.

3. PAXOS

Paxos was first published in the nineties, but it became more popular after Google implemented and used it in its distributed infrastructure. The protocol is notorious for being difficult to understand. Below is a very brief description. See references for more details.

There are nodes that propose values, called proposers, and nodes that accept values, called acceptors.

3.1 A proposer with a value to propose submits a proposal (v,n) with value v and sequence number n.

3.2 When an acceptor receives a proposal (v, n), it compares it with the highest version proposal it has accepted so far. If this proposal is a higher version than any accepted proposal, the acceptor replies agree and sends the value of any previously accepted proposal. If the acceptor has already accepted a higher version, it rejects the current proposal.

3.3 If the proposer receives agree from a majority of acceptors, it can pick one of the values sent by the acceptors. If the acceptors have not sent any value, it can pick its own value. It then sends a commit message with the chosen value to the acceptors. If a majority reject or do not respond, it aborts this proposal and tries another one.

3.4 When the acceptor receives a commit message, it agrees to commit if the sequence number is the highest it has agreed to, or if the value is the same as that of the last accepted proposal. Otherwise it rejects the commit.

3.5 If a majority accept the commit, the proposal is complete. Otherwise abort and try again.

The key takeaway is that majorities are used to accept proposals. If there are multiple proposers competing for a value, it is possible that no progress is made in accepting values. The solution is to elect a leader that proposes values. Other players in the system could be learners, who learn about accepted values from either the leader or other participants.

4. ZAB (Zookeeper Atomic Broadcast)

ZAB was developed for use in Apache Zookeeper due to limitations in PAXOS. In Zookeeper, the order in which changes are applied is important. In PAXOS, it is possible that updates get applied by acceptors out of order.

ZAB is similar to PAXOS in that a leader proposes values and values are accepted based on majority vote. The key difference is that strict order of updates is maintained. If the leader crashes and a new leader is elected, the updates are applied in the original order.

5. RAFT

RAFT is another distributed consensus protocol that claims to be simpler than PAXOS or ZAB.

A node can either be a leader, follower or candidate.

5.1 By default all nodes are followers. When there is no leader, a node can make itself a candidate for leadership and solicit votes.

5.2 The candidate that gets majority votes is elected leader.

5.3 A client submits its updates to the leader. Leader updates a log (uncommitted) and sends the update to followers.

5.4 When the leader hears from a majority of followers that they have made the update, the leader commits the change and informs the followers of the commit.

5.5 Followers commit the update.

5.6 If a leader terminates for some reason, one of the followers turns itself into a candidate and gets elected as the leader.

We have given a brief description of some consensus protocols. If you use Hadoop, Cassandra, Kafka or similar distributed systems, you will run into these protocols. For more details, some references are provided below.

References:

1. Database Management Systems by Ramakrishnan and Gehrke
2. PAXOS made simple
3. PAXOS by example
4. The secret lives of data
5. Apache Zookeeper
6. Paxos paper trail
7. Raft Consensus

Friday, June 27, 2014

Apache Cassandra : Things to consider before choosing Cassandra

A lot has been written about NoSql databases, and there is a lot of hype surrounding many of them. Unfortunately, most written material either sings the praises of a particular database or trashes it. I am also starting to see people pick databases for the wrong reasons. The purpose of this blog is to highlight the factors to consider while choosing Cassandra as your database.

1. Scaling by partitioning data

Cassandra is designed to store large quantities of data - several hundreds of terabytes or petabytes that typically cannot be stored on a single machine. Cassandra solves the problem by partitioning the data across machines in a cluster using a consistent hash. When data is partitioned across several machines, some of the things we are used to in relational databases, like consistency and transactions, are difficult to implement. Hence those features are weak or in some cases not available. So the ability to scale comes at the expense of other features.

The single biggest mistake people make is using Cassandra when their size of data is not large enough to merit partitioning. If, in the foreseeable future, your data size is a few hundred gigabytes, stick to mysql or another relational database of your choice. Even if your data size grows in the future, you can always port to Cassandra when you reach the stage of a few TB. This is especially true if you are building a new application with limited resources. Do not let the complexity of Cassandra slow down the rest of your feature development.

2. High availability

The CAP theorem states that out of consistency, availability and partition tolerance, it is possible for a system to have only 2 of the 3. Cassandra is designed for availability and partition tolerance.

If your application's primary requirement is high availability, Cassandra can be a great choice. With its shared nothing architecture, where all nodes are equal, multiple nodes can go down and the database is still available. Clients can connect to any node, and that node will get/put the data from/to the node that is required to handle that data. Replication ensures that if the primary node that handles the data goes down, a replica is able to service the request.

3. Replication

Replication has 2 purposes: one, it provides redundancy for data in case of failures; two, it makes copies of data available closer to where they are consumed or served. In many databases, setting up replication is cumbersome. Not in Cassandra. Replication is core to the architecture. Replication is configured at the keyspace level by specifying a replication strategy and the number of replicas, and the data is replicated within the cluster or across data centers as required.

If replication is important, especially across data centers, Cassandra is a great choice.

4. Optimized for writes

Write operations update an in-memory data structure called a Memtable and return immediately. Nothing is locked and nothing is written to disk. Writes are very fast. When Memtables reach a certain size, they are flushed to disk to a file called an SSTable. Reads may have to go through multiple SSTables and aggregate changes to return correct data. For this reason, reads might not be that fast.

If you have a workload that involves a lot of writes and few reads, then Cassandra is a suitable database. A common use case is storing data from the log files of high volume production web servers that service several billion requests a day. An analytics application would potentially read the data, but the read volume is low because the reads are done by in-house business analysts and not internet users.

5. Compaction

Over time several SSTables get created and reads have to go through multiple SSTables to get to data. Periodically Cassandra will asynchronously merge smaller SSTables into large SSTables. People have complained that during compaction, things slow down and throughput degrades. There are probably ways to tune this, but you should be aware of compaction when using Cassandra.

6. Limited querying capability

Cassandra supports a SQL like language called CQL. It is "SQL like" and not SQL. Many very basic things like aggregation operators are not supported. Joins of tables are not supported. Range queries on the partition key are not supported. Range queries are possible within a partition using clustered columns, but this requires some additional data modeling.

The bottom line is that Cassandra partitions the data based on a consistent hash of the partition key, and lookups are possible based only on that key. Anything else requires additional modeling that involves what are called clustered columns.

7. Consistency model

Cassandra was inspired by Amazon's Dynamo database, where the model was eventual consistency. When a client requested data and there was an inconsistency between the values on the nodes of a cluster, the server returned a vector clock to the client and it was the responsibility of the client to resolve the conflict.

Cassandra's model is tunable consistency. For a read or write operation, the client can specify a consistency level such as ANY, ALL, QUORUM, ONE, TWO etc. However, when there are concurrent writes, the order is determined based on machine timestamps, so it is important that clocks on the nodes in the cluster be synchronized. Getting the consistency model to work requires time and effort on the part of the developer. If the kind of strong consistency we are used to in relational databases is important to you, Cassandra will not be a suitable choice.
 
8. Frequent updates

Based on what is discussed in (4) and (7), Cassandra is not suitable for use cases where you update column values frequently. When concurrent updates happen, Cassandra uses timestamps to determine which update happened first, and you could sometimes encounter the lost update problem. To work around the problem, you have to append updates to a collection or wide columns and then aggregate the final value on reads. Again, this is additional work in data modeling and programming, and you might be better off using another database if frequent updates are an integral part of your use case.

In summary, Cassandra is an excellent choice as a database when your data size is very large and high availability or replication are important. But it is not a general purpose database. Some of the scalability comes at a cost and you give up other features like consistency or querying.

For additional information on Cassandra, please check DataStax documentation. You can also read these blogs:
HBase vs Cassandra
Cassandra Data Model
Cassandra Compaction