tag:blogger.com,1999:blog-50080173115105689442024-02-23T18:03:02.791-08:00The Khangaonkar ReportMJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.comBlogger80125tag:blogger.com,1999:blog-5008017311510568944.post-83005044416633956062022-02-28T16:04:00.000-08:002022-02-28T16:04:29.690-08:00Quick Review: Mysql NDB cluster<p> This is a quick 2 min overview of Mysql NDB Cluster. The goal is to help you decide within a minute or two whether this is an appropriate solution for you.</p><p>A cluster of in-memory Mysql databases with a shared-nothing architecture.</p><p>It consists of Mysql nodes and data nodes.</p><p>Mysql nodes are Mysql servers that get data from data nodes. Data nodes hold the data using the NDB storage engine. There are also admin nodes.</p><p>NDB nodes serve the data from memory. Data is persisted at checkpoints.</p><p>Data is partitioned and replicated.</p><p>Up to 48 data nodes and 2 replicas for each fragment of data.</p><p>ACID compliant.</p><p>READ_COMMITTED isolation level.</p><p>Sharding of data is done automatically. No involvement of the user or application is required.</p><p>Data is replicated for high availability. Node failures are handled automatically.</p><p>Clients can access data using the NDB API. Both SQL and NoSQL styles are possible.</p><p>This is not a good general purpose database. It is suitable for certain specific use cases in telecom and gaming, but not for general OLTP.</p><p>Feels like it has too many moving parts to manage.</p><p>High performance: it serves data from memory.</p><p><b>Summary</b></p><p>Not a general purpose distributed database. 
Unless you are in telecom or gaming, or know for sure why this meets your use case, don't even think about it.</p><p>If you are on Mysql and want high availability, try <a href="https://khangaonkar.blogspot.com/2022/02/quick-review-mysql-innodb-cluster.html">Mysql InnoDb Cluster</a>, which is much easier to understand and use.</p><p><b>References:</b></p><p><a href="https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster.html">Mysql Documentation for NDB cluster</a></p><p><br /></p>MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-73737321407075624372022-02-14T16:27:00.001-08:002022-02-14T16:31:59.198-08:00Quick Review: Mysql InnoDb Cluster<p> </p><p>This is a quick 2 min overview of Mysql InnoDb Cluster. The goal is to help you decide within a minute or two whether this is an appropriate solution for you.</p><p>A simple HA solution for Mysql.</p><p>Built on top of MySql group replication.</p><p>It has three components:</p><p>Replication: Uses Mysql group replication under the hood. The default is a single-primary configuration: writes go to the primary, which replicates to the secondaries. Secondaries can serve reads.</p><p>Mysql router: Provides routing between your application and the cluster. Supports automatic failover: if the primary dies, the router redirects writes to the secondary that takes over.</p><p>Mysql shell: This is an advanced shell that lets you script and configure the cluster.</p><p>Works best over a local area network. Performance degrades over wide area networks.</p><p>Easy to set up. Simple commands that are entered on the mysql shell. 
</p><pre style="font-family: menlo; font-size: 9pt; text-align: left;">var cluster = dba.createCluster('testCluster')
cluster.addInstance('server1@host1:3306')
cluster.addInstance('server2@host2:3306')
cluster.addInstance('server3@host3:3306')
</pre><p>The cluster elects the primary. If you want a particular server to be the primary, you can give it extra weight.</p><p>Clients do not connect directly to the servers. Rather, they connect to the Mysql router, which provides routing as well as failover.</p><p>MySql InnoDB ClusterSet provides additional resiliency by replicating data from a primary cluster to a cluster in another datacenter or location. If the primary cluster becomes unavailable, one of the secondary clusters can become the primary.</p><p><b>Summary</b></p><p>Provides scalability for reads and some HA for Mysql deployments. A simple, easy to use solution. No sharding. 
Some consistency issues can arise when you read from replicas that lag a little bit.</p><p><b>References:</b></p><p><a href="https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-innodb-cluster.html">Mysql Documentation for InnoDb Cluster</a></p><p><br /></p>MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-82855835920479002702020-11-01T06:26:00.000-08:002020-11-01T06:26:03.120-08:00Building Globally Distributed Applications<div dir="ltr" style="text-align: left;" trbidi="on"><p class="graf graf--p" name="5e71">A globally distributed application is one where the services and data for the application are partitioned and replicated across multiple regions over the globe. Popular distributed applications that everyone is familiar with are Facebook, Amazon.com, Gmail, Twitter, and Instagram. However, more and more enterprise applications are finding the need to become distributed because their user base is increasingly distributed around the globe. But not every company has the expertise of a Facebook or Amazon or Google. When going distributed, it is not enough to just spin up instances of your service on AWS or Google cloud in various regions. There are issues related to data that must be addressed for the application to work correctly. While consumer centric social media applications can tolerate some correctness issues or lags in data, the same might not be true for enterprise applications. This blog discusses the data and database issues related to a globally distributed application. Lastly, we discuss two research papers that have been around since the early part of the last decade, but whose relevance is increasing in recent times.</p></div><div dir="ltr" style="text-align: left;" trbidi="on">Building globally distributed applications that are scalable, highly available and consistent can be challenging. Sharding has to be managed by the application. Keeping it highly available requires non-database tools. 
When you have been on a single node database, whether Mysql or Postgresql, it is tempting to scale by manual sharding or by one of the clustering solutions available for those databases. It might appear easy at the beginning, but the cost of managing the system increases exponentially with scale. Additionally, sharding and replication lead to consistency issues and bugs that need to be addressed. <em class="markup--em markup--p-em">Scaling with single node databases like Mysql beyond a certain point has extremely high operational overhead.</em></div><div dir="ltr" style="text-align: left;" trbidi="on"><br /></div><div dir="ltr" style="text-align: left;" trbidi="on">NoSql databases such as Cassandra, Riak and MongoDB offer scalability and high availability, but at the expense of data consistency. That might be ok for some social media or consumer applications where the dollar value of an individual transaction is very small. But not in enterprise applications, where the correctness of each transaction is worth several thousand dollars. In enterprise applications, we need distributed data to behave the same way that we are used to with single node databases.</div><div dir="ltr" style="text-align: left;" trbidi="on"><p class="graf graf--p" name="9098">Let us look at some common correctness issues that crop up with distributed data.</p><h4 class="graf graf--h4" name="ceb1">Example 1: A distributed online store with servers in San Francisco, New York and Paris</h4><p class="graf graf--p" name="e2d1">Each server has two tables, products and inventory, with the following data.<br /><em class="markup--em markup--p-em">Products</em>: (product)<br /> widget1<br /> widget2<br /><em class="markup--em markup--p-em">Inventory:</em> (product, count)<br />widget1,6<br />widget2,1</p><p class="graf graf--p" name="a60c">Customer Jose connects to the server in San Francisco and buys widget2 at time t1. At time t2, Customer Pierre connects to the server in Paris and also buys widget2. 
Assume t2 > t1 but t2-t1 is small.</p><p class="graf graf--p" name="2794">Expected Behavior: Jose successfully completes the transaction and gets the product. Since the inventory of widget2 is now zero, Pierre’s transaction is aborted.<br />Actual Behavior (in an eventually consistent system): Both transactions complete, but only one of the customers gets the product. The other customer is later sent an apologetic email that widget2 is out of stock.</p></div><div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" trbidi="on"><h4 class="graf graf--h4" name="c3da">Example 2: A distributed document sharing system with servers in New York, London, Tokyo</h4><p class="graf graf--p" name="eb16">Operation1: In London, User X creates a new empty document marked private.<br />Operation2: User X makes update 1 to the document.<br />Operation3: User X deletes update 1.<br />Operation4: User X makes update 2.<br />Operation5: User X changes the document from private to public.<br />Due to network issues, only operations 1, 2 and 5 reach Tokyo; 3 and 4 do not.<br />In Tokyo, User Y tries to read the shared document.</p><p class="graf graf--p" name="4c40">Expected behavior: The document status is private and Y cannot read the document.<br />Actual behavior: Y is able to read the document, but an incorrect version. The document contains update 1, which was deleted, and is missing update 2, which should be there.</p><p class="graf graf--p" name="4371">The problems above are known as consistency issues. Different clients are seeing different views of the data. What is the correct view?</p><p class="graf graf--p" name="78db">Consistency here refers to the C in the CAP theorem, not the C in ACID. Here <strong class="markup--strong markup--p-strong">Consistency means every thread in a concurrent application correctly reads the most recent write at that point in time.</strong></p><p class="graf graf--p" name="cc37">How do you fix the above issues? 
In a single node database, Example 1 can be fixed by locking the row in the inventory table during the update, and Example 2 is not even an issue because all the data is in one node. But in a distributed application, data might be split across shards and the shards replicated for high availability. Users of the system might connect to any shard/server and read/write data. With NoSql databases, the application has to handle any inconsistencies.</p><p class="graf graf--p" name="faa5">In traditional RDBMSs, database developers are given a knob called the isolation level to control what concurrent threads can read. <a class="markup--anchor markup--p-anchor" data-href="http://khangaonkar.blogspot.com/2010/11/database-isolation.html" href="http://khangaonkar.blogspot.com/2010/11/database-isolation.html" rel="noopener" target="_blank">In this old blog I explain what isolation levels are</a>. The safest isolation level is SERIALIZABLE, where the database behaves as if the transactions were executing in a serial order with no overlap, even though in reality they are executing concurrently. Most developers use the default isolation level, which is generally READ_COMMITTED or REPEATABLE_READ. In reality, these isolation levels are poorly documented and implemented differently by different vendors. The result is that in highly concurrent applications, there are consistency bugs even in traditional single node RDBMSs. In a distributed database with data spread across shards and replicated for read scalability, the problem is compounded further. 
Most NoSql vendors punt the problem by claiming eventual consistency, meaning that if there are no writes for a while, eventually all reads on all nodes will read the last write.</p><p class="graf graf--p" name="2683">Consistency is often confused with isolation, which describes how the database behaves under concurrent execution of transactions.<strong class="markup--strong markup--p-strong"> At the safest isolation level, the database behaves as if the transactions were executing in serial order, even though in reality they are executing concurrently. At the safest consistency level, every thread in a concurrent application correctly reads the most recent write.</strong> But most database documentation is not clear on how to achieve this in an application.</p><p class="graf graf--p" name="df11">The problems in examples 1 and 2 would not occur if those applications/databases had the notion of a global transaction order with respect to real time. In example 1, Pierre’s transaction at t2 should see the inventory as 0 because a transaction at t1 &lt; t2 set it to zero. In example 2, Y should only be able to read up to operation 2. It should not be able to read operation 5 without operations 3 and 4, which occurred before 5.</p><p class="graf graf--p" name="af53">In database literature, the term for this requirement is “Strict Serializability” or sometimes “external consistency”. Since these technical definitions can be confusing, it is often referred to as <strong class="markup--strong markup--p-strong">strong consistency</strong>.</p><p class="graf graf--p" name="beb5">Two research papers that have been around for a while provide answers on how these problems might be fixed. 
The papers are the <a class="markup--anchor markup--p-anchor" data-href="https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf" href="https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf" rel="noopener" target="_blank">Spanner</a> paper and the <a class="markup--anchor markup--p-anchor" data-href="http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf" href="http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf" rel="noopener" target="_blank">Calvin</a> paper.</p><p class="graf graf--p" name="f2ba">Their approach to solving the problem can be summarized as follows:<br />1. Timestamp transactions with something that reflects their occurrence in real time.<br />2. Order transactions based on the timestamp.<br />3. Commit transactions in the above order.</p><p class="graf graf--p" name="221f">But the details of how they do it are significantly different. Let us look at each.</p><h4 class="graf graf--h4" name="30bc"><a class="markup--anchor markup--h4-anchor" data-href="https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf" href="https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf" rel="noopener" target="_blank">Spanner paper from Google</a></h4><p class="graf graf--p" name="a6a6">Spanner is a database built at Google and <a class="markup--anchor markup--p-anchor" data-href="https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf" href="https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf" rel="noopener" target="_blank">the paper</a> describes the motivation and design of Spanner. Spanner's approach involves:<br />1. The use of atomic clocks and GPS to synchronize clocks across hosts in different regions, and the true time API to give accurate time across nodes, regions or continents.<br />2. 
For a read/write transaction, Spanner calls the true time API to get a timestamp. To address overlaps between transactions that are close to each other, the timestamp is assigned after locks are acquired and before they are released. <br />3. The commit order equals timestamp order.<br />4. A read for a particular timestamp is sent to any shard/replica that has the data at that timestamp.<br />5. Reads without a timestamp (latest reads) are serviced by assigning a timestamp.<br />6. Writes that cross multiple shards use two phase commit.<br />And of course,<br />7. It can scale horizontally to 1000s of nodes by sharding.<br />8. Each shard is replicated.<br />And most importantly, <br />9. Even though it is a key value store, it provides SQL support to make it easy for application programmers.<br /><a class="markup--anchor markup--p-anchor" data-href="https://www.cockroachlabs.com/" href="https://www.cockroachlabs.com/" rel="noopener" target="_blank">CockroachDb</a> and <a class="markup--anchor markup--p-anchor" data-href="https://www.yugabyte.com/" href="https://www.yugabyte.com/" rel="noopener" target="_blank">Yugabyte</a> are two commercial databases based on Spanner.<br /><br /></p><p class="graf graf--p" name="1229"><strong class="markup--strong markup--p-strong"><a class="markup--anchor markup--p-anchor" data-href="http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf" href="http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf" rel="noopener" target="_blank">Calvin Paper</a></strong></p><p class="graf graf--p" name="1229"><b><br /></b>The <a class="markup--anchor markup--p-anchor" data-href="http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf" href="http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf" rel="noopener" target="_blank">Calvin paper</a> addresses the above problem using distributed consensus protocols like <a class="markup--anchor markup--p-anchor" data-href="https://raft.github.io/" 
href="https://raft.github.io/" rel="noopener" target="_blank">Raft</a> or Paxos. <br />1. Every transaction has to first go through distributed consensus and secure a spot in a linear replication log. <br />2. One can view the index in the log as the timestamp. <br />3. The committed entries in the replication log are then executed in the exact same serial order by every node in the distributed database. <br />4. Since the transaction log is replicated to every shard, it does not need or use two phase commit. In a transaction involving multiple shards, if a shard dies before committing a particular transaction, then on restart it just has to execute the uncommitted transaction from its replication log.<br />5. No dependency on wall clocks or a time API.<br />6. No two phase commit.<br />7. No mention of SQL support.</p><p class="graf graf--p" name="2840"> <a class="markup--anchor markup--p-anchor" data-href="https://fauna.com/" href="https://fauna.com/" rel="noopener" target="_blank">FaunaDb</a> is an example of a database based on Calvin.</p><p class="graf graf--p" name="0c62">This class of databases, which offer horizontal scalability on a global scale without sacrificing consistency, is also called NewSql. </p><p class="graf graf--p" name="ce36">In summary, if you are building a globally distributed application that needs strong consistency, doing it on your own with a SQL or NoSql database can be non-trivial. Consistency is hard enough in a single node database. But in a distributed database, consistency bugs are harder to troubleshoot and even harder to fix. You might want to consider one of the NewSql databases to make life easier. Review the Spanner and Calvin papers to understand the architectural choices that are available. This will help you pick a database that is right for you. The Spanner and Calvin papers have been around for almost a decade. But they have become more relevant now as real databases based on them become more popular. 
Most importantly, understand what consistency is and apply it; lack of it can cause severe correctness bugs in your application. </p><p class="graf graf--p" name="eecf"><strong class="markup--strong markup--p-strong">References:</strong></p><p class="graf graf--p" name="eecf"><a class="markup--anchor markup--li-anchor" data-href="https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf" href="https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf" rel="noopener" target="_blank">The Spanner paper</a></p><p class="graf graf--p" name="eecf"><a class="markup--anchor markup--li-anchor" data-href="http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf" href="http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf" rel="noopener" target="_blank">The Calvin paper</a></p><p class="graf graf--p" name="d0a8"><a class="markup--anchor markup--p-anchor" data-href="https://fauna.com/blog/introduction-to-transaction-isolation-levels" href="https://fauna.com/blog/introduction-to-transaction-isolation-levels" rel="noopener" target="_blank">Consistency and Isolation</a></p></div></div>
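To make the three-step recipe from the post (timestamp, order by timestamp, commit in that order) concrete, here is a toy single-process Java sketch. This is not Spanner's or Calvin's actual implementation; the class, method names and data are invented for illustration, and real systems must assign timestamps across machines and handle locking, replication and failure, none of which is shown here.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TimestampOrdering {
    // A transaction stamped with a (logical) real-time timestamp.
    record Txn(long timestamp, String description) {}

    // Commit transactions strictly in timestamp order, regardless of
    // the order in which they arrived at this node.
    static List<String> commitInOrder(List<Txn> arrived) {
        List<Txn> ordered = new ArrayList<>(arrived);
        ordered.sort(Comparator.comparingLong(Txn::timestamp));
        List<String> commitLog = new ArrayList<>();
        for (Txn t : ordered) {
            commitLog.add(t.description); // "commit" = append to the log
        }
        return commitLog;
    }

    public static void main(String[] args) {
        // Pierre's request arrives first, but Jose's timestamp is earlier
        // (t1 < t2), so Jose's transaction commits first and gets the
        // last widget2 -- the expected behavior from Example 1.
        List<Txn> arrived = List.of(
                new Txn(2L, "Pierre buys widget2"),
                new Txn(1L, "Jose buys widget2"));
        System.out.println(commitInOrder(arrived));
        // prints [Jose buys widget2, Pierre buys widget2]
    }
}
```

The interesting part is that the commit order is a pure function of the timestamps: every node that sees the same timestamped transactions produces the same log, which is exactly why Spanner invests in accurate clocks and Calvin invests in a consensus-ordered log.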
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-16281835056349893662019-02-16T18:27:00.001-08:002019-02-16T18:27:31.559-08:00Dropwizard Tutorial : Building a simple microservice in JAVA<div dir="ltr" style="text-align: left;" trbidi="on">
khangaonkar.blogspot has moved to <a href="http://heavydutysoftware.com/">heavydutysoftware.com</a>.<br />
<br />
Please continue reading the blog at <a href="http://www.heavydutysoftware.com/2019/02/14/dropwizard-tutorial-building-a-microservice-in-java/">http://www.heavydutysoftware.com/2019/02/14/dropwizard-tutorial-building-a-microservice-in-java/</a><br />
</div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-25030946154374736162019-01-20T18:31:00.000-08:002019-01-20T18:31:06.829-08:00A Microservices Introduction<div dir="ltr" style="text-align: left;" trbidi="on">
Modern distributed applications are built as a suite of microservices. In this blog we discuss the characteristics of microservices. We also compare microservices to their predecessors like SOA and monolithic applications, and point out the benefits and downsides of a microservices architecture.<br />
<br />
<h2 style="text-align: left;">
1.0 Introduction</h2>
<br />
Let us start with a little bit of history and go back to the late 90s or early 2000s. Web applications were monolithic. A single web container would serve the entire application. Even worse, a single web container would serve multiple applications. Not only was this not scalable, it was a development and maintenance nightmare. A single bug could bring multiple applications down. And there was an ownership issue: with multiple teams/developers contributing code, when there was a bug, ownership was not clear and bugs would bounce around among developers.<br />
<br />
Around the mid 2000s the new buzzword was service oriented architecture (SOA). This was promoted by large web application server companies. See my blog on <a href="http://khangaonkar.blogspot.com/2010/02/service-oriented-architecture.html">SOA</a> written in 2010. SOA encouraged a number of good design philosophies such as interface based programming, loosely coupled applications, and asynchronous interaction. REST, XML, JSON and messaging platforms enabled SOA. SOA was a big improvement, but the tools and deployment technologies were still heavyweight.<br />
<br />
The microservices architecture is the next step in evolution further improving the ideas from SOA.<br />
<br />
Many dismiss microservices as another buzz word. But having developed real world applications using tools listed in section 3.0, I see real value and benefit in this architecture.<br />
<br />
<h2 style="text-align: left;">
2.0 Description</h2>
<br />
The main idea around microservices is that large complex systems are easier to build, maintain and scale using independently built and owned smaller services that work together.<br />
<br />
Each microservice is a modular fine grained application providing a specific service. Let us say you have an application that has a UI, authentication, APIs for customer info, APIs for uploading documents, and APIs for analytics. You might have a microservice for the UI, a microservice for the customer APIs, a document upload microservice, and an analytics microservice.<br />
<br />
A microservice is fully functional.<br />
<br />
A microservice performs one specific business or IT function.<br />
<br />
The development of the microservice can be done independently.<br />
<br />
A microservice runs as its own process.<br />
<br />
A microservice communicates using common protocols such as REST/Http.<br />
<br />
A microservice offers services via its APIs. It can communicate with other microservices using their APIs.<br />
<br />
A microservice is deployable to production independently.<br />
<br />
When your application has multiple microservices, each can be developed in the programming language or framework best suited for that service.<br />
<br />
A microservice should scale horizontally by just running more instances of the microservice.<br />
<br />
Testing, bug fixing, performance tuning etc on the microservice should happen independently without affecting other microservices.<br />
<br />
The above listed characteristics make it easier to build large complex systems.<br />
<br />
<br />
<h2 style="text-align: left;">
3.0 Enabling Technologies</h2>
<br />
A number of newer frameworks have made building microservices easier.<br />
<br />
For Java programmers, <a href="http://dropwizard.io/">Dropwizard</a> and <a href="http://spring.io/projects/spring-boot">SpringBoot</a> are very useful frameworks for building microservices. The old way was monolithic application servers like WebSphere, WebLogic, JBoss etc. Dropwizard and SpringBoot turn the tables by embedding the web server within your java application. Development is much easier as you are developing a plain java application with a main method. The entire microservice is packaged in one jar and can be run with the java -jar command. For additional information, please read my blog <a href="http://khangaonkar.blogspot.com/2018/04/tomcat-vs-dropwizard.html">comparing Dropwizard to Tomcat</a>. For JavaScript, Python and other languages there are similar frameworks.<br />
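The embedded-server idea is easy to see in a toy sketch. The example below uses the JDK's built-in HttpServer purely for illustration; Dropwizard and SpringBoot embed Jetty or Tomcat and add configuration, metrics and much more. The class name, port and endpoint here are made up.<br />

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// A microservice as a plain java application with a main method.
// The web server is embedded inside the process, which is conceptually
// what Dropwizard and SpringBoot do with Jetty/Tomcat.
public class HelloMicroservice {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "{\"message\":\"hello\"}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // serves requests until the process is killed
    }
}
```

Running java HelloMicroservice starts the whole service as one process, which is exactly the property that makes packaging and deployment simple.<br />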
<br />
<br />
To start with microservices, a framework as mentioned above is all you need. Once you have developed and use several microservices, the following platforms may be useful.<br />
<br />
<a href="https://www.docker.com/">Docker</a> is a containerization technology that makes it easier to manage production deployments. This is of interest for a dev-ops person who has to roll out services to production.<br />
<br />
<a href="https://kubernetes.io/">Kubernetes</a> is a platform for automation, deployment and scaling of containerized applications.<br />
<br />
<br />
<h2 style="text-align: left;">
4.0 Disadvantages</h2>
<br />
For smaller businesses and smaller applications, the overhead of many microservices could be a problem. If your infrastructure is one or two $20 per month VMs on AWS (or other cloud providers), you will not have enough memory/cpu/disk for multiple microservices.<br />
<br />
The increased network communication is a cost.<br />
<br />
Each microservice is its own process. The remote calls have a serialization/deserialization cost.<br />
<br />
<h2 style="text-align: left;">
5.0 Conclusion</h2>
<div>
<br /></div>
Microservices are a logical next step in the evolution of the development of complex applications.<br />
They are a best practice. But they are not a silver bullet that solves every problem.<br />
<br />
<h2 style="text-align: left;">
Contact:</h2>
<br />
If you like my blogs and would like to use my services, please visit my website<br />
<a href="http://www.heavydutysoftware.com/">http://www.heavydutysoftware.com</a><br />
<br />
<br />
<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com4tag:blogger.com,1999:blog-5008017311510568944.post-9552720535726616692018-12-02T08:23:00.000-08:002018-12-02T08:23:56.834-08:00Apache kafka Streams <div dir="ltr" style="text-align: left;" trbidi="on">
Apache Kafka is a popular distributed messaging and streaming open source system. A key differentiator for Kafka is that its distributed broker architecture makes it highly scalable. Earlier versions of Kafka were more about messaging. I have a number of blogs on Kafka messaging some of which are listed below in the related blogs section.<br />
<br />
This blog introduces Kafka streams which builds on messaging.<br />
<br />
<h2 style="text-align: left;">
1.0 Introduction</h2>
<br />
In a traditional Kafka producer/consumer application, producers write messages to a topic and consumers consume the messages. The consumer may process the message and then write it to a database or filesystem, or even discard it. For a consumer to write the message back to another topic, it has to create a producer.<br />
<br />
Kafka streams is a higher level library that lets you build a processing pipeline on streams of messages, where each stream processor reads a message, does some analytics such as counting, categorization or aggregation, and then potentially writes a result back to another topic.<br />
<br />
<h2 style="text-align: left;">
2.0 Use Cases</h2>
<br />
Analytics from e-commerce site usage.<br />
<br />
Analytics from any distributed application.<br />
<br />
Distributed processing of any kind of event or data streams.<br />
<br />
Transforming from monolithic to micro-services architecture.<br />
<br />
Moving away from database intensive architectures.<br />
<br />
<h2 style="text-align: left;">
3.0 When to use Kafka streams ?</h2>
<br />
If yours is a traditional messaging application, where you need the broker to hold on to messages till they get processed by a consumer, then the producer/consumer framework might be suitable. Here Kafka competes with ActiveMQ, WebSphere MQ and other traditional message brokers. Here the processing of each message is independent of other messages.<br />
<br />
If yours is an analytics style application, where you do different forms of counting, aggregation, and slicing/dicing on a stream of data, then Kafka streams might be an appropriate library. Here the processing is over a set of messages in the stream. In this space, Kafka competes with analytics frameworks like Apache Spark, Storm, Splunk etc.<br />
<br />
If you were to use producers/consumers for an analytics style application, you would end up creating many producers/consumers, you would probably have to read and write a database several times, you would need to maintain in-memory state, and you would probably use a third party library for analytics primitives. The Kafka streams library makes all this easier for you.<br />
<br />
<h2>
4.0 Features</h2>
<br />
Some key features of Kafka streams are:<br />
<br />
Provides an API for stream processing primitives such as counting, aggregation, categorization etc. The API supports timing windows.<br />
<br />
Message processing is one at a time. In producer/consumer, messages are generally processed in batches.<br />
<br />
Fault tolerant local state is provided by the library. In consumers, any state has to be managed by the application.<br />
<br />
Supports exactly once (once and only once) message delivery. In producer/consumer, it is at least once delivery.<br />
<br />
No need to deal with lower level messaging concepts like partitions, producers, consumers, and polling.<br />
<br />
Self contained complete library that handles both messaging and processing. No need for other third party libraries like Spark.<br />
<br />
<h2 style="text-align: left;">
5.0 Concepts </h2>
<br />
A stream is an unbounded sequence of Kafka messages on a topic.<br />
<br />
A stream processor is a piece of code that gets a message, does some processing on it, perhaps stores some in memory state and then writes to another topic for processing by another processor. This is also known as a node.<br />
<br />
A stream application is a set of processors where the output of one processor is further processed by one or more other processors. A stream application can be depicted as a graph with processors as vertices and streams/topics as edges.<br />
<br />
A source processor is the first node in the topology. It has no upstream processors and gets messages from a topic. A sink processor has no downstream processors and will typically write a result somewhere.<br />
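The source, processor and sink roles can be modeled with a toy example in plain java. This is only a conceptual sketch, not the Kafka Streams API, and the page view message format is made up: messages flow in from a source, a processor extracts a product id from each, and a sink aggregates counts.<br />

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Toy model of a stream topology (plain java, not the Kafka Streams API):
// source -> processor -> sink.
public class ToyTopology {
    // Each message is assumed to look like "productId,page".
    public static Map<String, Long> run(List<String> pageViews) {
        return pageViews.stream()                  // source: the stream of messages
                .map(line -> line.split(",")[0])   // processor: extract the product id
                .collect(Collectors.groupingBy(    // sink: count views per product
                        p -> p, Collectors.counting()));
    }
}
```

A real Kafka Streams topology does the same kind of wiring, except each arrow is a topic and each stage can run on a different node.<br />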
<br />
The figure below shows a sample application topology<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_wpuw-b4DdAlWEAudM-XnOdG3f-WTpGNyolPscghrrIiacavTQnLSMvgIsva7NTLdKg-BuPfinEQINb_HkofWbOHQNOOsr4WtaCiV_CQY6cruQ9Tlf5vPIV1zsmYSipLbu45zU43LSQA/s1600/Kafka+Stream+Topology.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_wpuw-b4DdAlWEAudM-XnOdG3f-WTpGNyolPscghrrIiacavTQnLSMvgIsva7NTLdKg-BuPfinEQINb_HkofWbOHQNOOsr4WtaCiV_CQY6cruQ9Tlf5vPIV1zsmYSipLbu45zU43LSQA/s400/Kafka+Stream+Topology.jpg" width="400" /></a></div>
<br />
<br />
<br />
<h2 style="text-align: left;">
6.0 Programming model</h2>
<br />
There are 2 core programming models. Below are some sample code snippets.<br />
<br />
<h3 style="text-align: left;">
6.1 Streams DSL</h3>
<div>
This is a higher level API built on top of the processor API. Great for beginners.<br />
<br />
KStream models the stream of messages. KTable is the in-memory store. You can convert from stream to table and vice versa.</div>
<br />
Example: Simple analytics on a stream of pageviews from e-commerce site<br />
<br />
<span style="color: purple;">// consume from a topic</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">StreamsBuilder builder = new StreamsBuilder() ;</span><br />
<span style="color: purple;">KStream&lt;String, String&gt; pageViewlines = builder.stream("someTopic") ;</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">// From each line extract productid and create a table key=productid,value=count</span><br />
<span style="color: purple;">// We get page view count by product</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">KTable&lt;String, Long&gt; productCounts = pageViewlines.flatMapValues(value -> getProduct(value))</span><br />
<span style="color: purple;">.groupBy((key, value) -> value)</span><br />
<span style="color: purple;">.count() ;</span><br />
<br />
<span style="color: purple;">// write the running counts to another topic or storage</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">productCounts.toStream().to("productCountsTopic", Produced.with(Serdes.String(), Serdes.Long())) ;</span><br />
<br />
<h3 style="text-align: left;">
6.2 Processor API</h3>
<br />
The same example using the Processor API:<br />
<br />
<span style="color: purple;">public class ProductFromPageViewProcessor implements Processor&lt;String, String&gt; {</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">  private KeyValueStore&lt;String, Long&gt; pCountStore ;</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">  // Do any initialization here</span><br />
<span style="color: purple;">  // such as loading stores or scheduling punctuate</span><br />
<span style="color: purple;">  public void init(ProcessorContext context) {</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">    // get the store that will store counts</span><br />
<span style="color: purple;">    pCountStore = (KeyValueStore&lt;String, Long&gt;) context.getStateStore("pcounts") ;</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">    // schedule a punctuate to periodically send the product counts to a downstream processor</span><br />
<span style="color: purple;">    // every 5 secs</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">    context.schedule(5000, PunctuationType.STREAM_TIME, (timestamp) -> {</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">      // iterate over all values in the pCountStore</span><br />
<span style="color: purple;">      KeyValueIterator&lt;String, Long&gt; iter = pCountStore.all() ;</span><br />
<span style="color: purple;">      while (iter.hasNext()) {</span><br />
<span style="color: purple;">        KeyValue&lt;String, Long&gt; val = iter.next() ;</span><br />
<span style="color: purple;">        context.forward(val.key, val.value) ;</span><br />
<span style="color: purple;">      }</span><br />
<span style="color: purple;">      iter.close() ;</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">      context.commit() ;</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">    }) ;</span><br />
<span style="color: purple;">  }</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">  // Called once for every line or message on the consumed topic</span><br />
<span style="color: purple;">  public void process(String k, String line) {</span><br />
<span style="color: purple;">    String productId = getProductId(line) ;</span><br />
<span style="color: purple;">    Long count = pCountStore.get(productId) ;</span><br />
<span style="color: purple;">    if (count == null) {</span><br />
<span style="color: purple;">      pCountStore.put(productId, 1L) ;</span><br />
<span style="color: purple;">    } else {</span><br />
<span style="color: purple;">      pCountStore.put(productId, count + 1) ;</span><br />
<span style="color: purple;">    }</span><br />
<span style="color: purple;">  }</span><br />
<span style="color: purple;">}</span><br />
<br />
<br />
<h2 style="text-align: left;">
7.0 Conclusion</h2>
<br />
As you can see from both API examples, it is about processing streams of data, doing some analytics and producing results. No need to poll or deal with lower level details like partitions and consumers. The streams model moves you away from legacy database intensive architectures, where data is written to a database first and then slow inefficient queries try to do analytics.<br />
<br />
<br />
Some disadvantages of Kafka Streams are:<br />
<br />
You are tied to Kafka and have to go through a Kafka topic. Other Streaming libraries like Spark are more generic and might have more analytics features.<br />
<br />
There is no way to pause and resume a stream. If load suddenly increases or you want to pause the system for maintenance, there is no clean mechanism. In producer/consumer, there is an explicit pause/resume API. Other streaming libraries also have some flow control mechanisms.<br />
<br />
<h2 style="text-align: left;">
8.0 Related Blogs</h2>
<div>
<a href="http://khangaonkar.blogspot.com/2014/04/apache-kafka-introduction-should-i-use.html">8.1 Apache Kafka Introduction</a></div>
<div>
<br /></div>
<div>
<a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">8.2 Apache kafka Basic tutorial</a></div>
<div>
<br /></div>
<div>
<a href="http://khangaonkar.blogspot.com/2015/01/apache-kafka-java-tutorial-3-once-and.html">8.3 Apache Kafka once and once delivery</a></div>
<div>
<br /></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-43300884224804363652018-08-26T16:02:00.000-07:002018-08-26T16:02:14.893-07:00ElasticSearch Tutorial<div dir="ltr" style="text-align: left;" trbidi="on">
ElasticSearch is a distributed, scalable search and analytics engine.<br />
<br />
It is similar to Apache Solr, with the difference that it is built to be scalable from the ground up.<br />
<br />
Like Solr, ElasticSearch is built on top of Apache Lucene which is a full text search library.<br />
<br />
What is the difference between a database and a search engine? Read <a href="https://khangaonkar.blogspot.com/2018/06/search-vs-database-do-i-need-search.html">this blog</a>.<br />
<br />
<h3 style="text-align: left;">
1.0 Key features</h3>
<br />
Based on very successful search library Apache Lucene.<br />
Provides the ability to store and search documents.<br />
Supports full text search.<br />
Schema free.<br />
Ability to analyze data - count, summarize, aggregate etc.<br />
Horizontally scalable and distributed architecture.<br />
REST API support.<br />
Easy to install and operate.<br />
API support for several languages.<br />
<br />
<h3 style="text-align: left;">
2.0 Concepts</h3>
An elasticsearch server process, called a node, is a single instance of a java process.<br />
<br />
A key differentiator for elasticsearch is that it was built to be horizontally scalable from the ground up.<br />
<br />
In a production environment, you generally run multiple nodes. A cluster is a collection of nodes that store your data.<br />
<br />
A document is a unit of data that can be stored in elasticsearch. JSON is the format.<br />
<br />
An index is a collection of documents of a particular type. For example, you might have one index for customer documents and another for product information. The index is the data structure that helps the search engine find documents fast. A document being stored is analyzed and broken into tokens based on rules. Each token is indexed - meaning that, given the token, there is a pointer back to the document - just like the index at the back of a book. Full text search, the ability to search on any token or partial token in the document, is what differentiates a search engine from a more traditional database.<br />
<br />
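The book-index analogy can be sketched in a few lines of plain java. This is a toy illustration only, not how Lucene actually builds or stores its index: each document is tokenized, and each token maps back to the ids of the documents that contain it.<br />

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy index: token -> set of document ids that contain the token.
public class ToyIndex {
    private final Map<String, Set<Integer>> index = new HashMap<>();

    // Analyze a document: split into tokens and point each token at the doc.
    public void add(int docId, String text) {
        for (String token : text.toLowerCase().split("\\W+")) {
            index.computeIfAbsent(token, t -> new HashSet<>()).add(docId);
        }
    }

    // Full text search: which documents contain this token?
    public Set<Integer> search(String token) {
        return index.getOrDefault(token.toLowerCase(), Collections.emptySet());
    }
}
```

A database scans or uses per-column indexes; a search engine answers "which documents contain this word" directly from a structure like the one above.<br />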
Elasticsearch documentation sometimes uses the term inverted index to refer to its indexes. This author believes that the term "inverted index" is just confusing and that this is nothing but an index.<br />
<br />
In the real world, you never use just one node. You will use an elasticsearch cluster with multiple nodes. To scale horizontally, elasticsearch partitions the index into shards that get assigned to nodes. For redundancy, the shards are also replicated, so that they are available at multiple nodes.<br />
<br />
<h3 style="text-align: left;">
3.0 Install ElasticSearch</h3>
Download from <a href="https://www.elastic.co/downloads/elasticsearch">https://www.elastic.co/downloads/elasticsearch</a> the latest version of elasticsearch. You will download elasticsearch-version.tar.gz.<br />
<br />
Untar it to a directory of your choice.<br />
<br />
<h3 style="text-align: left;">
4.0 Start ElasticSearch</h3>
<br />
For this tutorial we will use just a single node. The rest of the tutorial will use curl to send http requests to an elasticsearch node to demonstrate basic functions. Most of it is self explanatory.<br />
<br />
To start elasticsearch type<br />
<br />
<span style="color: blue;">install_dir/bin/elasticsearch</span><br />
<br />
To confirm it is running<br />
<br />
<div style="text-align: left;">
<span class="s1" style="color: blue;">curl -X GET "localhost:9200/_cat/health?v"</span><br />
<span class="s1"><br /></span>
</div>
<h3 style="text-align: left;">
5.0 Create an index</h3>
<br />
Let us create an index person to store person information such as name, sex, age, interests etc.<br />
<br />
<div style="text-align: left;">
<span class="s1" style="color: blue;">curl -X PUT "localhost:9200/person"</span><span class="s1" style="background-color: white; color: purple;">{"acknowledged":true,"shards_acknowledged":true,"index":"person"}</span></div>
<div style="text-align: left;">
<span class="s1"><br /></span></div>
<div style="text-align: left;">
<span class="s1">List the indexes created so far</span></div>
<div style="text-align: left;">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1" style="color: blue;">curl -X GET "localhost:9200/_cat/indices?v"</span><br />
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1" style="color: purple;">health status index<span class="Apple-converted-space"> </span>uuid <span class="Apple-converted-space"> </span>pri rep docs.count docs.deleted store.size pri.store.size</span></div>
<div class="p1">
<span class="s1" style="color: purple;">yellow open <span class="Apple-converted-space"> </span>person <span class="Apple-converted-space"> </span>AJCSCg0gTXaX6N5g6malnA <span class="Apple-converted-space"> </span>5 <span class="Apple-converted-space"> </span>1<span class="Apple-converted-space"> </span>0<span class="Apple-converted-space"> </span>0<span class="Apple-converted-space"> </span>1.1kb<span class="Apple-converted-space"> </span>1.1kb</span></div>
<div style="text-align: left;">
<span class="s1">
<style type="text/css">
p.p1 {margin: 0.0px 0.0px 0.0px 0.0px; font: 11.0px Menlo; color: #000000; background-color: #ffffff}
span.s1 {font-variant-ligatures: no-common-ligatures}
</style>
</span></div>
<div class="p1">
<br /></div>
<div style="text-align: left;">
<h3 style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;">6.0 Add Documents</span></h3>
</div>
<div style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span></div>
<div style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;">Let us add a few documents to the person index.</span><br />
<span style="font-variant-ligatures: no-common-ligatures;">In the url, _doc is the type of document. It is a way to group documents of a particular type.</span><br />
<span style="font-variant-ligatures: no-common-ligatures;">In /person/_doc/1, the number 1 is the id of the document we provided. If we do not provide an id, elasticsearch will generate an id.</span><br />
<span style="font-variant-ligatures: no-common-ligatures;">You will notice that the data elasticsearch accepts is JSON.</span></div>
<div style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span></div>
<div class="p1">
<span class="s1" style="color: blue;">curl -X PUT "localhost:9200/person/_doc/1" -H 'Content-Type: application/json' -d'</span></div>
<div class="p1">
<span class="s1" style="color: blue;">{</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"name": "Big Stalk",</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"sex":"male",</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"age":41,</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"interests":"Hiking Cooking Reading"</span></div>
<div class="p1">
<span class="s1" style="color: blue;">}</span></div>
<div class="p1">
<span class="s1" style="color: blue;">'</span></div>
<div class="p1">
<span class="s1" style="color: blue;">curl -X PUT "localhost:9200/person/_doc/2" -H 'Content-Type: application/json' -d'</span></div>
<div class="p1">
<span class="s1" style="color: blue;">{</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"name": "Kelly Kidney",</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"sex":"female",</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"age":35,</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"interests":"Dancing Cooking Painting"</span></div>
<div class="p1">
<span class="s1" style="color: blue;">}</span></div>
<div class="p1">
<span class="s1" style="color: blue;">'</span></div>
<div class="p2">
<span style="color: blue;"><span class="s1"></span><br /></span></div>
<div class="p1">
<span class="s1" style="color: blue;">curl -X PUT "localhost:9200/person/_doc/3" -H 'Content-Type: application/json' -d'</span></div>
<div class="p1">
<span class="s1" style="color: blue;">{</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"name": "Marco Dill",</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"sex":"male",</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"age":26,</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"interests":"Sports Reading Painting"</span></div>
<div class="p1">
<span class="s1" style="color: blue;">}</span></div>
<div class="p1">
<span class="s1" style="color: blue;">'</span></div>
<div class="p2">
<span style="color: blue;"><span class="s1"></span><br /></span></div>
<div class="p1">
<span class="s1" style="color: blue;">curl -X PUT "localhost:9200/person/_doc/4" -H 'Content-Type: application/json' -d'</span></div>
<div class="p1">
<span class="s1" style="color: blue;">{</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"name": "Missy Ketchat",</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"sex":"female",</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"age":22,</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"interests":"Singing Cooking Dancing"</span></div>
<div class="p1">
<span class="s1" style="color: blue;">}</span></div>
<div class="p1">
<span class="s1" style="color: blue;">'</span></div>
<div class="p2">
<span style="color: blue;"><span class="s1"></span><br /></span></div>
<div class="p1">
<span class="s1" style="color: blue;">curl -X PUT "localhost:9200/person/_doc/5" -H 'Content-Type: application/json' -d'</span></div>
<div class="p1">
<span class="s1" style="color: blue;">{</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"name": "Hal Spito",</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"sex":"male",</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"age":31,</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"interests":"Sports Singing Hiking"</span></div>
<div class="p1">
<span class="s1" style="color: blue;">}</span></div>
<style type="text/css">
p.p1 {margin: 0.0px 0.0px 0.0px 0.0px; font: 11.0px Menlo; color: #000000; background-color: #ffffff}
p.p2 {margin: 0.0px 0.0px 0.0px 0.0px; font: 11.0px Menlo; color: #000000; background-color: #ffffff; min-height: 13.0px}
span.s1 {font-variant-ligatures: no-common-ligatures}
</style>
<br />
<div class="p1">
<span class="s1" style="color: blue;">'</span></div>
<br />
<div style="text-align: left;">
<h3 style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;">7.0 Search or Query</span></h3>
</div>
<div style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;">The query can be provided either as a query parameter or in the body of a GET. Yes, Elasticsearch accepts query data in the body of a GET request. </span><br />
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span>
<br />
<h4 style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;">7.1 Query string example</span></h4>
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span></div>
<div style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;">To retrieve all documents:</span></div>
<div style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span></div>
<div style="text-align: left;">
</div>
<div class="p1">
<span class="s1" style="color: blue;">curl -X GET "localhost:9200/person/_search?q=*"</span><br />
<span class="s1" style="color: blue;"><br /></span>
<span class="s1">Response is not shown to save space.</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">Exact match search as query string:</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1" style="color: blue;">curl -X GET "localhost:9200/person/_search?q=sex:female"</span><br />
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1" style="color: purple;">{"took":14,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":2,"max_score":0.18232156,"hits":[{"_index":"person","_type":"_doc","_id":"2","_score":0.18232156,"_source":</span></div>
<div class="p1">
<span class="s1" style="color: purple;">{</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"name": "Kelly Kidney",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"sex":"female",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"age":35,</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"interests":"Dancing Cooking Painting"</span></div>
<div class="p1">
<span class="s1" style="color: purple;">}</span></div>
<div class="p1">
<span class="s1" style="color: purple;">},{"_index":"person","_type":"_doc","_id":"4","_score":0.18232156,"_source":</span></div>
<div class="p1">
<span class="s1" style="color: purple;">{</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"name": "Missy Ketchat",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"sex":"female",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"age":22,</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"interests":"Singing Cooking Dancing"</span></div>
<div class="p1">
<span class="s1" style="color: purple;">
</span></div>
<div class="p1">
<span class="s1" style="color: purple;">}</span></div>
<div class="p1">
<span class="s1"><br /></span>
<br />
<h4 style="text-align: left;">
<span class="s1">7.2 GET body examples</span></h4>
<span class="s1"><br /></span>
<span class="s1">The query syntax, when sent as a request body, is much richer and more expressive. It merits a blog post of its own.</span></div>
<div class="p1">
<span class="s1">This query finds persons with singing or dancing in the interests field. This is a full-text search on a field.</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;">curl -X GET "localhost:9200/person/_search" -H 'Content-Type: application/json' -d'</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;">{</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> "query": {</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> "bool": {</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> "should": [</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> { "match": { "interests": "singing" } },</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> { "match": { "interests": "dancing" } }</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> ]</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> }</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> }</span></div>
<div class="p1">
<span class="s1" style="color: blue;"></span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;">}'</span></div>
<div class="p1">
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span></div>
<div class="p1">
<span class="s1" style="color: purple;">{"took":15,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":3,"max_score":0.87546873,"hits":[{"_index":"person","_type":"_doc","_id":"4","_score":0.87546873,"_source":</span></div>
<div class="p1">
<span class="s1" style="color: purple;">{</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"name": "Missy Ketchat",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"sex":"female",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"age":22,</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"interests":"Singing Cooking Dancing"</span></div>
<div class="p1">
<span class="s1" style="color: purple;">}</span></div>
<div class="p1">
<span class="s1" style="color: purple;">},{"_index":"person","_type":"_doc","_id":"5","_score":0.2876821,"_source":</span></div>
<div class="p1">
<span class="s1" style="color: purple;">{</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"name": "Hal Spito",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"sex":"male",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"age":31,</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"interests":"Sports Singing Hiking"</span></div>
<div class="p1">
<span class="s1" style="color: purple;">}</span></div>
<div class="p1">
<span class="s1" style="color: purple;">},{"_index":"person","_type":"_doc","_id":"2","_score":0.18232156,"_source":</span></div>
<div class="p1">
<span class="s1" style="color: purple;">{</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"name": "Kelly Kidney",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"sex":"female",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"age":35,</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"interests":"Dancing Cooking Painting"</span></div>
<div class="p1">
</div>
<div class="p1">
<span class="s1" style="color: purple;">}</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">Below is a range query on a field.</span><br />
<span class="s1"><br /></span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;">curl -X GET "localhost:9200/person/_search" -H 'Content-Type: application/json' -d'</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;">{</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> "query": {</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> "range": {</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> "age": {</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> "gte": 30, "lte": 40</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> }</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> }</span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;"> }</span></div>
<div class="p1">
<span class="s1" style="color: blue;"></span></div>
<div class="p1">
<span style="color: blue; font-variant-ligatures: no-common-ligatures;">}'</span></div>
<div class="p1">
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span></div>
<div class="p1">
<span class="s1" style="color: purple;">{"took":1,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":2,"max_score":1.0,"hits":[{"_index":"person","_type":"_doc","_id":"5","_score":1.0,"_source":</span></div>
<div class="p1">
<span class="s1" style="color: purple;">{</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"name": "Hal Spito",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"sex":"male",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"age":31,</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"interests":"Sports Singing Hiking"</span></div>
<div class="p1">
<span class="s1" style="color: purple;">}</span></div>
<div class="p1">
<span class="s1" style="color: purple;">},{"_index":"person","_type":"_doc","_id":"2","_score":1.0,"_source":</span></div>
<div class="p1">
<span class="s1" style="color: purple;">{</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"name": "Kelly Kidney",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"sex":"female",</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"age":35,</span></div>
<div class="p1">
<span class="s1" style="color: purple;"><span class="Apple-converted-space"> </span>"interests":"Dancing Cooking Painting"</span></div>
<div class="p1">
<span class="s1" style="color: purple;">}</span></div>
<div class="p1">
</div>
<div class="p1">
<span class="s1" style="color: purple;">}]}}</span></div>
<div style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span></div>
<div style="text-align: left;">
<h3 style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;">8.0 Update a document</span></h3>
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span>
<br />
<div class="p1">
<span class="s1" style="color: blue;">curl -X POST "localhost:9200/person/_doc/5/_update" -H 'Content-Type: application/json' -d'</span></div>
<div class="p1">
<span class="s1" style="color: blue;">{</span></div>
<div class="p1">
<span class="s1" style="color: blue;"><span class="Apple-converted-space"> </span>"doc": { "name": "Hal Spito Jr" }</span></div>
<div class="p1">
<span class="s1" style="color: blue;">}</span></div>
<span style="color: blue; font-variant-ligatures: no-common-ligatures;">
</span><br />
<div class="p1">
<span class="s1" style="color: blue;">'</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div style="text-align: left;">
<span class="s1">After executing the above update, do a search for "Jr". The above document will be returned.</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<br />
<h3 style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;">9.0 Delete a document</span></h3>
</div>
<div style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span>
<span style="font-variant-ligatures: no-common-ligatures;">
</span><br />
<div class="p1">
<span class="s1" style="color: blue;">curl -X DELETE "localhost:9200/person/_doc/1"</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div style="text-align: left;">
<span class="s1">This deletes the document with id 1. Searches will no longer return this document.</span></div>
</div>
<div style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;"><br /></span></div>
<div style="text-align: left;">
<h3 style="text-align: left;">
<span style="font-variant-ligatures: no-common-ligatures;">10.0 Delete Index</span></h3>
</div>
<div style="text-align: left;">
<span style="color: blue;">curl -X DELETE "localhost:9200/person"</span></div>
<div class="p1">
<span class="s1" style="color: purple;">{"acknowledged":true}</span><br />
<span class="s1"><br /></span>
That deletes the index we created.</div>
<div class="p1">
<span class="s1"><br /></span>
<br />
<h3 style="text-align: left;">
<span class="s1">11.0 Conclusion</span></h3>
<span class="s1"><br /></span>
<span class="s1">This has been a brief introduction to Elasticsearch, just enough to get you started. There are many more details in each category of APIs. We will explore them in subsequent posts. </span></div>
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-66992895789760720292018-06-23T15:30:00.000-07:002018-06-23T15:30:36.461-07:00Search vs Database : Do I need a search engine ?<div dir="ltr" style="text-align: left;" trbidi="on">
Since the beginning of time, applications have been developed with a database at the backend to store application data.<br />
<br />
Relational databases like Oracle and MySQL took databases to the next level with the relational model, transactions, and SQL. They have been hugely successful for the last 30+ years.<br />
<br />
In the last 10+ years, big data databases like HBase, Cassandra, and MongoDB arrived to solve data-at-scale issues that were not handled by relational databases. These databases handle scale, high availability, and replication better than a relational database.<br />
<br />
Also in the last 10 years, search engines like Apache Solr and Elasticsearch have become available. They store your data like a database, but offer much better search and analytics than a traditional database.<br />
<br />
So when do you use a database, when do you use a search engine, and when do you need both? That is what this blog discusses.<br />
<br />
Some differences between a database and a search engine are:<br />
<br />
<h3 style="text-align: left;">
1.0 Indexes</h3>
<br />
In a database, to search efficiently, you define indexes. But then you are required to search by the index key. If you search on other fields, the index cannot be used and the search is inefficient.<br />
<br />
A search engine by default indexes all fields. This gives tremendous flexibility. If you add a new type of search to your application, you do not need a new index.<br />
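As a toy illustration (plain Python, not real database or search-engine code), compare a single-key index with indexing every field:

```python
# Toy illustration: a database-style index covers one chosen key,
# while a search-engine-style index covers every (field, value) pair.
docs = [
    {"id": 1, "name": "Dana", "city": "Seattle"},
    {"id": 2, "name": "Mike", "city": "Burlingame"},
]

# Database style: one index, on "name" only.
name_index = {d["name"]: d["id"] for d in docs}

# Search-engine style: an index entry for every field of every document.
all_fields_index = {}
for d in docs:
    for field, value in d.items():
        all_fields_index.setdefault((field, str(value)), []).append(d["id"])

# A query on "city" is a fast lookup in the all-fields index,
# but would require a full scan with only the "name" index.
print(all_fields_index[("city", "Seattle")])  # [1]
```

With only the database-style index, a new kind of search (say, by city) would need a new index; the search-engine-style index already covers it.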
<br />
<h3 style="text-align: left;">
2.0 Full text search</h3>
<div>
<br /></div>
A search engine excels at full text search.<br />
<br />
Say one document contains the line "Hello from england".<br />
And another document contains the line "Hello from england and europe".<br />
<br />
A search for the term "england" will return both documents. A search for the term "europe" will return only the second document.<br />
<br />
Databases, on the other hand, are more convenient for exact-value searches.<br />
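The england/europe example can be sketched as a toy inverted index in Python (illustrative only; real engines are far more sophisticated):

```python
# Toy full-text search: map each term to the set of documents containing it.
docs = {
    1: "Hello from england",
    2: "Hello from england and europe",
}

inverted = {}
for doc_id, text in docs.items():
    for term in text.lower().split():
        inverted.setdefault(term, set()).add(doc_id)

print(sorted(inverted["england"]))  # both documents: [1, 2]
print(sorted(inverted["europe"]))   # only the second: [2]
```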
<br />
<h3 style="text-align: left;">
3.0 Flexible document format</h3>
<br />
Databases are limited in the structure of data, such as rows and columns or key/value pairs.<br />
<br />
Search engines generally consume a wider variety of documents. While JSON is the most popular format for documents that a search engine consumes, third-party libraries are available to parse Word docs, PDFs, etc. for consumption by search engines.<br />
<br />
<h3 style="text-align: left;">
4.0 Analysis and Mapping</h3>
<br />
Every document stored in a search engine goes through a process of analysis and mapping.<br />
<br />
So if you store the document "the Hello 21 from England on 2018-06-15 *", it may get tokenized on whitespace, certain tokens like * or "the" may be discarded, the remaining tokens lowercased, 21 recognized as an integer, and 2018-06-15 recognized as a date.<br />
<br />
When you search, the search query goes through a similar process.<br />
<br />
The benefit of this process is that whether you search for Hello or hello or hElLo, the document is found. With synonyms configured, a search for england, UK, or Britain can still find the document, and a search for 2018-06-15 or 15 June 2018 can still match the date.<br />
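Here is a rough Python sketch of such an analysis pipeline for the example document above. The stopword list and typing rules are simplified assumptions, not Elasticsearch's actual analyzers:

```python
from datetime import date

STOPWORDS = {"the", "a", "an", "on", "from"}  # assumed sample stopword list

def analyze(text):
    """Tokenize on whitespace, discard punctuation and stopwords,
    lowercase words, and type integers and ISO dates."""
    tokens = []
    for raw in text.split():
        if raw == "*" or raw.lower() in STOPWORDS:
            continue  # discarded tokens
        if raw.isdigit():
            tokens.append(int(raw))  # "21" becomes an integer
            continue
        try:
            tokens.append(date.fromisoformat(raw))  # "2018-06-15" becomes a date
        except ValueError:
            tokens.append(raw.lower())  # "Hello" becomes "hello"
    return tokens

print(analyze("the Hello 21 from England on 2018-06-15 *"))
# ['hello', 21, 'england', datetime.date(2018, 6, 15)]
```

Because the query goes through the same pipeline, `analyze("hElLo")` produces the same token `"hello"` that was stored, which is why the match succeeds regardless of case.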
<br />
<h3 style="text-align: left;">
5.0 Write once read many times</h3>
<div>
<br /></div>
As mentioned above, a search engine is very efficient at search, or in other words, at reading.<br />
<br />
However, the analysis, indexing, and storage process for a search engine can be expensive. An update to a document can lead to reindexing.<br />
<br />
For this reason, search engines are better suited when your documents are written once, updated rarely, but searched and read many times.<br />
<br />
<br />
<h3 style="text-align: left;">
6.0 Database better at OLTP</h3>
<br />
For the reasons mentioned above, search engines become inefficient if the documents they store are updated frequently, as they would be in an online transaction processing system.<br />
<br />
A traditional database is more suited for such usage scenarios.<br />
<br />
A traditional database is also better where ACID, or even a lesser degree of transactional integrity, is important.<br />
<br />
<h3 style="text-align: left;">
7.0 Analytics</h3>
<br />
The popular open source search engines Elasticsearch and Apache Solr have done a great job of making analytics easy: basic counting, aggregation, summarization, faceting, etc.<br />
<br />
Analytics on data is much easier and more powerful in a search engine than in a database.<br />
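What faceting amounts to can be sketched in a few lines of plain Python (a real engine does this with aggregations over its index; this only shows the idea):

```python
from collections import Counter

# Facet / terms-aggregation idea: count documents per field value.
docs = [
    {"city": "Seattle", "age": 60},
    {"city": "Los Angeles", "age": 23},
    {"city": "Seattle", "age": 45},
]

city_counts = Counter(d["city"] for d in docs)     # faceting on "city"
avg_age = sum(d["age"] for d in docs) / len(docs)  # a basic aggregation

print(city_counts.most_common())  # [('Seattle', 2), ('Los Angeles', 1)]
print(round(avg_age, 2))          # 42.67
```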
<br />
<h3 style="text-align: left;">
8.0 Summary</h3>
<br />
If<br />
<br />
your queries change frequently<br />
you need to search on fields that change<br />
you need to search on a large variety of fields<br />
you have variety of document formats<br />
you need full text search<br />
you need analytics<br />
your data access pattern is write/update a few times but read many, many times<br />
<br />
then, a search engine in your architecture will certainly help.<br />
<br />
Note that it does not have to be one or the other. Most modern architectures use both a database and a search engine. Depending on the use case, you may choose to store some data in a database and other data in a search engine. Or you may store your data in both: a search engine for better querying and a database for transactional integrity.<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-30875668626269919852018-04-03T06:38:00.000-07:002018-04-03T06:38:05.800-07:00Tomcat vs Dropwizard<div dir="ltr" style="text-align: left;" trbidi="on">
For the last 15 years, for Java web applications, <a href="http://tomcat.apache.org/">Apache Tomcat</a> has been the gold standard as web application server.<br />
<br />
More recently, for cloud and microservices architectures that require deployment of a large number of services, a number of newer frameworks are replacing traditional application servers like Tomcat.<br />
<br />
One such framework is <a href="http://www.dropwizard.io/">Dropwizard</a>. Instead of handing your application over to a complex application server, Dropwizard brings an embedded HTTP server, Jetty, into your plain Java application and significantly simplifies the development model.<br />
<br />
While both enable you to achieve the same end goal of building Java web services and applications, they are different in many ways.<br />
<br />
<h3 style="text-align: left;">
1. Infrastructure</h3>
<div>
<br /></div>
With Tomcat, the web container infrastructure is separate from the application. Tomcat is packaged separately and runs as its own process. The application is developed and packaged separately as a war, and then deployed to Tomcat.<br />
<br />
Dropwizard, on the other hand, is like a library that you add as a dependency to your application. Dropwizard bundles the web server Jetty, which is embedded in your application.<br />
<br />
<h3 style="text-align: left;">
2. Operating system processes</h3>
<div>
<br /></div>
With Tomcat, there is one Java process for many applications. It is more difficult to tune the JVM for production issues like garbage collection, since these depend on application characteristics.<br />
<br />
With Dropwizard, there is one Java process for one application. It is easier to tune the JVM, and the process can be managed easily using Linux tools.<br />
<br />
<h3 style="text-align: left;">
3. Development model</h3>
<br />
With Tomcat, you code classes per the Servlet or JAX-RS specifications, and in the end you produce a war file.<br />
<br />
With Dropwizard, the application you write is a normal Java application that starts from the main method. You still code JAX-RS web resource classes or Servlets (rarely). But in the end you produce a simple jar and run the application by invoking the class that has the main method.<br />
<br />
<h3 style="text-align: left;">
4. Monolithic vs Micro services</h3>
<br />
With Tomcat, you can deploy multiple application wars to the same JVM. This can lead to a monolithic process running multiple applications, which is harder to manage in production as application characteristics vary.<br />
<br />
With Dropwizard, the model is suited to building microservices: one process for one application or service. Since running it is as simple as running a Java class with a main method, you run one process for each microservice. This is easier to manage in production.<br />
<br />
<h3 style="text-align: left;">
5. Class loading</h3>
<br />
In addition to the JVM-provided bootstrap, extension, and system class loaders, Tomcat has application class loaders to load classes from application wars and provide isolation between applications. While many Tomcat developers never deal with this, it does sometimes lead to class loading issues.<br />
<br />
Dropwizard-based applications have only the JVM-provided class loaders, unless the developer writes additional class loaders. This reduces complexity.<br />
<br />
<h3 style="text-align: left;">
6. Debugging and IDE integration</h3>
<br />
Some IDEs claim to be able to do it, but given the resources Tomcat takes, debugging by running Tomcat in the IDE is a real pain. Remote debugging is the only real option.<br />
<br />
With Dropwizard, you are developing just a plain Java application, so it is really easy to run and debug the application from within the IDE.<br />
<br />
<h3 style="text-align: left;">
7. Fringe benefits</h3>
<div>
<br /></div>
<div>
In addition to Jetty, Dropwizard bundles a number of other libraries, like Jersey, Jackson, Guava, and Logback, that are necessary for web services development. It also provides a very simple yaml-based configuration model for your application.</div>
<br />
For the reasons mentioned above, application-server-based technologies have been declining for the last few years, and Tomcat is not immune to the paradigm shift. If you are developing REST-based microservices for the cloud, Dropwizard is a compelling choice.<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-89901911183011901132017-12-03T09:30:00.000-08:002017-12-03T09:30:28.030-08:00MongoDb Query tutorial and cheatsheet<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
MongoDB querying is easy and very powerful, but it is handy to have a cheatsheet around when digging for data. In this tutorial, we list and describe some simple, useful MongoDB queries.<br />
<br />
If you are new to MongoDB, you can read my <a href="http://khangaonkar.blogspot.com/2015/01/mongodb-tutorial-1-introduction.html">mongodb introduction</a>.<br />
<br />
At the bottom of this page, there is some example json representing some customers.<br />
<br />
Copy that to a file, say customer.json.<br />
<br />
Import it into your MongoDB database using the command:<br />
<br />
<style type="text/css">
p.p1 {margin: 0.0px 0.0px 0.0px 0.0px; font: 11.0px Menlo; color: #000000; background-color: #ffffff}
span.s1 {font-variant-ligatures: no-common-ligatures}
</style>
<br />
<div class="p1">
<span class="s1">mongoimport --db yourtestdb --collection customer --file customer.json</span></div>
<br />
<br />
<h4 style="text-align: left;">
1. Find all documents in a collection</h4>
<br />
<div class="p1">
<span class="s1">> db.customer.find()</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314ccca"), "firstname" : "Dana", "lastname" : "Dealer", "age" : 60, "sex" : "F", "status" : "Y", "address" : { "city" : "Seattle", "state" : "WA" }, "favorites" : [ "yellow", "orange" ], "recent" : [ { "product" : "p5", "price" : 110 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccb"), "firstname" : "Dan", "lastname" : "RunsFra", "age" : 23, "sex" : "M", "status" : "N", "address" : { "city" : "LOS Angeles", "state" : "CA" }, "favorites" : [ "red", "orange" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p4", "price" : 8 } ] }</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccc"), "firstname" : "Mike", "lastname" : "North", "age" : 45, "sex" : "M", "status" : "Y", "address" : { "city" : "burlingame", "state" : "CA" }, "favorites" : [ "red", "blue" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<br />
<div class="p1">
<span class="s1">><span class="Apple-converted-space"> </span></span></div>
<br />
<h4 style="text-align: left;">
2. Find all documents based on 1 field equality</h4>
<br />
<br />
<div class="p1">
<span class="s1">> db.customer.find({"lastname":"Dealer"})</span></div>
<br />
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314ccca"), "firstname" : "Dana", "lastname" : "Dealer", "age" : 60, "sex" : "F", "status" : "Y", "address" : { "city" : "Seattle", "state" : "WA" }, "favorites" : [ "yellow", "orange" ], "recent" : [ { "product" : "p5", "price" : 110 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<br />
<h4 style="text-align: left;">
3. Find all documents based on multiple fields AND</h4>
<br />
AND is implicit<br />
<br />
<div class="p1">
<span class="s1">> db.customer.find({"firstname":"Dana","lastname":"Dealer"})</span></div>
<br />
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314ccca"), "firstname" : "Dana", "lastname" : "Dealer", "age" : 60, "sex" : "F", "status" : "Y", "address" : { "city" : "Seattle", "state" : "WA" }, "favorites" : [ "yellow", "orange" ], "recent" : [ { "product" : "p5", "price" : 110 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<br />
The same query with the explicit $and operator:<br />
<br />
<div class="p1">
<span class="s1">> db.customer.find({$and : [{"firstname":"Dana"},{"lastname":"Dealer"}]})</span></div>
<br />
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314ccca"), "firstname" : "Dana", "lastname" : "Dealer", "age" : 60, "sex" : "F", "status" : "Y", "address" : { "city" : "Seattle", "state" : "WA" }, "favorites" : [ "yellow", "orange" ], "recent" : [ { "product" : "p5", "price" : 110 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<br />
<h4 style="text-align: left;">
4. Multiple fields OR</h4>
<br />
<div class="p1">
<span class="s1">db.customer.find({$or : [{"sex":"F"},{status:"N"}]})</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314ccca"), "firstname" : "Dana", "lastname" : "Dealer", "age" : 60, "sex" : "F", "status" : "Y", "address" : { "city" : "Seattle", "state" : "WA" }, "favorites" : [ "yellow", "orange" ], "recent" : [ { "product" : "p5", "price" : 110 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<br />
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccb"), "firstname" : "Dan", "lastname" : "RunsFra", "age" : 23, "sex" : "M", "status" : "N", "address" : { "city" : "LOS Angeles", "state" : "CA" }, "favorites" : [ "red", "orange" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p4", "price" : 8 } ] }</span></div>
<br />
<h4 style="text-align: left;">
5. Comparison operator</h4>
<br />
<div class="p1">
<span class="s1">db.customer.find({"age":{$lt:30}} <span class="Apple-converted-space"> </span>)</span></div>
<br />
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccb"), "firstname" : "Dan", "lastname" : "RunsFra", "age" : 23, "sex" : "M", "status" : "N", "address" : { "city" : "LOS Angeles", "state" : "CA" }, "favorites" : [ "red", "organge" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p4", "price" : 8 } ] }</span></div>
<br />
<div class="p1">
<span class="s1">db.customer.find({"age":{$gt:50}} <span class="Apple-converted-space"> </span>)</span></div>
<br />
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314ccca"), "firstname" : "Dana", "lastname" : "Dealer", "age" : 60, "sex" : "F", "status" : "Y", "address" : { "city" : "Seattle", "state" : "WA" }, "favorites" : [ "yellow", "orange" ], "recent" : [ { "product" : "p5", "price" : 110 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<br />
<br />
<h4 style="text-align: left;">
6. Embedded document nested field</h4>
<br />
<br />
<div class="p1">
<span class="s1">db.customer.find({"address.state":"CA"})</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccb"), "firstname" : "Dan", "lastname" : "RunsFra", "age" : 23, "sex" : "M", "status" : "N", "address" : { "city" : "LOS Angeles", "state" : "CA" }, "favorites" : [ "red", "orange" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p4", "price" : 8 } ] }</span></div>
<br />
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccc"), "firstname" : "Mike", "lastname" : "North", "age" : 45, "sex" : "M", "status" : "Y", "address" : { "city" : "burlingame", "state" : "CA" }, "favorites" : [ "red", "blue" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<br />
<h4 style="text-align: left;">
7. Array element</h4>
<br />
<div class="p1">
<span class="s1">db.customer.find({"favorites":"blue"})</span></div>
<br />
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccc"), "firstname" : "Mike", "lastname" : "North", "age" : 45, "sex" : "M", "status" : "Y", "address" : { "city" : "burlingame", "state" : "CA" }, "favorites" : [ "red", "blue" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<br />
<h4 style="text-align: left;">
8. Array of embedded docs</h4>
<br />
<div class="p1">
<span class="s1">db.customer.find({"recent.price":{$gt:90}})</span></div>
<br />
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314ccca"), "firstname" : "Dana", "lastname" : "Dealer", "age" : 60, "sex" : "F", "status" : "Y", "address" : { "city" : "Seattle", "state" : "WA" }, "favorites" : [ "yellow", "orange" ], "recent" : [ { "product" : "p5", "price" : 110 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<h4 style="text-align: left;">
<span style="font-family: &quot;times&quot;; font-size: small;">9. Project only certain fields - such as only lastname</span></h4>
<div class="p1">
<span style="font-family: &quot;times&quot;; font-size: small;"><br /></span></div>
<div class="p1">
<span class="s1">db.customer.find({},{"lastname":1})</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314ccca"), "lastname" : "Dealer" }</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccb"), "lastname" : "RunsFra" }</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccc"), "lastname" : "North" }</span></div>
<div class="p1">
<span style="font-family: &quot;times&quot;; font-size: small;"><br /></span></div>
<h4 style="text-align: left;">
<span style="font-family: &quot;times&quot;; font-size: small;">10. Sort</span></h4>
<div class="p1">
<span style="font-family: &quot;times&quot;; font-size: small;"><br /></span></div>
<div class="p1">
<span style="font-family: &quot;times&quot;; font-size: small;">Ascending by age</span></div>
<div class="p1">
<span style="font-family: &quot;times&quot;; font-size: small;"><br /></span></div>
<div class="p1">
<span class="s1">db.customer.find({}).sort({"age":1})</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccb"), "firstname" : "Dan", "lastname" : "RunsFra", "age" : 23, "sex" : "M", "status" : "N", "address" : { "city" : "LOS Angeles", "state" : "CA" }, "favorites" : [ "red", "orange" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p4", "price" : 8 } ] }</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccc"), "firstname" : "Mike", "lastname" : "North", "age" : 45, "sex" : "M", "status" : "Y", "address" : { "city" : "burlingame", "state" : "CA" }, "favorites" : [ "red", "blue" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314ccca"), "firstname" : "Dana", "lastname" : "Dealer", "age" : 60, "sex" : "F", "status" : "Y", "address" : { "city" : "Seattle", "state" : "WA" }, "favorites" : [ "yellow", "orange" ], "recent" : [ { "product" : "p5", "price" : 110 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">Descending by age </span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">db.customer.find({}).sort({"age":-1})</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314ccca"), "firstname" : "Dana", "lastname" : "Dealer", "age" : 60, "sex" : "F", "status" : "Y", "address" : { "city" : "Seattle", "state" : "WA" }, "favorites" : [ "yellow", "orange" ], "recent" : [ { "product" : "p5", "price" : 110 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccc"), "firstname" : "Mike", "lastname" : "North", "age" : 45, "sex" : "M", "status" : "Y", "address" : { "city" : "burlingame", "state" : "CA" }, "favorites" : [ "red", "blue" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p2", "price" : 66 } ] }</span></div>
<div class="p1">
<span class="s1">{ "_id" : ObjectId("5a22eae84427950fd314cccb"), "firstname" : "Dan", "lastname" : "RunsFra", "age" : 23, "sex" : "M", "status" : "N", "address" : { "city" : "LOS Angeles", "state" : "CA" }, "favorites" : [ "red", "orange" ], "recent" : [ { "product" : "p1", "price" : 85 }, { "product" : "p4", "price" : 8 } ] }</span></div>
<br />
<br />
<br />
<h4 style="text-align: left;">
Appendix 1 : Sample data</h4>
<br />
{<br />
<span style="white-space: pre;"> </span>"firstname": "Mike",<br />
<span style="white-space: pre;"> </span>"lastname": "North",<br />
<span style="white-space: pre;"> </span>"age": 45,<br />
<span style="white-space: pre;"> </span>"sex": "M",<br />
<span style="white-space: pre;"> </span>"status": "Y",<br />
<span style="white-space: pre;"> </span>"address": {<br />
<span style="white-space: pre;"> </span>"city": "burlingame",<br />
<span style="white-space: pre;"> </span>"state": "CA"<br />
<span style="white-space: pre;"> </span>},<br />
<span style="white-space: pre;"> </span>"favorites": ["red", "blue"],<br />
<span style="white-space: pre;"> </span>"recent": [{<br />
<span style="white-space: pre;"> </span>"product": "p1",<br />
<span style="white-space: pre;"> </span>"price": 85<br />
<span style="white-space: pre;"> </span>}, {<br />
<span style="white-space: pre;"> </span>"product": "p2",<br />
<span style="white-space: pre;"> </span>"price": 66<br />
<span style="white-space: pre;"> </span>}]<br />
}<br />
{<br />
<span style="white-space: pre;"> </span>"firstname": "Dan",<br />
<span style="white-space: pre;"> </span>"lastname": "RunsFra",<br />
<span style="white-space: pre;"> </span>"age": 23,<br />
<span style="white-space: pre;"> </span>"sex": "M",<br />
<span style="white-space: pre;"> </span>"status": "N",<br />
<span style="white-space: pre;"> </span>"address": {<br />
<span style="white-space: pre;"> </span>"city": "LOS Angeles",<br />
<span style="white-space: pre;"> </span>"state": "CA"<br />
<span style="white-space: pre;"> </span>},<br />
<span style="white-space: pre;"> </span>"favorites": ["red", "orange"],<br />
<span style="white-space: pre;"> </span>"recent": [{<br />
<span style="white-space: pre;"> </span>"product": "p1",<br />
<span style="white-space: pre;"> </span>"price": 85<br />
<span style="white-space: pre;"> </span>}, {<br />
<span style="white-space: pre;"> </span>"product": "p4",<br />
<span style="white-space: pre;"> </span>"price": 8<br />
<span style="white-space: pre;"> </span>}]<br />
}<br />
{<br />
<span style="white-space: pre;"> </span>"firstname": "Dana",<br />
<span style="white-space: pre;"> </span>"lastname": "Dealer",<br />
<span style="white-space: pre;"> </span>"age": 60,<br />
<span style="white-space: pre;"> </span>"sex": "F",<br />
<span style="white-space: pre;"> </span>"status": "Y",<br />
<span style="white-space: pre;"> </span>"address": {<br />
<span style="white-space: pre;"> </span>"city": "Seattle",<br />
<span style="white-space: pre;"> </span>"state": "WA"<br />
<span style="white-space: pre;"> </span>},<br />
<span style="white-space: pre;"> </span>"favorites": ["yellow", "orange"],<br />
<span style="white-space: pre;"> </span>"recent": [{<br />
<span style="white-space: pre;"> </span>"product": "p5",<br />
<span style="white-space: pre;"> </span>"price": 110<br />
<span style="white-space: pre;"> </span>}, {<br />
<span style="white-space: pre;"> </span>"product": "p2",<br />
<span style="white-space: pre;"> </span>"price": 66<br />
<span style="white-space: pre;"> </span>}]<br />
}<br />
<br />
<br />
Related Blogs :<br />
<br />
<a href="http://khangaonkar.blogspot.com/2015/01/mongodb-tutorial-1-introduction.html">1. Mongo DB Introduction</a><br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-75080711675575750342017-09-30T14:24:00.001-07:002017-09-30T14:24:11.583-07:00Cloud service vs Software as a service<div dir="ltr" style="text-align: left;" trbidi="on">
Every day we use awesome cloud services and applications like Gmail, WhatsApp, and Waze.<br />
<br />
If I write a web application and put it on a server that I rent from a hosting service at $3.99 a month, is it a cloud service, or is it "software as a service"? Or is it just a plain vanilla web application?<br />
<br />
Even if I write a modern application and it is hosted on AWS or Google Cloud, does that automatically make it a "cloud" application?<br />
<br />
Today, no software company says "we offer software as a service". Everyone says they have a cloud service.<br />
<br />
In this blog, I describe the characteristics that make an application a real "cloud" application.<br />
<br />
An example of a real cloud application is Gmail. As long as I have a connection to the internet, I am always able to access my mail. I can access it from any browser, any mail client, any phone, any device, from any place in the world. A billion other people trying to access their email at the same time does not affect me. If I look for an email I received 10 years ago, even though I am communicating with some server on the west coast that may not have that data, Gmail will fetch it from a server that has stored that email. If that server is down, Gmail will get it from another server in the same data center that has a replica of the data. If the entire data center is down, Gmail will get it from another data center in the same region. If the entire region is down, Gmail might get my email from a server in a data center in a completely different region, say Europe.<br />
<br />
The characteristics of a real cloud service are:<br />
<h3 style="text-align: left;">
(1) Location independence</h3>
A user of a cloud service must be able to use the service from any location without any degradation in service.<br />
<br />
If the service has just one server in Mountain View, then when I travel to China, accessing it is going to be horribly slow.<br />
<br />
The location independence comes from geographically distributing servers and replicating data to where it is served.<br />
<br />
<h3 style="text-align: left;">
(2) Scale horizontally</h3>
<br />
As the service becomes popular and the number of users, the number of requests, and the data size go up, there should be no degradation in service. The service should scale by adding more servers.<br />
Load balancers distribute requests across a cluster of servers.<br />
<br />
<h3 style="text-align: left;">
(3) Highly available</h3>
<br />
The service should be available 24x7, with data replication and redundancy built in. A failure of a server, or even a data center, should not lead to a stoppage of service.<br />
<br />
<h3 style="text-align: left;">
(4) Device independence</h3>
<br />
You should be able to access the service from any device that can access the internet: browser, mobile device, IoT device, etc.<br />
<br />
<h3 style="text-align: left;">
(5) Self healing</h3>
<br />
The service infrastructure should monitor itself and detect failures early, so that downtimes are minimal.<br />
<br />
<h3 style="text-align: left;">
(6) Commodity hardware and open source software</h3>
<br />
Given the scale of a real cloud service, even for large companies, it is affordable only with commodity hardware and open source software.<br />
<br />
<h3 style="text-align: left;">
(7) Microservices</h3>
<br />
The software is generally built as micro services that communicate using simple protocols like REST. Monolithic applications are harder to maintain and fix.<br />
<br />
Gmail, the Amazon shopping website, Waze, WhatsApp, etc. are examples of real cloud applications. Under the hood they are powered by real cloud-scale infrastructures.<br />
<br />
The good news for the rest of us building cloud applications is that we do not have to build everything from scratch. There are 2 broad options:<br />
<h3 style="text-align: left;">
<br />Option 1: Rent the physical cloud but build the software and data infrastructure</h3>
First there is the physical cloud: you need machines, physical or virtual, on the internet, distributed across many regions. This part can be rented from cloud vendors like Amazon, Google, Microsoft, and others. You will not want to build a physical cloud unless you are close to being another Google or Amazon.<br />
<br />
Then there is the data and software part: the microservices you build, and the distributed databases and message brokers you use. You manage the data, the replication, and the software scaling. There are many open source frameworks, databases, caches, and message brokers to help.<br />
<br />
A good approach is to build and test the software locally with the characteristics listed above, and then deploy to the physical cloud for production.<br />
<br />
The advantage of this approach is that your service will work on a physical cloud from any vendor. It works even if you decide to run it off the internet, on premise or on an intranet.<br />
<br />
<h3 style="text-align: left;">
Option 2: Rent platform as a service</h3>
If you prefer not to deal with infrastructure, cloud vendors have combined the physical cloud and software into "platform as a service". Google App Engine, AWS Lambda, and RDS are examples of this.<br />
Here the cloud vendor manages both the physical cloud and the software infrastructure, and you write just the application code. The downside of this approach is vendor lock-in. This is appropriate if you do not have the relevant expertise for option 1.<br />
<br />
<h3 style="text-align: left;">
Summary</h3>
<br />
In summary, a "real" cloud application is one that scales horizontally and is highly available, with the same quality of service irrespective of where users are, what devices they use, or how many users are using the service at a time. Simply writing a monolithic application and putting it on Amazon EC2 or Google Compute Engine is not a cloud service.<br />
<br />
However, if you design and build your application with the characteristics listed above, your application is "cloud" ready. You can deploy it to a physical cloud anytime.<br />
<br />
<br />
<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-7342979810249957212017-09-16T15:53:00.000-07:002017-09-16T15:54:41.704-07:00Cache consistency issues in distributed applications<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
Your typical enterprise web application looks like this:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPK2PiL6r3HDIi8wJcbTmmCJpDAamney-hYchI_8oIiTujFs9-G94CHhU_OHeTGyZYZSNPotA3fXH23fArTNxSaE7a_ulZ3XsJgyRZNK_y0CLe3qHDuxa4e3hCOnCxelDSYe_3KiXgJIA/s1600/web+app+%25281%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="464" data-original-width="799" height="231" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPK2PiL6r3HDIi8wJcbTmmCJpDAamney-hYchI_8oIiTujFs9-G94CHhU_OHeTGyZYZSNPotA3fXH23fArTNxSaE7a_ulZ3XsJgyRZNK_y0CLe3qHDuxa4e3hCOnCxelDSYe_3KiXgJIA/s400/web+app+%25281%2529.jpg" width="400" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<br />
Going to the database for every read or write is expensive. Developers improve read performance by storing values in a cache like memcached or Redis.<br />
<br />
A cache is in-memory storage. Performance is greatly improved by reading from memory rather than going to secondary storage like disk, where the database or files reside.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinitj21_vS-AE72Ns758yxLYdU-SwrONlfFlueUGA9N4AlkCMV7ZKvPQ0wxYAQGFICLQsMX666h8kdYB8suC0Ld3nbViUjjnvIMPmjwrgbdNi9eQ7TFjkz3MPQfexxitQQ6GezQNOcOA0/s1600/web+app+%25282%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1016" height="282" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinitj21_vS-AE72Ns758yxLYdU-SwrONlfFlueUGA9N4AlkCMV7ZKvPQ0wxYAQGFICLQsMX666h8kdYB8suC0Ld3nbViUjjnvIMPmjwrgbdNi9eQ7TFjkz3MPQfexxitQQ6GezQNOcOA0/s400/web+app+%25282%2529.jpg" width="400" /></a></div>
<br />
<br />
On reads, the application first checks the cache. If the value is found in the cache, it is read from there. On a cache miss, the app reads from the database and then updates the cache, so that subsequent reads do not go to the database.<br />
<br />
On writes, the application needs to write to the database and update the cache as well, so that subsequent reads get the updated value.<br />
<br />
The approach of using a cache to improve read performance works very well when reads greatly outnumber writes, that is, when most requests (say 80%) read data and only a few update it.<br />
<br />
Frequent writes or updates to data complicate matters. Any writes to the database need to be reflected in the cache.<br />
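The read and write paths described above follow the classic cache-aside pattern. Here is a minimal in-process sketch of that pattern; plain Python dicts stand in for memcached/redis and the database, and the class name is just for illustration:

```python
class CacheAsideStore:
    """Minimal cache-aside sketch: dicts stand in for a real cache and database."""

    def __init__(self):
        self.cache = {}  # stand-in for memcached/redis
        self.db = {}     # stand-in for the database

    def read(self, key):
        # Check the cache first; on a miss, read the DB and populate the cache.
        if key in self.cache:
            return self.cache[key]
        value = self.db.get(key)
        self.cache[key] = value
        return value

    def write(self, key, value):
        # Write to the database, then update the cache so later reads see it.
        self.db[key] = value
        self.cache[key] = value

store = CacheAsideStore()
store.write("x", 1)
assert store.read("x") == 1   # served from the cache
store.db["y"] = 2             # simulate data that is already in the DB
assert store.read("y") == 2   # cache miss, then filled from the DB
assert store.cache["y"] == 2  # subsequent reads will not hit the DB
```

The race conditions discussed next come from interleaving these read and write paths across threads.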
<h2 style="text-align: left;">
<br />1.0 Common mistakes with caches:</h2>
<br />
These problems are mostly caused by multiple client threads (improperly) updating the cache.<br />
<br />
<h3 style="text-align: left;">
1.1. Race condition between reader / writer threads</h3>
Thread 1 wants to read a value.<br />
It checks the cache and does not find it.<br />
It reads the value from the DB.<br />
<br />
Thread 2 updates the value in the DB and updates the cache.<br />
<br />
Thread 1 now sets the outdated value in the cache.<br />
Until there is another update to the same value, everyone reads the outdated value.<br />
<br />
<br />
<h3 style="text-align: left;">
1.2 Race condition between writer threads</h3>
<br />
Minor variation of 1.1<br />
<br />
At time t1, thread1 updates database value x to x1<br />
<br />
At time t2, thread2 updates the database value to x2.<br />
thread2 updates the cache value to x2.<br />
<br />
thread1 then overwrites x2 with x1 in the cache.<br />
<br />
Subsequent readers read an incorrect value x1.<br />
<br />
Solution: lock x in the cache, update the database, update the cache, release the lock on x.<br />
Downside: locking in two places (cache and DB) risks deadlocks.<br />
<br />
<h3 style="text-align: left;">
1.3 Cache not cleaned up on database rollback</h3>
This happens when the cache is updated before the database transaction commits.<br />
<br />
Thread 1 updates a value in the DB.<br />
Before the transaction commits, it updates the cache.<br />
The transaction rolls back.<br />
The cache now has an invalid value.<br />
<br />
<h3 style="text-align: left;">
1.4 Reading before commit</h3>
This is a rare situation that can happen when the cache is updated after the database transaction commits.<br />
<br />
Thread 1 is in the process of updating a value x.<br />
x is uncommitted.<br />
The cache is not updated.<br />
<br />
Other parts of the code in Thread 1 read the value from the cache for other purposes. They are reading an outdated value.<br />
<br />
Solution: a thread that needs to reuse values it has changed should store them locally and use the local copy until the value is committed to both the database and the cache.<br />
<br />
<h2 style="text-align: left;">
2.0 Strategies for eliminating cache race conditions</h2>
<h3 style="text-align: left;">
2.1 Locking the value in cache</h3>
<br />
The strategy is:<br />
<br />
-- lock the value to be updated in the cache<br />
-- update the database<br />
-- update the cache<br />
-- release the cache lock<br />
<br />
While this can work, the disadvantages of this approach are:<br />
<br />
-- locking twice: the database transaction does some locking, and now we have additional locking in the cache, which hurts performance<br />
-- improper locking can lead to deadlocks<br />
<br />
<h3 style="text-align: left;">
2.2 Checking timestamps and/or previous values</h3>
<br />
In the cache, in addition to the value, store the update timestamp from the DB.<br />
Before updating the cache, check the timestamp and only update if you have a later timestamp.<br />
<br />
If you do not want the overhead of storing a timestamp in the cache, another approach is a compare-and-swap on the previous value:<br />
<br />
-- 1. previous value = read the cache value before the DB operation<br />
-- 2. do the database operation<br />
-- 3. new value = get the latest DB value<br />
-- 4. compare and swap: set the new value in the cache if the current cache value == previous value<br />
-- 5. if 4 succeeded, we are done<br />
-- 6. previous value = current cache value; go to 3<br />
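The timestamp check can be sketched as follows. This is an illustrative in-process model: `VersionedCache` and its method names are hypothetical, the DB is assumed to hand each write a monotonically increasing version/timestamp, and a single `threading.Lock` stands in for whatever atomic check-and-set the real cache provides (for example, memcached's CAS operation):

```python
import threading

class VersionedCache:
    """Cache entries carry the DB update version; stale writers lose."""

    def __init__(self):
        self._lock = threading.Lock()  # protects the dict, not the DB
        self._entries = {}             # key -> (value, version)

    def set_if_newer(self, key, value, version):
        # Only install the value if its version is newer than what is cached.
        with self._lock:
            current = self._entries.get(key)
            if current is None or version > current[1]:
                self._entries[key] = (value, version)
                return True
            return False

    def get(self, key):
        entry = self._entries.get(key)
        return entry[0] if entry else None

cache = VersionedCache()
cache.set_if_newer("x", "x1", version=1)
cache.set_if_newer("x", "x2", version=2)
# A late writer still holding the older version 1 cannot clobber version 2:
assert cache.set_if_newer("x", "x1", version=1) is False
assert cache.get("x") == "x2"
```

This resolves the writer/writer race of section 1.2: whichever thread committed last to the DB wins in the cache, regardless of the order in which the cache updates arrive.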
<br />
<h3 style="text-align: left;">
2.3 Update cache using an updater thread</h3>
<br />
Any thread with a DB operation (create, update, or even a read after a cache miss) does not directly update the cache.<br />
<br />
Instead, the request to update the cache is put on a queue. A dedicated thread reads the messages one by one and updates the cache.<br />
<br />
A disadvantage is the time delay before the updated value is available in the cache. Also, in the case of cache misses, you might see multiple messages in the queue for the same cache update.<br />
<br />
This is the preferred solution. If you can tolerate the time delay, it eliminates race conditions and is easy to implement.<br />
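The updater-thread idea can be sketched with Python's standard `queue` and `threading` modules. This is a toy illustration (the `None` sentinel shutdown is just for the demo); because a single thread drains the queue, cache writes are applied strictly in the order they were enqueued:

```python
import queue
import threading

cache = {}
updates = queue.Queue()

def updater():
    # A single thread drains the queue, so cache writes are serialized
    # and the race conditions of section 1 cannot occur.
    while True:
        item = updates.get()
        if item is None:   # sentinel: stop the worker
            break
        key, value = item
        cache[key] = value
        updates.task_done()

worker = threading.Thread(target=updater, daemon=True)
worker.start()

# Application threads enqueue updates instead of touching the cache directly.
updates.put(("x", 1))
updates.put(("x", 2))
updates.put(None)
worker.join()
assert cache["x"] == 2  # the later update wins, in enqueue order
```

In a real system the queue would typically be a durable message broker rather than an in-process `queue.Queue`, which is also where the time delay mentioned above comes from.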
<br />
<h3 style="text-align: left;">
2.4 Versioning</h3>
<br />
We can borrow ideas from MVCC, which is used in databases.<br />
<br />
The locking strategy in 2.1 blocks both readers and writers.<br />
<br />
We can improve on this by not requiring reads to take locks.<br />
<br />
Readers read the latest snapshot value.<br />
Writers lock not the value itself but a copy of the value. We allow only one copy; additional writers are blocked.<br />
When the writer commits its update, the updated copy is copied to the snapshot.<br />
<br />
You can reduce writer locking even further by giving each writer its own copy and assigning a version or transaction id to each copy. When a transaction commits, its copy becomes the snapshot.<br />
<br />
<h2 style="text-align: left;">
3.0 Conclusion</h2>
<br />
In summary, consistency problems can arise when multiple threads update a cache and the backing database. Option 2.3, updating the cache from a single updater thread, can fix these issues; it is a simple solution that works for most scenarios. Option 2.2 is a non-locking technique. Option 2.1, locking, is the least scalable. Option 2.4, versioning, is the most work to implement.<br />
<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-77409164142804499632017-07-04T14:50:00.000-07:002017-07-04T14:50:54.806-07:00Distributed Consensus: Raft <div dir="ltr" style="text-align: left;" trbidi="on">
<br />
In the <a href="https://khangaonkar.blogspot.com/2016/11/distributed-systems-basic-paxos.html">Paxos</a> blog, we discussed the distributed consensus problem and how the Paxos algorithm describes a method for a cluster of servers to achieve consensus on a decision.<br />
<br />
However, Paxos as a protocol is hard to understand and even harder to implement.<br />
<br />
Examples of problems that need consensus are :<br />
<br />
- Servers needing to agree on a value, such as whether a distributed lock is acquired or not.<br />
- Servers needing to agree on the order of events.<br />
- Servers needing to agree on the state of a configuration value.<br />
- Any problem where you need 100% consistency in a distributed, highly available environment.<br />
<br />
The Raft algorithm, described at <a href="https://raft.github.io/">https://raft.github.io/</a>, solves the same problem. It is easier to understand and implement.<br />
<br />
<h2 style="text-align: left;">
Overview</h2>
The key elements of Raft are:<br />
<br />
Leader election<br />
Log replication<br />
Consistency (in the spec this is referred to as safety)<br />
<br />
A server in the cluster is the leader. All other servers are followers.<br />
<br />
The leader is elected by majority vote.<br />
<br />
Every consensus decision begins with the leader sending a value to followers.<br />
<br />
If a majority of followers respond having received the value, the leader commits the value and then tells all servers to commit the value.<br />
<br />
Clients communicate with leader only.<br />
<br />
If a leader crashes, another leader is elected. Messages between the leader and followers enable the followers to determine whether the leader is still alive.<br />
<br />
If a follower does not receive messages from the leader for a certain period, it can try to become a leader by soliciting votes.<br />
<br />
If multiple leaders try to get elected at the same time, it is possible there is no majority. In such situations, the candidates try to get elected again after a random delay.<br />
<br />
In Raft, time is divided into sequential terms. A term is a contiguous period of time, and leadership lasts for a term.<br />
<br />
There are two main RPC messages between leader and followers:<br />
<br />
Request Vote: sent when a candidate solicits votes.<br />
Append Entry: sent by the leader to replicate a log entry.<br />
<br />
<h2 style="text-align: left;">
Scenario 1 : leader election cluster start up</h2>
Let us say there are 5 servers, s1 to s5.<br />
<br />
Every server is a follower.<br />
<br />
No one is getting messages from a leader.<br />
<br />
s1 and s3 decide to become leaders (called candidates) and send messages to the other servers to solicit votes.<br />
Servers always vote for themselves.<br />
s2 and s4 respond to s1.<br />
s1 is elected leader.<br />
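The majority rule behind this election can be sketched as a toy calculation (this is an illustration of the counting rule only, not a Raft implementation):

```python
def wins_election(votes_received, cluster_size):
    """A candidate needs a strict majority of the cluster to become leader."""
    return votes_received > cluster_size // 2

# 5 servers s1..s5: s1 votes for itself and receives votes from s2 and s4.
assert wins_election(3, 5) is True
# s3 only has its own vote, so it loses.
assert wins_election(1, 5) is False
# A 2-2 split of the remaining votes leaves no winner; candidates then
# retry after a random delay, as described above.
assert wins_election(2, 5) is False
```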
<br />
<h2 style="text-align: left;">
Scenario 2 : log replication</h2>
<br />
A client connects to a leader s1 to set x = 3.<br />
<br />
s1 writes x=3 to its log. But its state is unchanged.<br />
<br />
At this point the change is uncommitted.<br />
<br />
s1 sends an appendEntry message to all followers that x=3. Each follower writes the entry to its log.<br />
<br />
Followers respond to s1 that change is written to log.<br />
<br />
When a majority of followers respond, s1 commits the change. It applies the change to its state, so x is now 3.<br />
<br />
Followers are told to commit by piggybacking the last committed entry on the next appendEntry message. Followers commit the entry by applying the change to their state.<br />
<br />
When a change is committed, all previous changes are considered committed.<br />
<br />
The next appendEntry message from the leader to the followers will include the previous committed entry. The servers can then commit any previous entries they have not yet committed.<br />
<br />
The cluster of servers has consensus.<br />
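The commit-on-majority logic of this scenario can be sketched as follows. This is a toy model (a real leader tracks a per-follower match index and replicates asynchronously); the `Leader` class and its method names are hypothetical:

```python
class Leader:
    """Toy log-replication sketch: commit once a majority holds the entry."""

    def __init__(self, cluster_size):
        self.cluster_size = cluster_size
        self.log = []           # both uncommitted and committed entries
        self.commit_index = -1  # index of the last committed entry

    def client_write(self, entry, follower_acks):
        # 1. Append to the leader's own log; the entry is still uncommitted.
        self.log.append(entry)
        index = len(self.log) - 1
        # 2. Count the leader itself plus followers that wrote the entry
        #    to their logs; commit on a strict majority of the cluster.
        if follower_acks + 1 > self.cluster_size // 2:
            self.commit_index = index
        return self.commit_index >= index

leader = Leader(cluster_size=5)
# 3 followers acknowledge "x=3": 4 of 5 servers hold it, so it commits.
assert leader.client_write("x=3", follower_acks=3) is True
# Only 1 follower acknowledges: 2 of 5 is not a majority; stays uncommitted.
assert leader.client_write("y=7", follower_acks=1) is False
```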
<h2 style="text-align: left;">
<br />Scenario 3 : Leader goes down</h2>
<br />
When a leader goes down, one or more of the followers can detect that there are no messages from the leader and decide to become a candidate by soliciting votes.<br />
<br />
But the leader that just went down has the accurate record of committed entries, that some of the followers might not have.<br />
<br />
If a follower that was behind on committed entries became a leader, it could force other servers with later committed entries to overwrite their entries. That should not be allowed. Committed entries should never change.<br />
<br />
Raft prevents this situation by requiring a candidate to send, with the requestVote message, the term and index of the latest entry it accepted from the previous leader. Each follower rejects a requestVote with a term/index lower than its own highest term/index.<br />
<br />
Since a leader only commits entries accepted by a majority of servers, and a majority of servers is required to get elected, it follows that at least half of the remaining servers have accepted the highest committed entry of the leader that went down. Thus a follower that does not have the highest committed entry from the previous leader can never get elected.<br />
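The up-to-date check that a follower applies to a requestVote can be sketched as below (simplified: terms are compared first, then log indexes, as the Raft paper describes; the function name is just for illustration):

```python
def grant_vote(candidate_last_term, candidate_last_index,
               my_last_term, my_last_index):
    """A follower grants its vote only if the candidate's log is at least
    as up-to-date as its own: compare last log terms first, then indexes."""
    if candidate_last_term != my_last_term:
        return candidate_last_term > my_last_term
    return candidate_last_index >= my_last_index

# Candidate's last entry is from an older term: rejected.
assert grant_vote(2, 10, 3, 8) is False
# Same term but a shorter log: rejected.
assert grant_vote(3, 7, 3, 8) is False
# Same term, equal or longer log: the vote is granted.
assert grant_vote(3, 8, 3, 8) is True
```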
<br />
<h2 style="text-align: left;">
Scenario 4 : Catch up for servers that have been down</h2>
<div>
<br /></div>
It is possible for a follower to miss committed entries either because it went down or did not receive the message. <br />
<br />
To ensure followers catch up and stay consistent with leaders, RAFT has a consistency check.<br />
<br />
Every appendEntry message also includes the term and index of the previous entry in the leader's log. If that does not match in the follower, the follower rejects the new message. When an appendEntry is accepted by a follower, it means the leader and the follower have identical logs up to that point.<br />
<br />
<br />
When a follower rejects an appendEntry, the leader retries that follower with an earlier entry. This continues until the follower accepts an entry. Once an entry is accepted, the leader again sends the subsequent entries, which will now be accepted.<br />
<br />
The leader maintains a nextIndex for each follower. This is the index in the log that the leader will send to that follower next.<br />
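The back-off-and-retry consistency check can be sketched as below. This is a toy model in which the leader inspects the follower's whole log at once; real Raft discovers the match point one appendEntry RPC at a time, decrementing nextIndex on each rejection:

```python
def sync_follower(leader_log, follower_log):
    """Toy consistency-check sketch: walk nextIndex backward until the
    follower's log prefix matches the leader's, then overwrite from there."""
    next_index = len(leader_log)
    # Back up while the follower would reject (entry missing or mismatched).
    while next_index > 0 and follower_log[:next_index] != leader_log[:next_index]:
        next_index -= 1
    # The follower accepts at next_index: replicate everything after it.
    return follower_log[:next_index] + leader_log[next_index:]

leader_log = ["t1:x=1", "t1:y=2", "t2:x=5"]
stale = ["t1:x=1"]                # a follower that missed two entries
assert sync_follower(leader_log, stale) == leader_log
diverged = ["t1:x=1", "t1:z=9"]   # a conflicting entry gets overwritten
assert sync_follower(leader_log, diverged) == leader_log
```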
<h2 style="text-align: left;">
Scenario 5 : Cluster membership changes</h2>
<div>
<br /></div>
A cluster membership change refers to a set of servers being added to or removed from the cluster.<br />
Maybe even the current leader is not in the new configuration.<br />
<br />
This needs some attention because otherwise it is possible to end up with 2 leaders and 2 majorities.<br />
<br />
Raft takes a two phase approach<br />
<br />
First switch to joint consensus<br />
-- entries committed to servers in both configuration<br />
-- 2 leaders one from each configuration<br />
-- majority from each configuration needs to approve stuff<br />
<br />
Second switch to new configuration<br />
<br />
The leader receives a request to change the configuration.<br />
The leader uses AppendEntry to send the (old,new) configuration pair to followers.<br />
Once (old,new) is committed, we are in the joint consensus period.<br />
The leader then sends an AppendEntry for the new configuration.<br />
Once that is committed, the new configuration is in effect.<br />
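The commit rule during joint consensus can be sketched as a toy predicate (names are illustrative): an entry counts as committed only when it is acknowledged by majorities of both configurations.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class JointConsensus {
    // During joint consensus an entry is committed only when it has been
    // acknowledged by a majority of the old configuration AND a majority
    // of the new configuration.
    static boolean committed(Set<String> oldCfg, Set<String> newCfg, Set<String> acks) {
        return hasMajority(oldCfg, acks) && hasMajority(newCfg, acks);
    }

    private static boolean hasMajority(Set<String> cfg, Set<String> acks) {
        long count = cfg.stream().filter(acks::contains).count();
        return count > cfg.size() / 2;  // strictly more than half
    }
}
```

With old = {s1,s2,s3} and new = {s3,s4,s5}, acknowledgements from s1 and s2 alone are not enough, because no majority of the new configuration has the entry.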
<br />
<h2 style="text-align: left;">
Summary</h2>
<div>
Consensus is required when you need 100% consistency in a distributed environment that is also highly available. Raft simplifies distributed consensus by breaking the problem into leader election and log replication. Easier to understand means easier to implement and use to solve real world problems.</div>
<div>
<br />
A future blog will go into an implementation.</div>
<div>
<br />
References:</div>
<div>
<br /></div>
<div>
1. In Search of an Understandable Consensus Algorithm by Diego Ongaro and John Ousterhout
Stanford University. https://raft.github.io/raft.pdf</div>
<div>
<br /></div>
<div>
Related Blogs:</div>
<div>
<a href="https://khangaonkar.blogspot.com/2016/11/distributed-systems-basic-paxos.html"><br /></a></div>
<div>
<a href="https://khangaonkar.blogspot.com/2016/11/distributed-systems-basic-paxos.html">1. Distributed Consensus : PAXOS</a></div>
<h3 class="post-title entry-title" itemprop="name" style="background-color: white; color: #222222; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 22px; font-stretch: normal; font-weight: normal; line-height: normal; margin: 0.75em 0px 0px; position: relative;">
</h3>
<div>
<br /></div>
<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-9312967087274101262016-11-12T15:19:00.000-08:002016-11-12T15:19:40.250-08:00Distributed Systems : Basic Paxos<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<b>1.0 Introduction</b><br />
<br />
How do you build reliable, highly available distributed systems that are also consistent? Paxos is a protocol that addresses this problem.<br />
<br />
Paxos was authored by Leslie Lamport in his paper "The Part-Time Parliament" and explained better in his paper "Paxos Made Simple". It has been implemented and used in many modern distributed systems built by Google, Amazon, Microsoft and others.<br />
<br />
Consider a banking system with clients c1,c2 and server s.<br />
<br />
c1 can issue a command to s: add $200 to account A.<br />
c2 can issue a command: add 2% interest to A.<br />
<br />
A single server can easily determine the order c1,c2 and execute the commands.<br />
<br />
But if s crashes, no client can do any work. The traditional way to solve this problem is to fail over to a standby. Another server s', identical to s, stands around doing nothing. When s crashes, the system detects that s is no longer servicing requests and starts sending requests to s'. For reasons that merit a blog of their own, failover using a standby is hard to implement, hard to test and more expensive. That will not be discussed here.<br />
<br />
A second limitation is that when the number of client requests increase, the server may not be able to keep up and respond in a reasonable time.<br />
<br />
The way to solve the scalability and failover issues is to have multiple servers, say s1 and s2, servicing clients, with replication between s1 and s2, so that clients are presented with a single system view. Commands that execute on s1 are also made to execute on s2 and vice versa, so that both s1 and s2 end up in the same state.<br />
<br />
In Figure 1 Both S1 and S2 are active and servicing clients.<br />
<br />
<span style="color: blue;">Figure 1 : Multiple server cluster with replication </span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgd8KK2IV_F388UlxnCGbJ5-K-YLLr_uEmdfxnWOXPG74hAUrvRJ-pIncdqLiitH-1oqcjEW0NQHUsL2jsnfcRUT_R3-sUWC3weqevTxuCkYbKcQKJXIsKy2j7WtiqXDeVLJmhPe_LjAxI/s1600/Paxos+_+problem.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgd8KK2IV_F388UlxnCGbJ5-K-YLLr_uEmdfxnWOXPG74hAUrvRJ-pIncdqLiitH-1oqcjEW0NQHUsL2jsnfcRUT_R3-sUWC3weqevTxuCkYbKcQKJXIsKy2j7WtiqXDeVLJmhPe_LjAxI/s400/Paxos+_+problem.png" width="400" /></a></div>
<br />
<br />
<br />
However a difference in the order of execution can lead to a consistency issue.<br />
<br />
Assume account A has $1000.<br />
Say c1 connects to s1 and invokes the command: add $200 to account A.<br />
Say c2 connects to s2 and invokes the command: add 5% interest to account A.<br />
<br />
If the order of execution is c1,c2 then c1 increases A to 1200 and c2 increases it to 1260. If the order of execution is c2,c1, then c2 increases A to 1050 and c1 increases it to 1250. One server may have the value 1260, while the other has 1250.<br />
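The divergence is easy to verify with a toy computation (integer dollars, with c1 = add $200 and c2 = add 5% interest):

```java
public class OrderMatters {
    static int addDeposit(int balance)  { return balance + 200; }               // command c1
    static int addInterest(int balance) { return balance + balance * 5 / 100; } // command c2

    public static void main(String[] args) {
        int s1 = addInterest(addDeposit(1000));  // s1 executes c1 then c2 -> 1260
        int s2 = addDeposit(addInterest(1000));  // s2 executes c2 then c1 -> 1250
        System.out.println(s1 + " vs " + s2);    // prints "1260 vs 1250": the replicas diverge
    }
}
```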
<br />
To ensure consistent results, both servers need to agree on the order of execution. In other words, there needs to be consensus among the servers. The same holds with more than two servers.<br />
<br />
Paxos is a protocol for achieving consensus among a group of servers.<br />
<br />
<b>2.0 Paxos assumptions</b><br />
<br />
A group of distributed servers communicating with each other.<br />
<br />
Asynchronous communication<br />
<br />
Non-Byzantine: servers may fail by stopping, but do not behave maliciously or arbitrarily.<br />
<br />
Messages can be lost.<br />
<br />
A majority of servers is always available. Your system should have an odd number of servers (3, 5, 7, ...) so that a majority can be established efficiently. Any two majorities must have at least one overlapping member. This observation is critical to the correctness of the protocol.<br />
<br />
Servers can join or leave the system at any time.<br />
<br />
<b>3.0 What Paxos achieves</b><br />
<br />
Reliable system with unreliable components.<br />
<br />
Only one value may be chosen.<br />
<br />
A value is chosen when it is accepted by a majority of the servers.<br />
<br />
Once a value is chosen, it cannot be changed.<br />
<br />
This "one value" concept can be hard to understand for a first time reader. How is choosing just one value useful in solving any real world problem? In reality, the value is likely to be a command that needs to be executed on the server, or a command together with its data. "Value" is a simplification.<br />
<br />
For this to be useful, the servers probably need consensus on not just one value but several values. That can be achieved by a minor extension to basic Paxos and will be discussed in a subsequent blog.<br />
<br />
<b>4.0 Actors in Paxos</b> <br />
<br />
Proposers propose a value to be chosen. Proposers are generally the ones handling client requests.<br />
<br />
Acceptors respond to proposers and can be part of the majority that leads to a value being chosen.<br />
<br />
Learners learn the chosen value and may put it to some use.<br />
<br />
In reality, a single server may function as all three, and this is what we will assume. <br />
<br />
<b>5.0 The protocol </b><br />
<br />
(1) Proposer proposes a value (n,v) where n is a proposal number and v is a value.<br />
<br />
(2) If an acceptor has not received any other proposal, it responds agreeing not to accept<br />
any other proposal with a number less than n.<br />
<br />
If the proposal number is less than one it has already accepted or agreed to accept, it can ignore the proposal.<br />
<br />
If it has already accepted a lower numbered proposal with some value v', it responds with that accepted proposal number and value v'.<br />
<br />
The acceptor does this for every subsequent proposal it receives. It must remember the highest proposal number it has seen. This is important because, as described in step 5, it must never accept a lower numbered proposal.<br />
<br />
(3) The proposer examines the responses to its proposal.<br />
<br />
If a majority of acceptors responded with a value v', it means that v' is either chosen or has a good chance of being chosen. The proposer must adopt v' as its value. If the majority reported no accepted value, the proposer can stay with its original v.<br />
<br />
(4) Proposer sends accept message (n,v')<br />
<br />
(5) When an acceptor receives an accept message (n,v) or (n,v'), it must accept the value if n is still the highest proposal number it has seen.<br />
<br />
Between steps 2 and 5, other proposers could have sent proposals with numbers higher than n. If the acceptor has any such proposal, it cannot accept n.<br />
<br />
In either case, it returns to the proposer the highest proposal number it has.<br />
<br />
(6) The proposer uses the proposal number in the response from step 5 to determine whether its accept message was accepted. If the returned number is the same as n, it knows that n was accepted.<br />
<br />
Otherwise the returned number belongs to a larger proposal that is around. The proposer has to go back to step 1 and start with a new proposal number greater than that one.<br />
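The acceptor's side of steps 2 and 5 can be sketched as follows (a toy, single-threaded illustration; class and method names are assumptions, not from any real Paxos library):

```java
public class PaxosAcceptor {
    // Reply to a prepare: reports any previously accepted proposal.
    public static class Promise {
        public final int acceptedN;
        public final String acceptedV;  // null if nothing accepted yet
        Promise(int acceptedN, String acceptedV) {
            this.acceptedN = acceptedN;
            this.acceptedV = acceptedV;
        }
    }

    private int promisedN = -1;    // highest proposal number promised
    private int acceptedN = -1;    // highest proposal number accepted
    private String acceptedV = null;

    // Step 2: a non-null reply promises to ignore proposals numbered below n,
    // and carries any previously accepted (n, v) so the proposer can adopt it.
    public Promise prepare(int n) {
        if (n <= promisedN) return null;   // ignore lower numbered prepares
        promisedN = n;
        return new Promise(acceptedN, acceptedV);
    }

    // Step 5: accept (n, v) only if n is still the highest proposal number seen.
    public boolean accept(int n, String v) {
        if (n < promisedN) return false;
        promisedN = n;
        acceptedN = n;
        acceptedV = v;
        return true;
    }

    public String acceptedValue() { return acceptedV; }
}
```

A later proposer that calls prepare with a higher number gets back the previously accepted value, which is how step 3's adoption rule keeps a chosen value from ever changing.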
<br />
<b>6.0 Notes</b><br />
<br />
Proposals are ordered as indicated by n. Newer proposals override older ones. If an acceptor has received proposal n, it can ignore all proposals less than n.<br />
<br />
Proposal numbers need to be unique across proposers. <br />
<br />
Multiple rounds of propose and accept may be necessary before a majority for a chosen value is reached.<br />
<br />
Once a value is chosen, future proposers will also end up choosing that value. That is the only way to ensure that one and only one value is chosen.<br />
<br />
Proposals are ordered. Older ones are ignored or rejected <br />
<br />
<b>7.0 Examples </b><br />
<br />
In this section we go through some scenarios of how the protocol works. <br />
<br />
<u>7.1 Case 1 : Value chosen for the first time </u><br />
<br />
<span style="color: blue;">Figure 2 : Value chosen for first time</span><br />
<u><br /></u><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGS2XuCM1slxvTSNcOw0uFirzBz_1oBYLyUrVK6KWsHpSzJpP_1eTmE2qVeBzlmYQOzWmfM3SFmZdTgRHUj9j5fyRjG2XdJXRNmjtQCT07S1V-iHTfA8JevesfDjiEhGFyzrqpHeImj4w/s1600/Paxos+-+value+chosen+for+first+time.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGS2XuCM1slxvTSNcOw0uFirzBz_1oBYLyUrVK6KWsHpSzJpP_1eTmE2qVeBzlmYQOzWmfM3SFmZdTgRHUj9j5fyRjG2XdJXRNmjtQCT07S1V-iHTfA8JevesfDjiEhGFyzrqpHeImj4w/s640/Paxos+-+value+chosen+for+first+time.jpg" width="640" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
This is the most basic case of no value yet chosen and a value proposed for the first time.<br />
<br />
3 servers s1,s2,s3. 2 is a majority.<br />
<br />
1. s1 sends proposal 1 with value X to s2.<br />
2. s2 has no previous proposal or value, so it responds agreeing not to accept any proposal with number less than 1.<br />
3. s1 now has agreement from a majority (itself and s2). So it sends an accept message to s2, which accepts, and X is the chosen value.<br />
<br />
<br />
<br />
<u>7.2 Case 2 : Value proposed after one already chosen</u><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir2tytQqus7QnU-gzzji2G00HXIoIOvw13qhde4_aJlEagV5hbUCGOJjbfvkD4zkrePDDfJnIdpT8TaoRj5yxzlGqDM66kvzm_WsqPPv6X3gmtlhlem1EEiEUPb57tPUOeVhPcwkxM7X0/s1600/Paxos++-+value+proposed+after+one+already+chosen%25281%2529.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir2tytQqus7QnU-gzzji2G00HXIoIOvw13qhde4_aJlEagV5hbUCGOJjbfvkD4zkrePDDfJnIdpT8TaoRj5yxzlGqDM66kvzm_WsqPPv6X3gmtlhlem1EEiEUPb57tPUOeVhPcwkxM7X0/s400/Paxos++-+value+proposed+after+one+already+chosen%25281%2529.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<span style="color: blue;">Figure 3 : Value proposed after one chosen</span> <br />
s1 and s2 have agreed on value X.<br />
<br />
s3 does not know of this. s3 sends proposal 2 with value Y to s1.<br />
<br />
s1 responds that it has accepted proposal 1 with value X.<br />
<br />
s3 has to update its value to X. s3 sends an accept message with value X, which is accepted.<br />
<br />
s1, s2, s3 all have value X.<br />
<br />
<u>7.3 Case 3: No value yet chosen two competing proposals 1 wins</u><br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1gdtewmvLdCmczHDnO4V2mYAjjnpN6V4Ntf__29doyS0Liq_tWpdwcmVLNM9dvMZaxZqRdoSexpSyGDrKW2ACgtpE0DkOmtsm7elZesRgmJm_G1Kax9KuwzArqEF3FmQTBejlJq1J7jI/s1600/Paxos++-+no+value+chosen+2+competing+proposals.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1gdtewmvLdCmczHDnO4V2mYAjjnpN6V4Ntf__29doyS0Liq_tWpdwcmVLNM9dvMZaxZqRdoSexpSyGDrKW2ACgtpE0DkOmtsm7elZesRgmJm_G1Kax9KuwzArqEF3FmQTBejlJq1J7jI/s400/Paxos++-+no+value+chosen+2+competing+proposals.png" width="400" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<span style="color: blue;">Figure 4: Competing proposals</span><br />
<br />
There are 5 servers s1,s2,s3,s4,s5. 3 is a majority.<br />
<br />
s1 sends proposal 1 with value X to s2,s3.<br />
s2 agrees to accept (1,X).<br />
Before s3 receives the accept message for (1,X), s5 sends (2,Y) to s3,s4.<br />
Now s3 cannot accept (1,X) because its highest proposal is 2. <br />
s3,s4 respond with agreement to (2,Y) to s5.<br />
s3 ignores proposal (1,X).<br />
s5 sends accept(2,Y) to s3,s4, which accept.<br />
s1 sends a new proposal (3,X).<br />
s3 responds that it has accepted (2,Y).<br />
s1 must adopt Y and sends accept(3,Y).<br />
s1 and s2 also agree on Y. <br />
<br />
<br />
<u>7.4 Case 4 : Not making progress or liveness</u><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie6zwtHMfDGfonhU6HODc17H2wyb2WEScBm0GO2ojGagRxZYnUBl_y7ogUILmD8FAoCjinfwEt5B65nsUMgXw0H-3NUtKa0AhjyuxH3M24E40d5OH_57sZF5cRVSluDtRt90LB906rL14/s1600/Paxos++-+not+making+progress+liveness.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie6zwtHMfDGfonhU6HODc17H2wyb2WEScBm0GO2ojGagRxZYnUBl_y7ogUILmD8FAoCjinfwEt5B65nsUMgXw0H-3NUtKa0AhjyuxH3M24E40d5OH_57sZF5cRVSluDtRt90LB906rL14/s400/Paxos++-+not+making+progress+liveness.png" width="400" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<span style="color: blue;">Figure 5 : Not making progress </span><br />
<br />
s1 proposes values to s2,s3. s5 proposes values to s3,s4.<br />
<br />
s1 proposes (1,X). s3 agrees not to accept any proposal less than 1.<br />
Before (1,X) can be accepted, s5 proposes (2,Y). s3 now agrees not to accept anything less than 2.<br />
When s1 tries to get (1,X) accepted, it will not be accepted because there is a proposal 2. s1 sends out a proposal (3,X).<br />
s5 will not be able to get (2,Y) accepted because there is a (3,X). It sends out a (4,Y).<br />
s1 will not be able to get (3,X) accepted because there is a (4,Y). It sends out a (5,X).<br />
<br />
This may go on forever. One way to avoid it is for each server to introduce a random delay before issuing the next proposal, thereby giving the others a chance to get their proposals accepted.<br />
<br />
Another solution is to have a leader among the servers and have the leader be the only one that issues proposals.<br />
<br />
<b>8.0 Paxos usage in real world </b><br />
<br />
The basic protocol enables servers to arrive at consensus on one value. How does one value apply to a real world system? To solve real world problems like the one described in the introduction, you run multiple instances of Paxos. For a group of servers to agree on the order of a set of commands, think of a list of commands 0..n. Each Paxos instance picks the command at one index. This is Multi-Paxos and merits a blog of its own.<br />
<br />
Some real world uses of Paxos include consensus on locks, configuration changes and counters. <br />
<br />
<b>9.0 References</b><br />
<br />
"The Part-Time Parliament" by Leslie Lamport<br />
"Paxos Made Simple" by Leslie Lamport<br />
"Time, Clocks, and the Ordering of Events in a Distributed System" by Leslie Lamport<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-25870849613834237162015-10-20T18:11:00.000-07:002015-10-20T18:11:15.948-07:00JAVA 8 : Lambdas tutorial<div dir="ltr" style="text-align: left;" trbidi="on">
Lambdas are the biggest addition to JAVA in not just release 8 but several releases. But when you look at the cryptic lambda syntax, like most regular programmers, you are left wondering why one should write code this way. The purpose of this tutorial is to introduce lambdas, so that you can start using them in real code.<br />
<br />
<u><b>Overview</b></u><br />
<br />
Lambdas make it easy to define blocks of code, store them and pass them around as parameters. They may be stored in variables for later use or passed as parameters to methods that may invoke the code. This style of programming is known as functional programming.<br />
<br />
You might argue that JAVA already supported functional programming using anonymous classes. But that approach is considered verbose.<br />
<br />
<u><b>Example</b></u><br />
<br />
Listing 1 shows the old way to pass executable code to a thread.<br />
<br />
<span style="background-color: white;"> <span style="color: purple;"> public void Listing1_oldWayRunnable() {</span></span><br />
<span style="color: purple;"><span style="background-color: white;"> Runnable r = new Runnable() {</span></span><br />
<span style="color: purple;"><span style="background-color: white;"> @Override</span></span><br />
<span style="color: purple;"><span style="background-color: white;"> public void run() {</span></span><br />
<span style="color: purple;"><span style="background-color: white;"> System.out.println("Hello Anonymous") ;</span></span><br />
<span style="color: purple;"><span style="background-color: white;"> }</span></span><br />
<span style="color: purple;"><span style="background-color: white;"> } ;</span></span><br />
<span style="color: purple;"><span style="background-color: white;"> Thread t = new Thread(r) ;</span></span><br />
<span style="color: purple;"><span style="background-color: white;"> t.start() ;</span></span><br />
<span style="color: purple;"><span style="background-color: white;"> }</span></span><br />
<br />
Listing 2 shows the new way using lambdas.<br />
<br />
<span style="color: purple;">public void Listing2() {</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;"> Thread t = new Thread(()->System.out.println("Hello Lambdas")) ;</span><br />
<span style="color: purple;"> t.start() ;</span><br />
<span style="color: purple;"> }</span><br />
<br />
Listing 2 has no anonymous class. It is much more compact.<br />
<br />
<span style="color: purple;">()->System.out.println("Hello Lambdas")</span> is the lambda.<br />
<br />
<u><b>Syntax</b></u><br />
<br />
The syntax is<br />
<br />
(param)->statement<br />
where param is the parameter passed in. In our example there was no parameter, hence the syntax was ()->statement.<br />
<br />
If you had multiple parameters, the syntax would be<br />
(param1,param2)->statement<br />
<br />
If you had multiple statements, the syntax would be<br />
(param)->{statement1; statement2}<br />
<br />
<u><b>Storing in a variable</b></u><br />
<br />
The lambda expression can also be stored in a variable and passed around, as shown in Listing 3.<br />
<br />
<span style="color: purple;"> public void Listing3() {</span><br />
<span style="color: purple;"> Runnable r = ()->System.out.println("Hello functional interface") ;</span><br />
<span style="color: purple;"> Thread t = new Thread(r) ;</span><br />
<span style="color: purple;"> t.start() ;</span><br />
<span style="color: purple;"> }</span><br />
<br />
<u><b>Functional interface</b></u> <br />
<br />
JAVA 8 introduces a new term: functional interface. A functional interface is an interface with just one abstract method that needs to be implemented. The lambda expression provides the implementation for that method. For that reason, lambda expressions can be assigned to variables whose type is a functional interface. In the example above, Runnable is the functional interface.<br />
<br />
You can create new functional interfaces. They are ordinary interfaces but with only one abstract method. @FunctionalInterface is an annotation that may be used to document the fact that an interface is functional. <br />
<br />
Listing 5 shows the definition and usage of a functional interface.<br />
<br />
<span style="color: purple;">@FunctionalInterface</span><br />
<span style="color: purple;"> public interface Greeting {</span><br />
<span style="color: purple;"> public void sayGreeting() ;</span><br />
<span style="color: purple;"> }</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;"> public static void greet(Greeting s) {</span><br />
<span style="color: purple;"> s.sayGreeting();</span><br />
<span style="color: purple;"> }</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;"> @Test</span><br />
<span style="color: purple;"> public void Listing5() {</span><br />
<span style="color: purple;"> // old way</span><br />
<span style="color: purple;"> greet(new Greeting() {</span><br />
<span style="color: purple;"> @Override</span><br />
<span style="color: purple;"> public void sayGreeting() {</span><br />
<span style="color: purple;"> System.out.println("Hello old way") ;</span><br />
<span style="color: purple;"> }</span><br />
<span style="color: purple;"> }) ;</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;"> // lambda new way</span><br />
<span style="color: purple;"> greet(()->System.out.println("Hello lambdas")) ;</span><br />
<span style="color: purple;"> }</span><br />
<span style="color: purple;">}</span><br />
<br />
Once again you can see that the code with lambdas is much more compact. Within an anonymous class, the "this" variable resolves to the anonymous class. But within a lambda, the this variable resolves to the enclosing class.<br />
<br />
<u><b>java.util.function</b></u><br />
<br />
The java.util.function package in JDK 8 has several ready to use functional interfaces. For example, the Consumer interface takes a single argument and returns no result. It is widely used in the new methods of the java.util collections. Listing 6 shows one such use with the forEach method added to the Iterable interface, which can be used to process all elements in a collection.<br />
<br />
<span style="color: purple;">@Test</span><br />
<span style="color: purple;"> public void Listing6() {</span><br />
<span style="color: purple;"> List&lt;Integer&gt; l = Arrays.asList(1,2,3,4,5,6,7,8,9) ;</span><br />
<span style="color: purple;"> l.forEach((i)->System.out.println(i*i)) ;</span><br />
<span style="color: purple;"> }</span><br />
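Beyond Consumer, the java.util.function package has other small interfaces that slot into the same pattern, for example Function (one argument, one result) and Predicate (one argument, boolean result). A short illustrative sketch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class FunctionalInterfaces {
    public static void main(String[] args) {
        Function<Integer, Integer> square = i -> i * i;  // one argument, one result
        Predicate<Integer> isEven = i -> i % 2 == 0;     // one argument, boolean result

        // Lambdas stored in variables can be handed to library methods.
        List<Integer> squaresOfEvens = Arrays.asList(1, 2, 3, 4, 5, 6).stream()
                .filter(isEven)
                .map(square)
                .collect(Collectors.toList());
        System.out.println(squaresOfEvens); // [4, 16, 36]
    }
}
```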
In summary, Java 8 lambdas introduce a new programming style to JAVA. They bring JAVA up to par with other languages that support functional programming. It is not just about programming style: lambdas also provide some performance advantages. I will examine them more in future blogs.<br />
<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-20992709092225014032015-07-20T20:19:00.000-07:002015-07-20T20:19:44.945-07:00ConcurrentHashMap vs ConcurrentSkipListMap<div dir="ltr" style="text-align: left;" trbidi="on">
In the blog <a href="http://khangaonkar.blogspot.com/2010/06/what-java-map-class-should-i-use.html">Map classes</a>, we discussed the map classes in java.util package. In blog <a href="http://khangaonkar.blogspot.com/2012/10/java-synchronized-hashmap-vs.html">ConcurrentHashMap</a>, we ventured into concurrent collections and discussed the features of ConcurrentHashMap, which offers much superior concurrency than a conventional HashMap.<br />
<br />
In this blog we discuss another concurrent map, the ConcurrentSkipListMap and compare it with ConcurrentHashMap. Package java.util has a HashMap and TreeMap. Have you ever wondered why java.util.concurrent has a ConcurrentHashMap, but no ConcurrentTreeMap and why there is a ConcurrentSkipListMap ?<br />
<br />
In the non-concurrent collections, there are HashMap and TreeMap: HashMap for O(1) time complexity and TreeMap for maintaining sorted order at O(log n) complexity. TreeMap is not implemented as an ordinary binary search tree (BST), because an unbalanced BST degrades to O(n) performance for input that is already sorted. TreeMap is implemented as a red-black tree, whose implementation is complex and involves rebalancing the tree (moving nodes around) when nodes are added or removed. The complexity is even greater when you try to make the implementation concurrent (safe for concurrent use). For that reason there is no ConcurrentTreeMap in java.util.concurrent.<br />
<br />
A concurrent implementation of a skip list is simpler. Hence, for a map that is both ordered and concurrent, the implementors chose a skip list.<br />
<br />
<u><b>What is a Skip List ?</b></u><br />
<br />
A skip list is an ordered linked list with O(log n) expected search time. An ordinary linked list has O(n) worst case search time. A skip list provides faster search by maintaining layers of links, allowing the search to skip nodes. As shown in the figure, the lowest layer is an ordinary linked list, but each higher layer skips more nodes.<br />
<br />
level4 10-------------------------------------100-null<br />
level3 10-----------------50-----------------100-null<br />
level2 10-------30------ 50-----70---------100-null<br />
level1 10 -20 -30 -40 -50 -60-70-80-90-100-null<br />
<br />
Say you need to find 80 in the list.<br />
Start at the highest level, level 4. Search linearly for the node that is equal to 80 or whose next node is greater than 80. At level 4, the next node after 10 is 100, which is greater than 80, so at node 10 move down to level 3.<br />
<br />
At level 3, node 10, the next node 50 is less than 80, so move to node 50. The next node 100 is greater than 80, so move down to level 2 at node 50.<br />
<br />
At level 2, node 50, the next node is 70, which is less than 80. Move to node 70. The next node is 100, which is greater than 80. Move down to level 1 at node 70.<br />
<br />
At level 1, the last level, keep going forward from 70 until you find 80 or reach the end of the list.<br />
<br />
Adding more levels leads to faster search.<br />
<br />
A skip list has O(log n) expected performance for search, insert and delete. Depending on the number of levels, it does use some extra space; the expected space overhead is O(n), since each higher layer holds roughly half the nodes of the layer below.<br />
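The search walk above can be sketched in Java. This is a toy, hand-built skip list matching the figure (not the JDK's implementation, and without the insert/delete logic):

```java
public class SkipListSearch {
    static class Node {
        final int val;
        Node next, down;
        Node(int val) { this.val = val; }
    }

    // Build the layered list: levels[0] is the bottom (full) list and each
    // later array is a higher layer containing a subset of the values below it.
    static Node build(int[][] levels) {
        Node belowHead = null;
        Node head = null;
        for (int[] level : levels) {
            Node levelHead = null, tail = null;
            Node below = belowHead;
            for (int v : level) {
                Node n = new Node(v);
                if (tail == null) levelHead = n; else tail.next = n;
                tail = n;
                // Link down to the copy of v in the layer below (null at bottom).
                while (below != null && below.val < v) below = below.next;
                n.down = below;
            }
            belowHead = levelHead;
            head = levelHead;
        }
        return head; // head of the top layer
    }

    // Go right while the next value does not overshoot the target,
    // otherwise drop down a layer; stop at the bottom.
    static boolean contains(Node top, int target) {
        Node n = top;
        while (n != null) {
            while (n.next != null && n.next.val <= target) n = n.next;
            if (n.val == target) return true;
            n = n.down;
        }
        return false;
    }
}
```

Searching for 80 in the figure's list visits 10, 10, 50, 50, 70, 70, 80, exactly the walk described above.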
<br />
In general, you will use a ConcurrentHashMap, if you must have O(1) for both get and put operations, but do not care about the ordering in the collection. You will use a ConcurrentSkipListMap if you need an ordered collection (sorted), but can tolerate O(logn) performance for get and put.<br />
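The trade-off is easy to see in a few lines: ConcurrentSkipListMap keeps keys sorted and supports range views such as headMap, while ConcurrentHashMap makes no ordering promises.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class OrderedVsUnordered {
    public static void main(String[] args) {
        ConcurrentSkipListMap<Integer, String> sorted = new ConcurrentSkipListMap<>();
        ConcurrentHashMap<Integer, String> hashed = new ConcurrentHashMap<>();
        for (int k : new int[]{42, 7, 19, 3}) {
            sorted.put(k, "v" + k);
            hashed.put(k, "v" + k);
        }
        // Skip list map iterates in key order and supports range queries.
        System.out.println(sorted.keySet());             // [3, 7, 19, 42]
        System.out.println(sorted.headMap(20).keySet()); // [3, 7, 19]
        // Hash map iteration order is unspecified.
        System.out.println(hashed.keySet());
    }
}
```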
<br />
Lastly, a skip list is easier to implement than a balanced tree and has become the data structure of choice for an ordered concurrent map. <br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-312262554644193072015-05-19T10:41:00.000-07:002015-05-19T10:41:23.150-07:00Apache Cassandra: Compaction<div dir="ltr" style="text-align: left;" trbidi="on">
In <a href="http://khangaonkar.blogspot.com/2013/09/cassandra-vs-hbase-which-nosql-store-do.html">Cassandra vs HBase</a>, I provided an overview of Cassandra. In <a href="http://khangaonkar.blogspot.com/2013/10/apache-cassandra-data-model.html">Cassandra data model</a>, I covered data modeling in Cassandra. In this blog, I go a little bit into Cassandra internals and discuss compaction, a topic that is a source of grief for many users. Very often you hear that performance degrades during compaction. We will discuss what compaction is, why it is necessary and the different types of compaction.<br />
<br />
Compaction is the process of merging multiple SSTables into larger tables. It removes data that has been marked for deletion and reduces fragmentation. Generally it happens automatically in the background, but it can be started manually as well. <br />
<br />
<u><b>Why is compaction necessary ?</b></u><br />
<br />
Cassandra is optimized for writes. A write is first written in memory to a table called the Memtable. When the Memtable reaches a certain size, it is written in its entirety to disk as a new SSTable. An SSTable has an index consisting of sorted keys, which point to the locations in the file that hold the columns. SSTables are immutable. They are never updated.<br />
<br />
The high throughput for writes is achieved by always appending and never seeking before writing. Updates to existing keys are also written to the current Memtable and eventually written to a new SSTable. There are no disk seeks while writing.<br />
<br />
Obviously, over time there are going to be several SSTables on disk. Not only that, but the latest column values for a single key might be spread over several SSTables. <br />
<br />
<u><b>How does this affect reads ?</b></u> <br />
<br />
Reading from one SSTable is easy. Find the key in the index. Keys are sorted. So a binary search would find the key. After that it is one disk seek to the location of the columns.<br />
<br />
But as pointed out earlier, the updates for a single key might be spread over several SSTables. So for the latest values, Cassandra would need to read several SSTables and merge updates based on timestamps before returning columns.<br />
<br />
Rather than do this for every read, it is worthwhile to merge SSTables in the background, so that when a read request arrives, Cassandra needs to just read from fewer SSTables ( one would be ideal).<br />
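The read-time merge can be sketched with a toy model, where each SSTable's data for one partition key is just a map from column name to a (value, timestamp) cell. All names here are illustrative; real SSTables are indexed on-disk structures:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SSTableMerge {
    // A cell is a column value plus the write timestamp used to resolve conflicts.
    static class Cell {
        final String value;
        final long timestamp;
        Cell(String value, long timestamp) { this.value = value; this.timestamp = timestamp; }
    }

    // For each column, keep the cell with the newest timestamp across all
    // SSTables that contain the partition key. Compaction performs this same
    // merge once, in the background, so later reads touch fewer tables.
    static Map<String, Cell> read(List<Map<String, Cell>> sstables) {
        Map<String, Cell> merged = new HashMap<>();
        for (Map<String, Cell> sstable : sstables) {
            for (Map.Entry<String, Cell> e : sstable.entrySet()) {
                Cell current = merged.get(e.getKey());
                if (current == null || e.getValue().timestamp > current.timestamp) {
                    merged.put(e.getKey(), e.getValue());
                }
            }
        }
        return merged;
    }
}
```

A column updated in a newer SSTable wins on timestamp, while columns written only once are simply carried over.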
<br />
<u><b>Compaction</b></u><br />
<br />
Compaction is the process of merging SSTables in order to<br />
<ul style="text-align: left;">
<li>read columns for partition key from as few SSTables as possible</li>
<li>remove deleted data</li>
<li>reduce fragmentation</li>
</ul>
We did not talk about deletes earlier. When Cassandra receives a request to delete a partition key, it merely marks the key for deletion but does not actually remove the data associated with it. The marker is called a "tombstone" in Cassandra. During compaction, tombstones are removed. <br />
<br />
<u><b>Types of Compaction</b></u><br />
<br />
<u><i>Size tiered compaction:</i></u><br />
<br />
This strategy is based on the number of SSTables and their sizes. A compaction is triggered when the number of tables and their size reach a certain threshold. Tables of similar size are grouped into buckets for compaction. Smaller tables are merged into a larger table.<br />
<br />
Some disadvantages of size tiered compaction are that read performance can vary, because the columns for a partition key can be spread over several SSTables, and that a lot of free space (double the current storage) is required during compaction, since the merge process makes a copy. <br />
<br />
<u><i>Leveled compaction:</i></u><br />
<br />
There are multiple levels of SSTables. SSTables within a level are of the same size and non-overlapping (within each level, a partition key will be in one SSTable only). SSTables in the higher levels are larger. Data from the lower levels is merged into SSTables of the higher levels. <br />
Leveled compaction tries to ensure that most reads are served from one SSTable. The worst-case read performance is bounded by the number of levels. This works well for read heavy workloads, because Cassandra knows which SSTable within each level to check for the key. But more work needs to be done during compaction, especially for write (insert) heavy workloads: due to the extra work of maintaining non-overlapping SSTables per level, there is a lot more IO.<br />
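The reason a key can be found in at most one SSTable per level is that the key ranges within a level do not overlap. A toy lookup (invented names, not Cassandra's code) is then just a binary search over the sorted ranges:

```java
import java.util.*;

// In leveled compaction each level holds SSTables with non-overlapping key
// ranges, so at most one SSTable per level can contain a given key. Keeping
// the ranges sorted lets us locate that table with a binary search.
public class LeveledLookup {

    public record Range(String first, String last) {}

    // Returns the index of the SSTable whose [first, last] range covers the key, or -1.
    public static int findSSTable(List<Range> level, String key) {
        int lo = 0, hi = level.size() - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            Range r = level.get(mid);
            if (key.compareTo(r.first()) < 0) hi = mid - 1;
            else if (key.compareTo(r.last()) > 0) lo = mid + 1;
            else return mid;
        }
        return -1; // no SSTable in this level holds the key
    }

    public static void main(String[] args) {
        List<Range> level1 = List.of(new Range("a", "f"), new Range("g", "m"), new Range("n", "z"));
        System.out.println(findSSTable(level1, "h")); // falls in the second table
    }
}
```

This is why the worst-case read touches one SSTable per level rather than every SSTable on disk.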
<br />
<u><i>Date tiered compaction:</i></u><br />
<br />
Data written within a certain window of time, say 1 hour, is merged into one SSTable. This works well when you are writing time series data and querying based on timestamp. A query such as "give me the columns written in the last hour" can be serviced by reading just one SSTable. This also makes it easy to remove tombstones that are based on TTL: data with the same TTL is likely to be in the same SSTable, and the entire SSTable can be dropped.<br />
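The whole-SSTable expiry trick can be sketched as follows (a toy model with invented names, not Cassandra's implementation): writes are grouped into hourly windows, one "SSTable" per window, and a window that is entirely older than the TTL is dropped wholesale without inspecting individual rows.

```java
import java.util.*;

// Toy date-tiered layout: writes are grouped into hourly windows (one SSTable
// per window, keyed by epoch hour). When an entire window is older than the
// TTL, the whole window can be dropped without touching individual rows.
public class DateTiered {

    // windows maps window start (epoch hour) -> rows written in that hour.
    public static void expire(TreeMap<Long, List<String>> windows, long nowHour, long ttlHours) {
        // Everything at or before (now - ttl) has fully expired: drop it wholesale.
        windows.headMap(nowHour - ttlHours, true).clear();
    }

    public static void main(String[] args) {
        TreeMap<Long, List<String>> windows = new TreeMap<>();
        windows.put(100L, List.of("old-row"));
        windows.put(110L, List.of("fresh-row"));
        expire(windows, 112L, 5L); // anything at or before hour 107 expires
        System.out.println(windows.keySet()); // only window 110 remains
    }
}
```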
<br />
<u><i>Manual compaction: </i></u><br />
<br />
This is compaction started manually using the nodetool compact command. A keyspace and table are specified. If you do not specify the table, the compaction will run on all tables. This is called a major compaction. It involves a lot of IO and is generally not done.<br />
<br />
In summary, compaction is fundamental to distributed databases like Cassandra. Without the append-only architecture, write throughput would be much lower. And high write throughput is necessary for highly scalable systems; stated another way, writes are much harder to scale and are generally the bottleneck. Reads can be scaled easily by de-normalization, replication and caching. <br />
<br />
Even with relational databases, applications do not go to Oracle or MySql for every read. Typically there is a cache like Memcached or Redis that holds frequently read data. For predictable read performance, consider fronting Cassandra with a fast cache. Another strategy is to use different Cassandra clusters for different workloads. Read requests can be sent to clusters optimized for reads.<br />
<br />
Lastly, leveled compaction works better for read intensive loads, whereas date tiered compaction is suited for time series data with a steady write rate. Size tiered compaction is used with write intensive workloads. But there is no silver bullet. You have to try, measure and tune for optimal performance with your workload.<br />
<u><b><br /></b></u>
<br />
<u><b>Related Blogs:</b></u><br />
<br />
<a href="http://khangaonkar.blogspot.com/2013/09/cassandra-vs-hbase-which-nosql-store-do.html">Cassandra vs HBase</a><br />
<a href="http://khangaonkar.blogspot.com/2013/10/apache-cassandra-data-model.html">Cassandra data model</a><br />
<a href="http://khangaonkar.blogspot.com/2014/06/apache-cassandra-things-to-consider.html">Choosing Cassandra </a><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-46927503704611982062015-03-28T09:45:00.000-07:002015-03-28T09:45:13.787-07:00Apache Kafka : New producer API in 0.8.2<div dir="ltr" style="text-align: left;" trbidi="on">
In Kafka version 0.8.2, there is a newer, better and faster version of the Producer API. You might recall from earlier blogs that the Producer is used to send messages to a topic. If you are new to Kafka, please read following blogs first.<br />
<br />
<a href="http://khangaonkar.blogspot.com/2014/04/apache-kafka-introduction-should-i-use.html">Apache Kafka Introduction</a> <br />
<a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">Apache Kafka JAVA tutorial #1 </a><br />
<br />
Some features of the new producer are :<br />
<ul style="text-align: left;">
<li>Asynchronously send messages to a topic.</li>
<li>Send returns immediately. Producer buffers messages and sends them to broker in the background.</li>
<li>Thanks to buffering, many messages can be sent to the broker at one time without waiting for responses.</li>
<li>Send method returns a Future&lt;RecordMetadata&gt;. RecordMetadata has information on the record, like which partition it was stored in and what the offset is.</li>
<li>Caller may optionally provide a callback, which gets called when the message is acknowledged.</li>
<li>The buffer can at times fill up. Buffer size is configurable using the buffer.memory configuration property.</li>
<li>If the buffer fills up, the Producer can either block or throw an exception. The behavior is controlled by the block.on.buffer.full configuration property.</li>
</ul>
In the rest of the blog we will use Producer API to rewrite the Producer we wrote in <a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">tutorial #1</a><br />
<br />
For this tutorial you will need<br />
<br />
(1) <a href="http://kafka.apache.org/">Apache Kafka 0.8.2</a><br />
(2) JDK 7 or higher. An IDE of your choice is optional<br />
(3) Apache Maven<br />
(4) Source code for this sample from https://github.com/mdkhanga/my-blog-code so you can look at working code.<br />
<br />
In this tutorial we take the Producer we wrote in Step 5 Kafka <a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">tutorial 1</a> and rewrite it using the new API. We will send messages to a topic on a Kafka Cluster and consume it with the consumer we wrote in that tutorial.<br />
<br />
<b>Step 1: Set up a Kafka cluster and create a topic</b><br />
<br />
If you are new to Kafka, you can read and follow the instructions in my tutorial 1 to setup a cluster and create a topic.<br />
<br />
<b>Step 2: Get the source code for tutorial 1,2,3 from https://github.com/mdkhanga/my-blog-code</b><br />
<br />
Copy KafkaProducer.java to KafkaProducer082.java. We will port KafkaProducer082 to the new producer API.<br />
<br />
<b>Step 3: Write the new Producer</b><br />
<br />
Update the maven dependencies in pom.xml.<br />
<br />
For the new producer you will need<br />
<br />
<span style="color: blue;"><span style="background-color: white;"><dependency></span></span><br />
<span style="color: blue;"><span style="background-color: white;"> <groupId>org.apache.kafka</groupId></span></span><br />
<span style="color: blue;"><span style="background-color: white;"> <artifactId>kafka-clients</artifactId></span></span><br />
<span style="color: blue;"><span style="background-color: white;"> <version>0.8.2.0</version></span></span><br />
<span style="color: blue;"><span style="background-color: white;"> </dependency></span></span><br />
<br />
The rest of the client code also needs to be updated to 0.8.2.<br />
<br />
<span style="color: blue;"><dependency></span><br />
<span style="color: blue;"> <groupId>org.apache.kafka</groupId></span><br />
<span style="color: blue;"> <artifactId>kafka_2.10</artifactId></span><br />
<span style="color: blue;"> <version>0.8.2.0</version></span><br />
<span style="color: blue;"></dependency></span><br />
<br />
The new producer will not work if the rest of the client uses 0.8.1 or lower versions.<br />
<br />
<b>Step 3.1: Imports</b><br />
<br />
Remove the old imports and add these.<b><br /></b><br />
<br />
<span style="color: purple;"><span style="background-color: white;">import org.apache.kafka.clients.producer.KafkaProducer ;</span></span><br />
<span style="color: purple;"><span style="background-color: white;">import org.apache.kafka.clients.producer.ProducerRecord;</span></span><br />
<br />
Note the packages.<br />
<br />
<b>Step 3.2: Create the producer</b><br />
<br />
<span style="color: purple;">Properties props = new Properties();</span><br />
<span style="color: purple;">props.put("bootstrap.servers", "localhost:9092");</span><br />
<span style="color: purple;">props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");</span><br />
<span style="color: purple;">props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");</span><br />
<span style="color: purple;">props.put("acks", "1");</span><br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">KafkaProducer&lt;String, String&gt; producer = new KafkaProducer&lt;String, String&gt;(props);</span><br />
<br />
As in the past, you provide some configuration like which broker to connect to as Properties. The key and value serializers have to be provided. There are no default values.<br />
<br />
<b>Step 3.3: Send Messages</b><br />
<br />
<span style="color: purple;">String date = "04092014" ;</span><br />
<span style="color: purple;">String topic = "mjtopic" ;</span><br />
<span style="color: purple;"> </span><br />
<span style="color: purple;">for (int i = 1 ; i <= 1000000 ; i++) {</span><br />
<span style="color: purple;"> </span><br />
<span style="color: purple;"> String msg = date + " This is message " + i ;</span><br />
<span style="color: purple;">    ProducerRecord&lt;String, String&gt; data = new ProducerRecord&lt;String, String&gt;(topic, String.valueOf(i), msg);<br /><br />    Future&lt;RecordMetadata&gt; rs = producer.send(data, new Callback() {<br />        @Override<br />        public void onCompletion(RecordMetadata recordMetadata, Exception e) {<br />            System.out.println("Received ack for partition=" + recordMetadata.partition() +<br />                " offset = " + recordMetadata.offset()) ;<br />        }<br />    });<br /><br />    try {<br />        RecordMetadata rm = rs.get();<br />        msg = msg + " partition = " + rm.partition() + " offset =" + rm.offset() ;<br />        System.out.println(msg) ;<br />    } catch(Exception e) {<br />        System.out.println(e) ;<br />    }</span><br />
<span style="color: purple;"> </span><br />
<span style="color: purple;">}</span><br />
<br />
As mentioned earlier, the send is asynchronous and the producer batches messages before sending them to the broker. The send method immediately returns a Future&lt;RecordMetadata&gt; that has the partition and the offset within the partition for the message sent. We provide a callback to the send method, whose onCompletion method is called when an acknowledgement for the message is received.<br />
<br />
<b>Step 4: Start the Consumer</b><br />
<br />
<span style="color: blue;">mvn exec:java -Dexec.mainClass="com.mj.KafkaConsumer" </span><br />
<br />
<b>Step 5: Start the Producer</b><br />
<br />
<span style="color: blue;"><span style="background-color: white;">mvn exec:java -Dexec.mainClass="com.mj.KafkaProducer082" </span></span><br />
<br />
<br />
You should start seeing messages in the consumer.<br />
<br />
In summary, the new producer API is asynchronous, scalable and returns useful metadata on the message sent.<br />
<br />
<br />
Related Blogs:<br />
<a href="http://khangaonkar.blogspot.com/2014/04/apache-kafka-introduction-should-i-use.html">Apache Kafka Introduction</a> <br />
<a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">Apache Kafka JAVA tutorial #1 </a><br />
<a href="http://khangaonkar.blogspot.com/2014/11/apache-kafka-java-tutorial-2.html">Apache Kafka JAVA tutorial #2 </a><br />
<a href="http://khangaonkar.blogspot.com/2015/01/apache-kafka-java-tutorial-3-once-and.html">Apache Kafka JAVA tutorial #3 </a><br />
<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-23108426697820492812015-01-23T17:49:00.000-08:002015-01-23T17:49:02.647-08:00MongoDB tutorial #1 : Introduction<div dir="ltr" style="text-align: left;" trbidi="on">
In the blog <a href="http://khangaonkar.blogspot.com/2011/11/what-is-nosql.html">NoSQL</a>, I provided an introduction to NoSql databases. We have discussed some NoSql databases such as <a href="http://khangaonkar.blogspot.com/2013/03/using-hbase.html">HBase</a>, <a href="http://khangaonkar.blogspot.com/2013/10/apache-cassandra-data-model.html">Cassandra</a> and <a href="http://khangaonkar.blogspot.com/2013/07/redis-fast-key-value-store.html">Redis</a>. In this blog, we discuss MongoDB, a document oriented database, in contrast to the key value stores we discussed earlier. MongoDB is currently one of the more popular NoSql databases, primarily due to its ease of use and simpler programming model. But there have been reports that it lags in scalability or performance compared to other NoSql databases, and it has more moving parts. Still, its ease of use and low learning curve make it an attractive choice in many scenarios.<br />
<br />
The key features of MongoDB are:<br />
<ul style="text-align: left;">
<li>The unit of storage like a record in relational databases or key-value pair in key value stores, is a document or more precisely a JSON document. </li>
<ul>
<li>{ "employee_id":"12345", </li>
<li> "name":"John doe",</li>
<li> "department": "database team",</li>
<li> "title":"architect",</li>
<li> "start_date":"1/1/2015" }</li>
</ul>
<li>Documents are stored in collections.</li>
<li>Collection can be indexed by field. </li>
<li>Indexing support for faster queries.</li>
<li>No schema is required for the collection.</li>
<li>MongoDB is highly available using replication and automatic failover. Write happens to a primary server but can be replicated to multiple replicas. If the primary goes down, one of the replicas takes over as the primary.</li>
<li>Read operations can be scaled by sending the reads to the replicas as well.</li>
<li>Write operations are scaled by sharding.</li>
<li>Sharding is automatic, but it has a couple of moving parts:</li>
<ul>
<li>Sharding is based on a key, which is an indexed field or an indexed compound field.</li>
<li>Sharding can be range based or hash based. With range based, partitioning is based on key ranges, so that values close to each other are stored together. With hash based, the partitioning is based on a hash of the key.</li>
<li>The data set is divided into chunks. Each shard manages some chunks.</li>
<li>Query routers are used to send the request to the right shard.</li>
<li>Config servers hold meta data on which chunks are with which shard.</li>
<li>If a chunk grows too large, it is broken up. If some shards own more chunks than others, the cluster is automatically rebalanced by redistributing the chunks.</li>
</ul>
</ul>
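To illustrate the difference between range based and hash based sharding, here is a toy sketch (invented method names, not MongoDB's actual chunk assignment logic): adjacent keys land on the same shard under range partitioning but tend to be scattered under hash partitioning.

```java
// Toy illustration of the two shard-key strategies. Range based keeps nearby
// keys on the same shard (good for range queries, risks hot spots); hash
// based spreads keys uniformly (good for write distribution).
public class Sharding {

    // Range based: split the key space [0, maxKey) into contiguous chunks.
    public static int rangeShard(int key, int maxKey, int numShards) {
        int chunkSize = (maxKey + numShards - 1) / numShards;
        return key / chunkSize;
    }

    // Hash based: assign by a hash of the key, ignoring key locality.
    public static int hashShard(int key, int numShards) {
        return Math.floorMod(String.valueOf(key).hashCode(), numShards);
    }

    public static void main(String[] args) {
        // Adjacent keys 100 and 101: same shard under range partitioning...
        System.out.println(rangeShard(100, 1000, 4) + " vs " + rangeShard(101, 1000, 4));
        // ...but typically different shards under hash partitioning.
        System.out.println(hashShard(100, 4) + " vs " + hashShard(101, 4));
    }
}
```

This is also why range based sharding makes range queries cheap (one shard) while hash based sharding turns them into scatter-gather queries.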
In the rest of the blog, let us fire up a mongodb instance, create some data and learn how to query it.<br />
<br />
<b>Step 1: Download Mongo</b><br />
<br />
You can download the server from <a href="http://www.mongodb.org/downloads">www.mongodb.org/downloads</a>.<br />
I like to download the generic linux version and untar it.<br />
<br />
Untar/unzip it to a directory of your choice. <br />
<br />
<b>Step 2 : Start the server</b><br />
<br />
Decide on a directory to store the data. Say ~/mongodata. Create the directory.<br />
<br />
Change to the directory where you installed mongo. To start the server, type the command.<br />
<br />
bin/mongod --dbpath ~/mongodata<br />
<br />
<b>Step 3: Start the mongo client</b><br />
<br />
bin/mongo<br />
<br />
<b>Step 4: Create and insert some data into a collection</b><br />
<br />
Create and use a database.<br />
> use testDb ;<br />
<br />
Create a employee document and insert into the employees collection.<br />
> emp1 = { "employee_id":"12345", "name":"John doe", "department": "database team", "title":"architect", "start_date":"1/1/2015" }<br />
> db.employees.insert(emp1)<br />
<br />
Retrieve the document. <br />
> db.employees.find()<br />
{ "_id" : ObjectId("54c2de34426d3d4ea1226498"), "employee_id" : "12345", "name" : "John doe", "department" : "database team", "title" : "architect", "start_date" : "1/1/2015" } <br />
<br />
<b>Step 5 : Insert a few more employees</b><br />
<br />
> emp2 = { "employee_id":"12346", "name":"Ste Curr", "department":
"database team", "title":"developer1", "start_date":"12/1/2013" }<br />
> db.employees.insert(emp2)<br />
<br />
> emp3 = { "employee_id":"12347", "name":"Dre Grin", "department":
"QA team", "title":"developer2", "start_date":"12/1/2011" }<br />
> db.employees.insert(emp3)<br />
<br />
> emp4 = { "employee_id":"12348", "name":"Daev Eel", "department":
"Build team", "title":"developer3", "start_date":"12/1/2010" }<br />
> db.employees.insert(emp4)<br />
<br />
<b>Step 6: Queries</b><br />
<br />
Query by attribute equality<br />
> db.employees.find({"name" : "Ste Curr"} )<br />{ "_id" : ObjectId("54c2e0de426d3d4ea1226499"), "employee_id" : "12346", "name" : "Ste Curr", "department" : "database team", "title" : "developer1", "start_date" : "12/1/2013" }<br />
<br />
Query by attribute with regex condition<br />
> db.employees.find({"department":{$regex : "data*"}})<br />{ "_id" : ObjectId("54c2de34426d3d4ea1226498"), "employee_id" : "12345", "name" : "John doe", "department" : "database team", "title" : "architect", "start_date" : "1/1/2015" }<br />{ "_id" : ObjectId("54c2e0de426d3d4ea1226499"), "employee_id" : "12346", "name" : "Ste Curr", "department" : "database team", "title" : "developer1", "start_date" : "12/1/2013" }<br />
<br />
Query using less than , greater than conditions<br />
> db.employees.find({"employee_id":{$gte : "12347"}})<br />{ "_id" : ObjectId("54c2e382426d3d4ea122649a"), "employee_id" : "12347", "name" : "Dre Grin", "department" : "QA team", "title" : "developer2", "start_date" : "12/1/2011" }<br />{ "_id" : ObjectId("54c2e3af426d3d4ea122649b"), "employee_id" : "12348", "name" : "Daev Eel", "department" : "Build team", "title" : "developer3", "start_date" : "12/1/2010" }<br />
<br />
> db.employees.find({"employee_id":{$lte : "12346"}})<br />{ "_id" : ObjectId("54c2de34426d3d4ea1226498"), "employee_id" : "12345", "name" : "John doe", "department" : "database team", "title" : "architect", "start_date" : "1/1/2015" }<br />{ "_id" : ObjectId("54c2e0de426d3d4ea1226499"), "employee_id" : "12346", "name" : "Ste Curr", "department" : "database team", "title" : "developer1", "start_date" : "12/1/2013" }<br />
<br />
<b>Step 7: Cursors</b><br />
<br />
Iterate through results. <br />
> var techguys = db.employees.find()<br />> while ( techguys.hasNext() ) printjson( techguys.next() )<br />{<br /> "_id" : ObjectId("54c2de34426d3d4ea1226498"),<br /> "employee_id" : "12345",<br /> "name" : "John doe",<br /> "department" : "database team",<br /> "title" : "architect",<br /> "start_date" : "1/1/2015"<br />}<br />
.<br />
.<br />
.<br />
<br />
<b>Step 8: Delete records</b><br />
<br />
Delete one record <br />
> db.employees.remove({"employee_id" : "12345"})<br />WriteResult({ "nRemoved" : 1 })<br />
<br />
Delete all records <br />
> db.employees.remove({})<br />WriteResult({ "nRemoved" : 3 })<br />
<br />
As you can see, MongoDB is pretty easy to use. Download it and give it a try. <br />
<br />
<br />
<br />
<br />
</div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-24957102094370416292015-01-09T17:45:00.001-08:002015-03-28T10:14:08.765-07:00Apache Kafka JAVA tutorial #3: Once and only once delivery<div dir="ltr" style="text-align: left;" trbidi="on">
In <a href="http://khangaonkar.blogspot.com/2014/04/apache-kafka-introduction-should-i-use.html">Apache Kafka introduction</a>, I provided an architectural overview of the internet scale messaging broker. In <a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">JAVA tutorial 1</a>, we learnt how to send and receive messages using the high level consumer API. In <a href="http://khangaonkar.blogspot.com/2014/11/apache-kafka-java-tutorial-2.html">JAVA tutorial 2</a>, we examined partition leaders and metadata using the lower level Simple consumer API.<br />
<br />
A key requirement of many real world messaging applications is that a message should be delivered once and only once to a consumer. If you have used traditional JMS based message brokers, this is generally supported out of the box, with no additional work from the application programmer. But Kafka has a distributed architecture, where the messages to a topic are partitioned for scalability and replicated for fault tolerance, and hence the application programmer has to do a little more to ensure once and only once delivery.<br />
<br />
Some key features of the Simple Consumer API are:<br />
<ul style="text-align: left;">
<li>To fetch a message, you need to know the partition and partition leader.</li>
<li>You can read messages in the partition several times.</li>
<li>You can read from the first message in the partition or from a known offset.</li>
<li>With each read, you are returned an offset where the next read can happen.</li>
<li>You can implement once and only once read, by storing the offsets with the message that was just read, thereby making the read transactional. In the event of a crash, you can recover because you know what message was last read and where the next one should be read.</li>
<li>Not covered in this tutorial, but the API lets you determine how many partitions there are for a topic and who the leader for each partition is. While fetching messages, you connect to the leader. Should a leader go down, you need to fail over by determining who the new leader is, connecting to it and continuing to consume messages.</li>
</ul>
For this tutorial you will need<br />
<br />
(1) <a href="http://kafka.apache.org/">Apache Kafka 0.8.1</a><br />
(2) <a href="http://zookeeper.apache.org/">Apache Zookeeper </a><br />
(3) JDK 7 or higher. An IDE of your choice is optional<br />
(4) Apache Maven<br />
(5) Source code for this sample from <a href="https://github.com/mdkhanga/my-blog-code">https://github.com/mdkhanga/my-blog-code</a> if you want to look at working code<br />
<br />
In this tutorial, we will<br />
(1) start a Kafka broker<br />
(2) create a topic with 1 partition<br />
(3) Send messages to the topic<br />
(4) Write a consumer using Simple API to fetch messages. <br />
(5) Crash the consumer and restart it ( several times). Each time you will see that it reads the next message after the last one that was read.<br />
<br />
Since we are focusing of reading messages from a particular offset in a partition, we will keep other things simple by limiting ourselves to 1 broker and 1 partition.<br />
<br />
<b>Step 1: Start the broker</b><br />
<br />
<br />
<span style="color: purple;">bin/kafka-server-start.sh config/server1.properties</span><br />
<br />
For the purposes of this tutorial, one broker is sufficient as we are reading from just one partition.<br />
<br />
<b>Step 2: Create the topic</b><br />
<br />
<span style="color: purple;">bin/kafka-topics.sh --create --zookeeper localhost:2181 --partitions 1 --topic atopic </span><br />
<br />
Again for the purposes of this tutorial we just need 1 partition.<br />
<br />
<b>Step 3: Send messages to the topic</b><br />
<br />
Run the producer we wrote in <a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">tutorial 1</a> to send say 1000 messages to this topic. <br />
<br />
<b>Step 4: Write a consumer using SimpleConsumer API</b><br />
<br />
The complete code is in the file KafkaOnceAndOnlyOnceRead.java. <br />
<br />
Create a file to store the next read offset. <br />
<br />
<span style="color: blue;">static {<br /> try {<br /> readoffset = new RandomAccessFile("readoffset", "rw");<br /> } catch (Exception e) {<br /> System.out.println(e);<br /> }</span><br />
<span style="color: blue;">} </span><br />
<br />
Create a SimpleConsumer.<br />
<br />
<span style="color: blue;">SimpleConsumer consumer = new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, clientname);</span><br />
<br />
If there is an offset stored in the file, we will read from that offset. Otherwise, we read from the beginning of the partition -- EarliestTime.<br />
<br />
<span style="color: blue;"> long offset_in_partition = 0 ;<br /> try {<br /> offset_in_partition = readoffset.readLong();<br /> } catch(EOFException ef) {<br /> offset_in_partition = getOffset(consumer,topic,partition,kafka.api.OffsetRequest.EarliestTime(),clientname) ;<br /> }</span><br />
<br />
The rest of the code is in a<br />
<br />
<span style="color: blue;">while (true) {</span><br />
<span style="color: blue;"><br /></span>
<span style="color: blue;">}</span><br />
<br />
loop. We will keep reading messages or sleep if there are none. <br />
<br />
Within the loop, we create a request and fetch messages from the offset.<br />
<br />
<span style="color: blue;">FetchRequest req = new FetchRequestBuilder()<br /> .clientId(clientname)<br /> .addFetch(topic, partition, offset_in_partition, 100000).build();<br />FetchResponse fetchResponse = consumer.fetch(req);</span><br />
<br />
Read messages from the response.<br />
<br />
<span style="color: blue;">for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(topic, partition)) {<br /> long currentOffset = messageAndOffset.offset();<br /> if (currentOffset < offset_in_partition) {<br /> continue;<br /> }<br /> offset_in_partition = messageAndOffset.nextOffset();<br /> ByteBuffer payload = messageAndOffset.message().payload();<br /><br /> byte[] bytes = new byte[payload.limit()];<br /> payload.get(bytes);<br /> System.out.println(String.valueOf(messageAndOffset.offset()) + ": " + new String(bytes, "UTF-8"));<br /> readoffset.seek(0);<br /> readoffset.writeLong(offset_in_partition);<br /> numRead++;<br /> messages++ ;<br /><br /> if (messages == 10) {<br /> System.out.println("Pretend a crash happened") ;<br /> System.exit(0);<br /> }<br /> }</span><br />
<br />
For each message that we read, we check that the offset is not less than the one we want to read from. If it is, we ignore the message. For efficiency, Kafka batches messages. So you can get messages already read. For each valid message, we print it and write the next read offset to the file. If the consumer were to crash, when restarted, it would start reading from the last saved offset.<br />
<br />
For demo purposes, the code exits after 10 messages. If you run this program several times, you will see that it starts reading exactly from where it last stopped. You can change that value and experiment.<br />
<br />
<b>Step 5: Run the consumer several times.</b><br />
<br />
<span style="color: purple;">mvn exec:java -Dexec.mainClass="com.mj.KafkaOnceAndOnlyOnceRead"</span><br />
<br />
<span style="color: blue;">210: 04092014 This is message 211<br />211: 04092014 This is message 212<br />212: 04092014 This is message 213<br />213: 04092014 This is message 214<br />214: 04092014 This is message 215<br />215: 04092014 This is message 216<br />216: 04092014 This is message 217<br />217: 04092014 This is message 218<br />218: 04092014 This is message 219<br />219: 04092014 This is message 220</span><br />
<br />
run it again <br />
<br />
<span style="color: purple;">mvn exec:java -Dexec.mainClass="com.mj.KafkaOnceAndOnlyOnceRead"</span><br />
<br />
<span style="color: blue;">220: 04092014 This is message 221<br />221: 04092014 This is message 222<br />222: 04092014 This is message 223<br />223: 04092014 This is message 224<br />224: 04092014 This is message 225<br />225: 04092014 This is message 226<br />226: 04092014 This is message 227<br />227: 04092014 This is message 228<br />228: 04092014 This is message 229<br />229: 04092014 This is message 230</span><br />
<br />
In summary, it is possible to implement once and only once delivery of messages in Kafka by storing the read offset.<br />
<br />
Related Blogs:<br />
<br />
<a href="http://khangaonkar.blogspot.com/2014/04/apache-kafka-introduction-should-i-use.html">Apache Kafka Introduction</a> <br />
<a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">Apache Kafka JAVA tutorial #1 </a><br />
<a href="http://khangaonkar.blogspot.com/2014/11/apache-kafka-java-tutorial-2.html">Apache Kafka JAVA tutorial #2 </a><br />
<a href="http://khangaonkar.blogspot.com/2015/03/apache-kafka-new-producer-api-in-082.html">Apache Kafka 0.8.2 New Producer API </a></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-8324905663941028702014-11-20T18:04:00.000-08:002015-03-28T10:11:06.952-07:00Apache Kafka Java tutorial #2<div dir="ltr" style="text-align: left;" trbidi="on">
In the blog <a href="http://khangaonkar.blogspot.com/2014/04/apache-kafka-introduction-should-i-use.html">Kafka introduction</a>, I provided an overview of the features of Apache Kafka, an internet scale messaging broker. In <a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">Kafka tutorial #1</a>, I provided a simple Java programming example for sending and receiving messages using the high level consumer API. Kafka also provides a Simple consumer API that gives the programmer greater control for reading messages and partitions. Simple is a misnomer; this is a complicated API. SimpleConsumer connects directly to the leader of a partition and is able to fetch messages from an offset. Knowing the leader for a partition is a preliminary step for this. And if the leader goes down, you can recover and connect to the new leader.<br />
<br />
In the tutorial, we will use the "Simple" API to find the lead broker for a topic partition.<br />
<br />
To recap some Kafka concepts<br />
<ul style="text-align: left;">
<li>Kafka is deployed as a cluster of brokers</li>
<li>Messages are sent to and received from topics</li>
<li>Topics are partitioned across brokers</li>
<li>For each partition there is 1 leader broker and 1 or more replicas</li>
<li>Ordering of messages is maintained only within a partition</li>
</ul>
To manage read positions within a topic, it has to be done at the partition level, and you need to know the leader for that partition.<br />
<br />
For this tutorial you will need<br />
<br />
(1) <a href="http://kafka.apache.org/">Apache Kafka 0.8.1</a><br />
(2) <a href="http://zookeeper.apache.org/">Apache Zookeeper </a><br />
(3) JDK 7 or higher. An IDE of your choice is optional<br />
(4) Apache Maven<br />
(5) Source code for this sample from https://github.com/mdkhanga/my-blog-code if you want to look at working code<br />
<br />
In this tutorial, we will<br />
(1) create a 3 node kafka cluster<br />
(2) create a topic with 12 partitions<br />
(3) Write code to determine the leader of the partition<br />
(4) Run the code to determine the leaders of each partition. <br />
(5) Kill one broker and run again to determine the new leaders<br />
<br />
Note that the kafka-topics --describe command lets you do the same. But we are doing it programmatically for the sake of learning, and because it is useful in some use cases. <br />
<br />
<b>Step 1 : Create a cluster</b><br />
<br />
Follow the instructions in <a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">tutorial 1</a> to create a 3 node cluster.<br />
<br />
<b>Step 2 : Create a topic with 12 partitions</b><br />
<br />
<span style="background-color: white;"><span style="color: blue;">/usr/local/kafka/bin$ kafka-topics.sh --create --zookeeper host1:2181 --replication-factor 2 --partitions 12 --topic mjtopic</span></span><br />
<br />
<b><span style="background-color: white;"><span style="color: blue;"><span style="color: black;">Step 3 : Write code to determine the leader for each partition</span></span></span></b><br />
<br />
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;">We use the SimpleConsumer API.</span></span></span><br />
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;">PartitionLeader.java </span></span></span><br />
<br />
<span style="color: blue;"><span style="background-color: white;">import java.util.Collections;<br />import java.util.List;<br />import kafka.javaapi.PartitionMetadata;<br />import kafka.javaapi.TopicMetadata;<br />import kafka.javaapi.TopicMetadataRequest;<br />import kafka.javaapi.consumer.SimpleConsumer; </span></span><br />
<br />
<span style="color: blue;"><span style="background-color: white;">SimpleConsumer consumer = new SimpleConsumer("localhost", 9092, </span></span><br />
<span style="color: blue;"><span style="background-color: white;"> 100000, 64 * 1024, "leaderLookup");</span></span><br />
<span style="color: blue;"><span style="background-color: white;">List&lt;String&gt; topics = Collections.singletonList("mjtopic");</span></span><br />
<span style="color: blue;"><span style="background-color: white;">TopicMetadataRequest req = new TopicMetadataRequest(topics);<br />kafka.javaapi.TopicMetadataResponse resp = consumer.send(req);</span></span><br />
<span style="color: blue;"><span style="background-color: white;">List&lt;TopicMetadata&gt; metaData = resp.topicsMetadata();<br />int[] leaders = new int[12] ;</span></span><br />
<span style="color: blue;"><span style="background-color: white;">for (TopicMetadata item : metaData) {<br /> for (PartitionMetadata part : item.partitionsMetadata()) {<br />  leaders[part.partitionId()] = part.leader().id() ;<br /> }<br />}<br />for (int j = 0 ; j &lt; 12 ; j++) {<br /> System.out.println("Leader for partition " + j + " " + leaders[j]) ;</span></span><br />
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;"><span style="color: blue;">} </span></span></span></span><br />
<br />
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;">SimpleConsumer can connect to any broker that is online. We construct a TopicMetadataRequest with the topic we are interested in and send it to the broker with the consumer.send call. A TopicMetadataResponse is returned, which contains a TopicMetadata for each topic, each holding a set of PartitionMetadata (one for each partition). Each PartitionMetadata has the leader and replicas for that partition.</span></span></span><br />
<br />
<b><span style="background-color: white;"><span style="color: blue;"><span style="color: black;">Step 4 : Run the code </span></span></span></b><br />
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;"><br /></span></span></span>
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;">Leader for partition 0 1<br />Leader for partition 1 2<br />Leader for partition 2 3<br />Leader for partition 3 1<br />Leader for partition 4 2<br />Leader for partition 5 3<br />Leader for partition 6 1<br />Leader for partition 7 2<br />Leader for partition 8 3<br />Leader for partition 9 1<br />Leader for partition 10 2<br />Leader for partition 11 3</span></span></span><br />
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;"><br /></span></span></span>
<b><span style="background-color: white;"><span style="color: blue;"><span style="color: black;">Step 5 : Kill node 3 and run the code again</span></span></span></b><br />
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;"><br /></span></span></span>
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;">Leader for partition 0 1<br />Leader for partition 1 2<br />Leader for partition 2 1<br />Leader for partition 3 1<br />Leader for partition 4 2<br />Leader for partition 5 1<br />Leader for partition 6 1<br />Leader for partition 7 2<br />Leader for partition 8 1<br />Leader for partition 9 1<br />Leader for partition 10 2<br />Leader for partition 11 1</span></span></span><br />
<br />
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;">You can see that broker 1 has assumed leadership of broker 3's partitions. </span></span></span><br />
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;"><br /></span></span></span>
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;">In summary, one of the things you can use the SimpleConsumer API for is examining topic partition metadata. We will use this code in future tutorials to determine the leader of a partition.</span></span></span><br />
<br />
<span style="background-color: white;"><span style="color: blue;"><span style="color: black;">Related blogs:</span></span></span><br />
<br />
<a href="http://khangaonkar.blogspot.com/2014/04/apache-kafka-introduction-should-i-use.html">Apache Kafka Introduction</a> <br />
<a href="http://khangaonkar.blogspot.com/2014/05/apache-kafka-java-tutorial.html">Apache Kafka JAVA tutorial #1 </a><br />
<a href="http://khangaonkar.blogspot.com/2015/01/apache-kafka-java-tutorial-3-once-and.html">Apache Kafka JAVA tutorial #3 </a><br />
<a href="http://khangaonkar.blogspot.com/2015/03/apache-kafka-new-producer-api-in-082.html">Apache Kafka 0.8.2 New Producer API </a><br />
<br />
<br />
<br />
<br />
<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-68218952412593222542014-10-10T18:52:00.000-07:002014-10-10T18:52:47.934-07:00ServletContainerInitializer : Discovering classes in your Web Application<div dir="ltr" style="text-align: left;" trbidi="on">
In my <a href="http://khangaonkar.blogspot.com/2014/09/discovering-third-party-apispi.html">blog </a>on java.util.ServiceLoader, we discussed how it can be used to discover third party implementations of your interfaces. This can be useful if your application is a container that executes code written by developers. In this blog, we discuss dynamic discovery and registration for Servlets.<br />
<br />
All Java Web developers are already familiar with the javax.servlet.ServletContextListener interface. If you want to do initialization when the application starts or clean up when it is destroyed, you implement the contextInitialized and contextDestroyed methods of this interface.<br />
<br />
The Servlet 3.0 specification added a couple of interesting features for dynamic behavior that are particularly useful to developers of libraries or containers.<br />
<br />
(1) <a href="http://docs.oracle.com/javaee/7/api/javax/servlet/ServletContainerInitializer.html">javax.servlet.ServletContainerInitializer</a> is another interface that can notify your code of application start.<br />
<br />
Library or container developers typically provide an implementation of this interface. The implementation should be annotated with the <a href="http://docs.oracle.com/javaee/7/api/javax/servlet/annotation/HandlesTypes.html">HandlesTypes</a> annotation. When the application starts, the Servlet container calls the onStartup method of this interface, passing in as a parameter a set of all classes that implement, extend or are annotated with the type(s) declared in the HandlesTypes annotation.<br />
<br />
(2) The specification also adds a number of methods to dynamically register Servlets, filters and listeners. You will recall that previously, if you needed to add a new Servlet to your application, you needed to modify web.xml.<br />
<br />
Combining (1) and (2), it should be possible to dynamically discover and add Servlets to a web application. This is a powerful feature that allows you to make the web application modular and spread development across teams without build dependencies. Note that this technique can be used to discover any interface, class or annotation. I am killing 2 birds with one stone by using this to discover servlets.<br />
<br />
In the rest of the blog, we will build a simple web app, that illustrates the above concepts. For this tutorial you will need<br />
<br />
<span style="color: blue;"><span style="background-color: white;">(1) JDK 7.x or higher</span></span><br />
<span style="color: blue;"><span style="background-color: white;">(2) Apache Tomcat or any Servlet container</span></span><br />
<span style="color: blue;"><span style="background-color: white;">(3) Apache Maven</span></span><br />
<br />
In this example we will<br />
<br />
(1) Implement a ServletContainerInitializer called WebContainerInitializer and package it in a jar, containerlib.jar.<br />
(2) To make the example interesting, create a new annotation, @MyServlet, which acts like the @WebServlet annotation in the servlet specification. WebContainerInitializer will handle types annotated with @MyServlet.<br />
(3) Write a simple web app that has a Servlet annotated with @MyServlet and has containerlib.jar in its lib directory. No entries in web.xml.<br />
(4) When the app starts, the servlet is discovered and registered, and you can invoke it from a browser.<br />
<br />
Before we proceed any further, you may download the code from my <a href="https://github.com/mdkhanga/my-blog-code">github repository</a>, so you can look at the code as I explain. The code for this example is in the dynamicservlets directory.<br />
<br />
<u><b>Step 0: Get the code</b></u><br />
<br />
<i>git clone https://github.com/mdkhanga/my-blog-code.git</i><br />
<br />
dynamicservlets has 2 subdirectories: containerlib and dynamichello.<br />
<br />
The containerlib project has the MyServlet annotation and the WebContainerInitializer which implements ServletContainerInitializer.<br />
<br />
DynamicHello is a web application that uses containerlib jar.<br />
<br />
<u><b>Step 1: The MyServlet annotation</b></u><br />
MyServlet.java<br />
<span style="color: #134f5c;">@Retention(RetentionPolicy.RUNTIME)</span><br />
<span style="color: #134f5c;">@Target(ElementType.TYPE)</span><br />
<span style="color: #134f5c;">public @interface MyServlet { </span><br />
<span style="color: #134f5c;"> String path() ;</span><br />
<span style="color: #134f5c;">} </span><br />
<br />
The annotation applies to classes and is used as<br />
@MyServlet(path = "/someuri")<br />
<br />
<u><b>Step 2: A Demo servlet</b></u><br />
HelloWorldServlet.java<br />
<span style="color: #134f5c;">@MyServlet(path = "/greeting")<br />public class HelloWorldServlet extends HttpServlet {<br /> </span><br />
<span style="color: #134f5c;"> protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {<br /> PrintWriter p = response.getWriter() ;<br /> p.write("<html><body> hello world </body></html>");<br /> p.close();<br /> }<br /> <br />}</span><br />
<br />
This is a simple hello servlet that we discover and register. Nothing needs to be added to web.xml.<br />
<br />
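Incidentally, you can see what the container's discovery amounts to without any servlet machinery: a RUNTIME-retained annotation is readable via plain reflection. A self-contained sketch (the annotation and class here are local stand-ins mirroring the post's names):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationSketch {

    // Local copy of the MyServlet annotation described in this post.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface MyServlet {
        String path();
    }

    // Stand-in for the annotated servlet class.
    @MyServlet(path = "/greeting")
    public static class HelloWorldServlet { }

    // Essentially what the container does for each discovered class.
    static String pathOf(Class<?> c) {
        MyServlet ann = c.getAnnotation(MyServlet.class);
        return ann == null ? null : ann.path();
    }

    public static void main(String[] args) {
        System.out.println(pathOf(HelloWorldServlet.class)); // prints /greeting
    }
}
```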
<u><b>Step 3: WebContainerInitializer</b></u><br />
WebContainerInitializer.java <br />
This is the implementation of ServletContainerInitializer. <br />
<br />
<span style="color: #134f5c;">@HandlesTypes({MyServlet.class})<br />public class WebContainerInitializer implements ServletContainerInitializer {<br /><br /> public void onStartup(Set&lt;Class&lt;?&gt;&gt; classes, ServletContext ctx)<br />   throws ServletException {<br /><br />  for (Class&lt;?&gt; c : classes) {<br />   MyServlet ann = (MyServlet) c.getAnnotation(MyServlet.class) ;<br />   // register each discovered servlet under its own class name<br />   ServletRegistration.Dynamic d = ctx.addServlet(c.getName(), c) ;<br />   d.addMapping(ann.path()) ;<br />  }<br /> }<br />}</span><br />
<br />
The implementation needs to be in a separate jar and included in the lib directory of the application war. WebContainerInitializer is annotated with @HandlesTypes, which takes MyServlet.class as a parameter. When the application starts, the servlet container finds all classes that are annotated with MyServlet and passes them to the onStartup method. In the onStartup method, we go through each class found by the container, get the value of the path attribute from the annotation and register the servlet.<br />
<br />
To make this work, we need one more thing: in the META-INF/services directory, a file named javax.servlet.ServletContainerInitializer, containing the single line com.mj.WebContainerInitializer. If you are wondering why this is required, please see <a href="http://khangaonkar.blogspot.com/2014/09/discovering-third-party-apispi.html">this blog</a>.<br />
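For reference, the relevant layout inside containerlib.jar looks like this (the services file is plain text):

```
containerlib.jar
└── META-INF/
    └── services/
        └── javax.servlet.ServletContainerInitializer
```

where the javax.servlet.ServletContainerInitializer file contains the single line com.mj.WebContainerInitializer.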
<br />
<u><b>Step 4: Build and run the app</b></u><br />
<br />
To build,<br />
<i>cd containerlib</i><br />
<i>mvn clean install</i><br />
<i>cd dynamichello</i><br />
<i>mvn clean install</i><br />
<br />
This builds dynamichello/target/dynamichello.war that can be deployed to tomcat or any servlet container.<br />
When the application starts, you will see the following messages in the log<br />
<br />
<span style="color: #c27ba0;"><span style="background-color: white;">Initializing container app .....<br />Found ...com.mj.servlets.HelloWorldServlet<br />path = /greeting </span></span><br />
<br />
Point your browser to http://localhost:8080/hello/greeting.<br />
<br />
The servlet will respond with a hello message. <br />
<br />
In summary, this technique can be used to dynamically discover classes during application startup. It is typically used to implement libraries or containers such as a JAX-RS implementation. This allows implementations to be provided by different developers. There is no hard wiring. </div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-30448705627054925952014-09-20T17:00:00.000-07:002014-09-20T17:00:50.101-07:00Discovering third party API/SPI implementations using java.util.ServiceLoader<div dir="ltr" style="text-align: left;" trbidi="on">
One interface, many implementations is a very well known object oriented programming paradigm. If you write the implementations yourself then you know what those implementations are and you can write a factory class or method that creates and returns the right implementation. You might also make this config driven and inject the correct implementation based on configuration.<br />
<br />
What if third parties are providing implementations of your interface? If you know those implementations in advance, then you could do the same as in the case above. But one downside is that a code change is required to add new implementations or to remove them. You could come up with a configuration file where implementations are listed, and your code uses the list to determine what is available. The downside is that the configuration has to be updated by you, and this is a non-standard approach, in that every API developer could come up with his own format for the configuration. Fortunately Java has a solution.<br />
<br />
JDK 6 introduced <a href="http://docs.oracle.com/javase/7/docs/api/java/util/ServiceLoader.html">java.util.ServiceLoader</a>, a class for discovering and loading classes.<br />
<br />
It has a static load method that can be used to create a ServiceLoader that will find and load all of a particular Type.<br />
<br />
<span style="color: blue;">public static &lt;T&gt; ServiceLoader&lt;T&gt; load(Class&lt;T&gt; service)</span><br />
<br />
You would use it as<br />
<span style="color: blue;">ServiceLoader&lt;SortProvider&gt; sl = ServiceLoader.load(SortProvider.class) ; </span><br />
This creates a ServiceLoader that can find and load every SortProvider in the classpath.<br />
<br />
The iterator method returns an Iterator over the implementations found; they are loaded lazily.<br />
<span style="color: blue;">Iterator&lt;SortProvider&gt; its = sl.iterator() ;</span><br />
<br />
You can iterate over what is found and store it in a Map or somewhere else in memory.<br />
<span style="color: blue;">while (its.hasNext()) {</span><br />
<span style="color: blue;"> SortProvider sp = its.next() ;</span><br />
<span style="color: blue;"> log("Found provider " + sp.getProviderName()) ;</span><br />
<span style="color: blue;"> sMap.put(sp.getProviderName(),sp) ;</span><br />
<span style="color: blue;">} </span><br />
<br />
How does ServiceLoader know where to look?<br />
<ul style="text-align: left;">
<li>Implementors package their implementation in a jar</li>
<li>jar should have a META-INF/services directory</li>
<li>services directory should have a file whose name is the fully qualified name of the Type</li>
<li>file has a list of fully qualified name of implementations of type</li>
<li>jar is installed to the classpath</li>
</ul>
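Putting the pieces above together, here is a compilable sketch of the discovery loop, using a local stand-in for the SortProvider SPI. On a classpath with no matching META-INF/services entry, the loader simply finds nothing, which the code handles by returning an empty map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

public class SortServicesSketch {

    // Minimal stand-in for the SortProvider SPI described in this post.
    public interface SortProvider {
        String getProviderName();
    }

    // Discover all SortProvider implementations on the classpath and index them by name.
    static Map<String, SortProvider> discover() {
        Map<String, SortProvider> sMap = new HashMap<>();
        // ServiceLoader scans META-INF/services/<fully.qualified.SortProvider> files
        // on the classpath; each line of such a file names an implementation class.
        for (SortProvider sp : ServiceLoader.load(SortProvider.class)) {
            sMap.put(sp.getProviderName(), sp);
        }
        return sMap;
    }

    public static void main(String[] args) {
        // With no provider jars installed, discovery legitimately returns an empty map.
        System.out.println("providers found: " + discover().size());
    }
}
```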
I have a complete API/SPI example for a Sort interface below that you can download at <a href="https://github.com/mdkhanga/my-blog-code">https://github.com/mdkhanga/my-blog-code</a>. The sample is in the msort directory. You should download the code first, so that you can look at the code while reading the text below. This example illustrates how ServiceLoader is used to discover implementations from third-party service providers. The Sort interface can be used for sorting data. Service providers can provide implementations of various sort algorithms. In the example,<br />
<br />
1. com.mj.msort.Sort is the main Sort API. It has 2 sort methods: one for arrays and one for collections. 2 implementations are provided - bubblesort and mergesort - but anybody can write additional implementations.<br />
<br />
2. com.mj.msort.spi.SortProvider is the SPI. Third-party implementors of Sort must also implement the SortProvider interface. The SPI provides another layer of encapsulation: we don't want to know the implementation details, we just want an instance of the implementation.<br />
<br />
3. SPI providers need to implement Sort and SortProvider. <br />
<br />
4. com.mj.msort.SortServices is a class that can discover and load SPI implementations and make them available to API users. It uses java.util.ServiceLoader to load SortProviders. Hence SortProvider also needs to be packaged as required by java.util.ServiceLoader for it to be discovered.<br />
<br />
This is the class that brings everything together. It uses ServiceLoader to find all implementations of SortProviders and stores them in a Map. It has a getSort method that programmers can call to get a specific implementation or whatever is there. <br />
<br />
5. Sample Usage<br />
<br />
Sort s = SortServices.getSort(...<br />
s.sort(...<br />
<br />
In summary, ServiceLoader is a powerful mechanism to find and load classes of a type. It can be used to build highly extensible and dynamic services. As an additional exercise, you can create your own implementation of SortProvider in your own jar; SortServices will find it as long as it is on the classpath.<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-69756689537065671162014-08-26T18:34:00.000-07:002014-08-26T18:34:03.036-07:00Android programming tutorial<div dir="ltr" style="text-align: left;" trbidi="on">
Android is an open source Linux-based operating system for mobile devices like smart phones, tablets and other devices. The purpose of this blog is to introduce a developer to Android development. There are already many tutorials for Android. So why another? Mobile development is fun and easy. But despite lots of documentation from Google and several blogs, the initial setup for a new developer is not easy. There is substantial trial and error even for the experienced programmer before you get comfortable with the development process.<br />
<br />
In the rest of the blog I will<br />
<ul style="text-align: left;">
<li>Describe some android application concepts</li>
<li>Describe what SDKs and tools you need to download</li>
<li>Develop a very simple android application.</li>
</ul>
This blog will be most useful when used in conjunction with the official <a href="http://developer.android.com/index.html">Android developers documentation</a>. There are new terms like Activity or Layout that I describe only briefly. You should read more about them in the original documentation.<br />
<br />
<u><b>Concepts</b></u><br />
<br />
<ul style="text-align: left;">
<li>Android applications are mostly developed in JAVA.</li>
<li>Android development is like any other event-driven UI development: lay out UI elements on the screen and write code to handle events like the user tapping a button or a menu option.</li>
<li>An activity is a single screen of an application that a user interacts with. </li>
<li>An application may have many activities. Each activity has a layout that describes how the user interface widgets are laid out on the screen.</li>
<li>Activities communicate by sending Intents to each other. For example, if by clicking a button, a particular screen needs to replace the current one, the current activity will send an intent to the one that needs to come to the foreground.</li>
<li>Android SDK supports all the UI elements like text boxes, buttons, lists , menus, action bar etc that are necessary to build a UI.</li>
<li>The layouts determine how the UI elements are positioned on the screen respective to each other. With LinearLayout, the UI elements are positioned one after the other. With RelativeLayout, the UI elements are positioned relative to one another.</li>
<li>Additionally, there are APIs</li>
<ul>
<li>to store data to a file or to a local SQLite relational database.</li>
<li>to phone other devices.</li>
<li>to send text messages to other devices.</li>
<li>to send messages to other applications.</li>
</ul>
<li>Using HTTP, REST or other general purpose client libraries, you can make requests to remote servers. </li>
<li>Most of the time, any JAVA library that you can use in any JAVA application is generally usable in Android. ( of course sometimes there are issues such as supported JDK versions)</li>
<ul>
</ul>
</ul>
<u><b>Required Tools </b></u><br />
<ul style="text-align: left;">
<li>JAVA SDK </li>
<li><a href="http://developer.android.com/sdk/installing/studio.html">Android Studio</a></li>
<ul>
<li>This has the Android SDK and an IntelliJ based IDE.</li>
<li>You could also use the eclipse ADT or just the plain SDK with command line.</li>
<li>For this tutorial I have used Android studio 0.8.2.</li>
</ul>
<li>Optional - A mobile device</li>
<ul>
<li>Android SDK has emulators that you can run the app on. But they are slow.</li>
<li>Running on a real device gives more satisfaction. I used a Nexus 7. </li>
</ul>
<li>Optional - Download the source code for the tutorial below from https://sites.google.com/site/khangaonkar/home/android </li>
<ul>
</ul>
</ul>
In the rest of the blog we will work through a very simple tutorial to develop an android application.<br />
<br />
<u><b>Tutorial </b></u><br />
<br />
<i><b>Step 1: Download the android SDK</b></i><br />
<br />
Download the android SDK from http://developer.android.com/sdk/installing/index.html. The SDK is available in 3 flavors: Eclipse ADT, Android Studio (IntelliJ) and command line. For this tutorial, I used Android Studio because that seems to be the recommended direction from Google. But (except on MacOS) Eclipse works fine as well.<br />
<br />
<i><b>Step 2 : Create a new project</b></i><br />
<br />
Start Android Studio<br />
Select File > New Project<br />
Enter Application name and click next<br />
Accept default for form factors and click next<br />
Select the default blank activity and hit next<br />
Select the defaults for the new activity and hit finish<br />
<br />
You should see a project as shown below<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinsCiwMMODW-ddE5qRPO4WZw8QsA9DpP8lw0JpIEv-LaeGLzpenVYdVkpoGtkm0if_TQjPNg97K83bLJ8dm3JdHi4ZatAwq2cDc684ZHKh-wJLKg1exf0jSPR1FTRSdVkZQml1MAkexr0/s1600/Screen+Shot+2014-08-20+at+5.54.43+PM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinsCiwMMODW-ddE5qRPO4WZw8QsA9DpP8lw0JpIEv-LaeGLzpenVYdVkpoGtkm0if_TQjPNg97K83bLJ8dm3JdHi4ZatAwq2cDc684ZHKh-wJLKg1exf0jSPR1FTRSdVkZQml1MAkexr0/s1600/Screen+Shot+2014-08-20+at+5.54.43+PM.png" height="226" width="400" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<i><b>Step 3: Create an emulator</b></i><br />
An emulator lets you test your application on a variety of devices without actually having the device. Let us create a Nexus 7 emulator.<br />
<br />
Click Tools > Android > AVD Manager<br />
Click create and enter the information as shown below<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjycV-MhsWaogU9wV4jOn3s3GWkDbFnrfc2qR_YjAJp4hCy65hBjnAfTW6ssFvDbM57VXRmftTInRQZiWQ2Rs6jmUq0mc6V8skCRPh6rao_PYI_TpwlgTx3DAgwss0XDtRZtQVAfQhtHN4/s1600/Screen+Shot+2014-08-20+at+6.04.04+PM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjycV-MhsWaogU9wV4jOn3s3GWkDbFnrfc2qR_YjAJp4hCy65hBjnAfTW6ssFvDbM57VXRmftTInRQZiWQ2Rs6jmUq0mc6V8skCRPh6rao_PYI_TpwlgTx3DAgwss0XDtRZtQVAfQhtHN4/s1600/Screen+Shot+2014-08-20+at+6.04.04+PM.png" height="306" width="400" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Click Ok<br />
Select the created device and hit Start<br />
This will take a couple of minutes. The emulators are slow. Eventually you will see the window shown below<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijC3J5wXOSlV2Z0W6i2p3T6Xlnkp0S6SIebSJr7es9qR1QsZK5nnB3QPP8SQhQLcLwStyrbTJ1phY7wrKoaund-1GNS7faVOYEKsAqwzv-_LcIs7wAfHhU2JltC11Q9d6c-0YeYN2iMHA/s1600/Screen+Shot+2014-08-20+at+6.06.43+PM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijC3J5wXOSlV2Z0W6i2p3T6Xlnkp0S6SIebSJr7es9qR1QsZK5nnB3QPP8SQhQLcLwStyrbTJ1phY7wrKoaund-1GNS7faVOYEKsAqwzv-_LcIs7wAfHhU2JltC11Q9d6c-0YeYN2iMHA/s1600/Screen+Shot+2014-08-20+at+6.06.43+PM.png" height="200" width="155" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In the main project, in the lower window, you should see that the emulator is detected.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6DI3S2LOHzv-SUOQyIxdYwOy8_cO1eb6c5sCYlzdE1iPAhY6EAPBdgknEJlszIrmwCTOnB5MT13cGt-lHIF0FiHJ-x2ZEnP2MDhbfTMKp9Lj_zJ0NAkk3y_FsQSzURoC4MC85qxbAqFY/s1600/Screen+Shot+2014-08-20+at+6.09.35+PM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6DI3S2LOHzv-SUOQyIxdYwOy8_cO1eb6c5sCYlzdE1iPAhY6EAPBdgknEJlszIrmwCTOnB5MT13cGt-lHIF0FiHJ-x2ZEnP2MDhbfTMKp9Lj_zJ0NAkk3y_FsQSzURoC4MC85qxbAqFY/s1600/Screen+Shot+2014-08-20+at+6.09.35+PM.png" height="105" width="200" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<span style="color: red;">Caution: Emulators are very slow and take a lot of time to start. The first time I install a new version of Android Studio or Eclipse ADT, they almost never work. It takes a little bit of trial and error to get them going.</span><br />
<br />
<i><b>Step 4 : Run the application</b></i><br />
<br />
Click Run > Run App<br />
When prompted, Select the emulator<br />
The default apps shows hello world on the screen<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjau7w7zxjbRqRBDVni58og6tqcxMSBw3RkPp-2QdRJNZQ6rDP2P_vAkOmOvVprs9zXp51dMhzZQSHgTInjdiBDVyFAh97XXznSyrSVeMVC05-0a6NUIYxOtIdVGHhYyUFgTPb4QquN3v8/s1600/Screen+Shot+2014-08-20+at+6.12.52+PM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjau7w7zxjbRqRBDVni58og6tqcxMSBw3RkPp-2QdRJNZQ6rDP2P_vAkOmOvVprs9zXp51dMhzZQSHgTInjdiBDVyFAh97XXznSyrSVeMVC05-0a6NUIYxOtIdVGHhYyUFgTPb4QquN3v8/s1600/Screen+Shot+2014-08-20+at+6.12.52+PM.png" height="200" width="161" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<i><b>Step 5: Review generated files</b></i><br />
<br />
Under Greeting/app/src/main/java is the class com.mj.greeting.MyActivity. This is the main class that represents the logic around what is shown on the screen.<br />
line 17 is setContentView(R.layout.activity_my);<br />
This line sets the layout that is displayed on the screen. The layout is defined in the XML file Greeting/app/src/main/res/layout/activity_my.xml. The layout manager and any UI elements like edit boxes, buttons, etc., and their properties are defined here. In this case, a RelativeLayout surrounds a TextView whose default value is Hello World.<br />
<i><b><br /></b></i>
<i><b>Step 6: Add some new code</b></i><br />
Let us add an EditText box and a button to the UI. The user can type a message in the EditText and then click the button. On clicking, the message replaces what is displayed in the TextView.<br />
<br />
In the file Greeting/app/src/main/res/layout/activity_my.xml<br />
<br />
<span style="color: #741b47;">add an android:id to the relativelayout</span><br />
&lt;RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"<br /> xmlns:tools="http://schemas.android.com/tools"<br /> <span style="color: blue;">android:id="@+id/main"</span> ... &gt;<br />
<br />
<span style="color: purple;">add an android:id to the textview</span><br />
&lt;TextView<br /> <span style="color: blue;">android:id="@+id/textview"</span><br /> android:text="@string/hello_world" ... /&gt;<br />
<br />
The ids will let us reference these widgets in code.<br />
<br />
<span style="color: purple;">Add an edittext box</span><br />
<span style="color: blue;">&lt;EditText<br /> android:id="@+id/edittext"<br /> android:layout_width="wrap_content"<br /> android:layout_height="wrap_content"<br /> android:layout_below="@id/textview"<br /> android:ems="10"<br /> android:layout_marginTop="10dp"<br /> android:text="greeting"<br /> android:inputType="text" /&gt; </span><br />
<br />
<span style="color: purple;">and a button</span><br />
<span style="color: blue;"><span style="background-color: white;">&lt;Button<br /> android:id="@+id/button"<br /> android:layout_width="wrap_content"<br /> android:layout_height="wrap_content"<br /> android:layout_below="@+id/edittext"<br /> android:layout_marginTop="10dp"<br /> android:text="Update Greeting"<br /> android:onClick="onClick" /&gt;</span></span><br />
<br />
The android:onClick attribute references the method that is called when the user clicks the button, so we need to add an onClick method implementation.<br />
<span style="color: purple;"><br /></span>
<span style="color: purple;">To the class com.mj.greeting.MyActivity add the method</span><br />
<span style="color: blue;">public void onClick(View v) {</span><br />
<span style="color: blue;"> View main = this.findViewById(R.id.main) ; // get a reference to the current view<br /> EditText edit = (EditText) main.findViewById(R.id.edittext) ; // get a reference to the edittext<br /> TextView tv= (TextView) main.findViewById(R.id.textview) ; // get the textview<br /> tv.setText(edit.getText()); // get the text entered in edittext and put it in the textview<br /> }</span><br />
<br />
Run the application<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6IimPCiy_iIyk463yMOvvgVmq9FRtAB6a9w0HODW0VSBp4FgNMbvHnp32E3vvrFPRaBOoUakHAlLO7qRWXx3V1bR-dgrM02J5bI1f7T4IjmIVsdhoMUhXar_bh8sFGx7m54-2ULlg47o/s1600/Screen+Shot+2014-08-23+at+11.45.01+AM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6IimPCiy_iIyk463yMOvvgVmq9FRtAB6a9w0HODW0VSBp4FgNMbvHnp32E3vvrFPRaBOoUakHAlLO7qRWXx3V1bR-dgrM02J5bI1f7T4IjmIVsdhoMUhXar_bh8sFGx7m54-2ULlg47o/s1600/Screen+Shot+2014-08-23+at+11.45.01+AM.png" height="320" width="252" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<i><b>Step 7: Run on a real device</b></i><br />
<br />
So far we have been running the application on an emulator. It is much more fun to run it on a real device. Enable USB debugging on your device; on the Nexus 7, USB debugging is enabled by selecting the option in Settings/Developer Options.<br />
<br />
Connect the device to your development machine with a USB cable and do Run > Run App.<br />
<br />
The application will be installed and run on the device.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyL5dV2w0PmpQmczvcGbzdNk4fCbhQYmrCk0_fvTq8vQGAFH1z3-kQjnEB_Y82KKX4LcUb9A-cOsLQ7OV7tmNmHvuJsrtKowUOFX_7v6B48K4IqimmntOaQJVsHCh4_ro4Ju1Xmmeyg0I/s1600/photo.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyL5dV2w0PmpQmczvcGbzdNk4fCbhQYmrCk0_fvTq8vQGAFH1z3-kQjnEB_Y82KKX4LcUb9A-cOsLQ7OV7tmNmHvuJsrtKowUOFX_7v6B48K4IqimmntOaQJVsHCh4_ro4Ju1Xmmeyg0I/s1600/photo.JPG" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
In summary, getting started with mobile development is simple and fun once you get comfortable with the concepts and tools. <br />
<br />
<br />
<br />
<br />
<br />
<br /></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0tag:blogger.com,1999:blog-5008017311510568944.post-45299847687724908392014-07-26T22:29:00.001-07:002014-07-26T22:29:43.045-07:00Distributed Systems : Consensus Protocols<div dir="ltr" style="text-align: left;" trbidi="on">
Modern software systems scale by partitioning data and distributing it across several machines. Systems are made highly available by replicating data across multiple machines. When multiple systems are involved in managing state, they need to agree when a particular piece of data needs to change.<br />
<br />
You are familiar with the concept of a transaction in a relational database. A transaction is a unit of work (like an insert, an update, or some combination of multiple statements) that as a whole can be committed or aborted. What if the work involves updating multiple databases that are on different machines? To ensure consistent state across the system, all the databases should agree on what to do: whether to commit or abort the state change.<br />
<br />
Modern distributed NoSQL databases have a similar but slightly different problem. If you had a single server and set a value v=8 on it, there would be no doubt what the value of v is. Any client that connects to the server reads the value as 8. What if you had a cluster of 3 servers? Would a client connecting to one of the servers see the value as 8? Consensus is required to ensure all servers agree on what the value of v is.<br />
<br />
Consider systems like Apache Zookeeper or Apache Cassandra. To ensure high availability, clients can connect to any node in the cluster and read or write data. To ensure consistency in the cluster, some consensus is required among the nodes in the cluster when state changes.<br />
<br />
In the rest of this blog we briefly cover some distributed consensus protocols, starting with two phase commit, which users of relational databases are very familiar with. We will then talk about Paxos, ZAB and Raft. Paxos became popular because it was used by Google for its distributed systems. ZAB is used by Zookeeper, which is an important component of the Hadoop ecosystem. These protocols are hard to understand and no attempt is made to go into detail. The purpose is to introduce readers to some of the consensus concepts that are important in distributed systems.<br />
<br />
<u><b>1. Two phase commit</b></u><br />
<br />
Used in databases to ensure all participants in distributed updates either commit or abort the changes.<br />
One node called the co-ordinator originates the transaction. <br />
<br />
1.1 Co-ordinator sends a prepare message to all participants.<br />
1.2 Each participant replies with a yes if it can commit its part of the transaction or No otherwise.<br />
1.3 If the co-ordinator receives a yes from all participants, it sends a commit message to the participants. Otherwise it sends an abort message.<br />
1.4 If the participant receives a commit message, it commits its change. If it receives an abort message, it aborts the change. In both cases, it sends an acknowledgement back to the co-ordinator.<br />
1.5 Transaction is complete when the coordinator receives all acknowledgments.<br />
<br />
One limitation of this protocol is that if the co-ordinator crashes, the participants do not know whether to commit or abort the transaction, as they do not know how the other participants responded.<br />
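Steps 1.1 through 1.5 can be sketched as a toy, in-process simulation in Java. This is only an illustration of the voting logic, not a real implementation (a real one needs logging, timeouts and recovery); the Participant interface and the voter() helper are invented names.

```java
import java.util.List;

// Toy simulation of two-phase commit steps 1.1-1.5.
public class TwoPhaseCommit {

    public interface Participant {
        boolean prepare();   // 1.2: vote yes (true) or no (false)
        void commit();       // 1.4: apply the change
        void abort();        // 1.4: roll the change back
    }

    // The coordinator: returns true if the transaction committed.
    public static boolean run(List<Participant> participants) {
        // 1.1 + 1.2: send prepare to all participants and collect votes
        boolean allYes = participants.stream().allMatch(Participant::prepare);
        // 1.3 + 1.4: commit only if every participant voted yes
        for (Participant p : participants) {
            if (allYes) p.commit(); else p.abort();
        }
        return allYes; // 1.5: all acknowledgments received
    }

    // A trivial participant that always votes the same way.
    public static Participant voter(boolean yes) {
        return new Participant() {
            public boolean prepare() { return yes; }
            public void commit() { }
            public void abort() { }
        };
    }
}
```

Note that a single "no" vote aborts the whole transaction, which is what makes 2PC blocking when the coordinator fails after phase 1.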
<br />
<u><b>2. Three phase commit</b></u><br />
<br />
The protocol attempts to let the participants make progress even if the co-ordinator fails.<br />
<br />
2.1 Co-ordinator sends a prepare message to all participants.<br />
2.2 Each participant replies with a yes if it can commit its part of the transaction or No otherwise.<br />
2.3 If the co-ordinator receives a yes from all participants, it sends a pre-commit message to all participants.<br />
2.4 When the co-ordinator receives an acknowledgment from a majority of participants, it sends a commit message to all participants.<br />
<br />
If the co-ordinator fails, the participants can communicate with each other and determine whether to commit or abort.<br />
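The difference from 2PC can be sketched by adding the pre-commit round to the coordinator from the previous sketch. Again, this is an invented, single-process illustration (the Participant interface and method names are not from any library):

```java
import java.util.List;

// Toy simulation of three-phase commit: a pre-commit round is inserted
// between voting and committing, and a majority of pre-commit
// acknowledgments is enough to proceed (step 2.4).
public class ThreePhaseCommit {

    public interface Participant {
        boolean prepare();    // 2.2: vote yes or no
        boolean preCommit();  // 2.3: acknowledge the pre-commit
        void commit();        // 2.4: apply the change
        void abort();
    }

    // Returns true if the transaction committed.
    public static boolean run(List<Participant> ps) {
        // Phase 1: all participants must vote yes
        if (!ps.stream().allMatch(Participant::prepare)) {
            ps.forEach(Participant::abort);
            return false;
        }
        // Phase 2: pre-commit; a majority of acknowledgments is enough
        long acks = ps.stream().filter(Participant::preCommit).count();
        if (acks <= ps.size() / 2) {
            ps.forEach(Participant::abort);
            return false;
        }
        // Phase 3: commit
        ps.forEach(Participant::commit);
        return true;
    }

    // A participant that votes a fixed way and always acks pre-commit.
    public static Participant voter(boolean yes) {
        return new Participant() {
            public boolean prepare() { return yes; }
            public boolean preCommit() { return true; }
            public void commit() { }
            public void abort() { }
        };
    }
}
```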
<br />
<u><b>3. PAXOS</b></u><br />
<br />
Paxos was first published in the nineties, but it became more popular after Google implemented and used it in its distributed infrastructure. The protocol is notorious for being difficult to understand. Below is a very brief description. See references for more details.<br />
<br />
There are nodes that propose values, called proposers, and nodes that accept values, called acceptors.<br />
<br />
3.1 A proposer with a value to propose submits a proposal (v,n) with value v and sequence number n.<br />
<br />
3.2 When an acceptor receives a proposal (v,n), it compares it with the highest numbered proposal it has accepted. If this proposal has a higher sequence number than any accepted proposal, the acceptor replies agree and sends the value of any previously accepted proposal. If the acceptor has already accepted a higher numbered proposal, it rejects the current proposal.<br />
<br />
3.3 If the proposer receives agree from a majority of acceptors, it can pick one of the values sent by the acceptors. If the acceptors have not sent any value, it can pick its own value. It then sends a commit message with the chosen value to the acceptors. If a majority reject or do not respond, the proposer aborts this proposal and tries another one.<br />
<br />
3.4 When the acceptor receives a commit message, it agrees to commit if the sequence number is the highest it has agreed to or if the value is the same as the last accepted proposal. Otherwise it rejects the commit.<br />
<br />
3.5 If a majority accept the commit, the proposal is complete. Otherwise abort and try again.<br />
<br />
The key takeaway is that majorities are used to accept proposals. If there are multiple proposers competing to propose values, it is possible that no progress is made. The solution is to elect a leader that proposes values. Other players in the system can be learners, who learn about accepted values from either the leader or other participants.<br />
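The acceptor's side of steps 3.2 and 3.4 can be sketched as a small state machine. This is a simplified single-decree sketch, not a complete Paxos implementation; the class, field and return-value conventions (promisedN, "AGREE", "REJECT") are invented for illustration.

```java
// Sketch of an acceptor in single-decree Paxos.
public class PaxosAcceptor {
    private long promisedN = -1;     // highest proposal number promised so far
    private long acceptedN = -1;     // number of the last accepted proposal
    private String acceptedV = null; // value of the last accepted proposal

    // Step 3.2: agree only if n is higher than any promise made so far.
    // If a value was accepted earlier, return it so the proposer can
    // adopt it instead of its own value (step 3.3).
    public synchronized String prepare(long n) {
        if (n <= promisedN) return "REJECT";
        promisedN = n;
        return acceptedV == null ? "AGREE" : "AGREE:" + acceptedV;
    }

    // Step 3.4: accept the commit only if no higher-numbered promise
    // was made in the meantime.
    public synchronized boolean accept(long n, String v) {
        if (n < promisedN) return false;
        promisedN = n;
        acceptedN = n;
        acceptedV = v;
        return true;
    }
}
```

A proposer would call prepare(n) on a majority of acceptors, then accept(n, v) with either its own value or the highest previously accepted one it heard back.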
<u><b><br /></b></u>
<u><b>4. ZAB (Zookeeper Atomic Broadcast)</b></u><br />
<br />
ZAB was developed for use in Apache Zookeeper due to limitations in PAXOS. In Zookeeper, the order in which changes are applied is important. In PAXOS, it is possible that updates get applied by acceptors out of order.<br />
<br />
ZAB is similar to PAXOS in that a leader proposes values and values are accepted based on majority vote. The key difference is that strict order of updates is maintained. If the leader crashes and a new leader is elected, the updates are applied in the original order. <br />
<br />
<u><b>5. RAFT</b></u><br />
<br />
RAFT is another distributed consensus protocol that claims to be simpler than PAXOS or ZAB.<br />
<br />
A node can either be a leader, follower or candidate.<br />
<br />
5.1 By default all nodes are followers. When there is no leader, a node can make itself a candidate for leadership and solicit votes.<br />
<br />
5.2 The candidate that gets majority votes is elected leader.<br />
<br />
5.3 A client submits its updates to the leader. The leader appends the update to its log (uncommitted) and sends the update to the followers.<br />
<br />
5.4 When the leader hears from a majority of followers that they have made the update, the leader commits the change and informs the followers of the commit.<br />
<br />
5.5 Followers commit the update.<br />
<br />
5.6 If a leader terminates for some reason, one of the followers turns itself into a candidate and gets elected as the leader.<br />
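Steps 5.3 through 5.5 can be sketched as a toy, in-process simulation. The class and method names here are invented, and a real Raft implementation also handles terms, log consistency checks, retries and leader election, none of which appear below.

```java
import java.util.ArrayList;
import java.util.List;

// Toy simulation of Raft log replication (steps 5.3-5.5): the leader
// appends an uncommitted entry, replicates it to followers, and commits
// once the leader plus a majority of followers have stored it.
public class RaftReplication {

    public static class Node {
        public final List<String> log = new ArrayList<>();
        public int commitIndex = -1; // index of last committed entry
        // In this toy, followers always store the entry and ack.
        public boolean append(String entry) { log.add(entry); return true; }
    }

    public static class Leader extends Node {
        public final List<Node> followers;
        public Leader(List<Node> followers) { this.followers = followers; }

        // Returns true once the entry is committed.
        public boolean replicate(String entry) {
            log.add(entry);                         // 5.3: append uncommitted
            int acks = 1;                           // the leader counts itself
            for (Node f : followers) {
                if (f.append(entry)) acks++;        // 5.3: send to followers
            }
            int clusterSize = followers.size() + 1;
            if (acks > clusterSize / 2) {           // 5.4: strict majority
                commitIndex = log.size() - 1;       // 5.4: leader commits
                for (Node f : followers) {
                    f.commitIndex = commitIndex;    // 5.5: followers commit
                }
                return true;
            }
            return false;
        }
    }
}
```

The strict-majority check (acks > clusterSize / 2) is the same quorum idea used in PAXOS and ZAB above: any two majorities overlap, so a committed entry survives a leader change.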
<br />
We have given a brief description of some consensus protocols. If you use Hadoop, Cassandra, Kafka or similar distributed systems, you will run into these protocols. For more details, some references are provided below.<br />
<br />
<u><b>References:</b></u><br />
<br />
1. Database Management Systems by RamaKrishnan and Gehrke<br />
2. <span id="goog_1414697186"></span><a href="https://www.blogger.com/">PAXOS made simple<span id="goog_1414697187"></span></a><br />
3 <a href="http://angusmacdonald.me/writing/paxos-by-example/">PAXOS by example</a><br />
4. <a href="http://thesecretlivesofdata.com/raft/">The secret lives of data</a><br />
5. <a href="http://zookeeper.apache.org/">Apache Zookeeper </a><br />
6. <a href="http://the-paper-trail.org/blog/consensus-protocols-paxos/">Paxos paper trail</a><br />
7. <a href="http://raftconsensus.github.io/">Raft Consensus</a></div>
MJhttp://www.blogger.com/profile/13925926828045222969noreply@blogger.com0