For production environments, you can use SolrCloud if you need a cluster of Solr servers featuring fault tolerance and high availability, rather than standalone mode (a single, local Solr setup). SolrCloud relies on Apache ZooKeeper, a centralized service for maintaining configuration information and providing distributed synchronization and group services; Solr uses it for discovery, leader election, and managing configuration files across the cluster. Although Solr comes bundled with ZooKeeper, you should consider yourself discouraged from using this internal ZooKeeper in production: it provides no failover, so if the Solr instance that hosts it shuts down, ZooKeeper is also shut down, and any shards or Solr instances that rely on it will not be able to communicate with it or each other. Instead, it's recommended to use an external ZooKeeper ensemble, which for a fault-tolerant and fully available SolrCloud cluster requires at least three ZooKeeper instances.
The first question to answer is the number of ZooKeeper nodes you will run in your ensemble. The main principle is that ZooKeeper must maintain a majority of non-failing servers, also called a quorum, in order to serve requests. If you have only two ZooKeeper nodes and one goes down, 50% of available servers is not a majority, so ZooKeeper will no longer serve requests. However, if you have three ZooKeeper nodes and one goes down, you have 66% of your servers available and ZooKeeper will continue normally while you repair the one down node. It is therefore recommended to run an odd number of servers: to create a deployment that can tolerate the failure of F machines, you should count on deploying 2xF+1 machines, and a deployment of six machines can still only handle two failures, since three machines is not a majority.
While it may seem that more nodes provide greater fault tolerance and availability, in practice a larger ensemble becomes less efficient because of the amount of inter-node coordination that occurs, so it's not generally recommended to go above 5 nodes. Unless you have a truly massive Solr cluster (on the scale of 1,000s of nodes), try to stay at 3 as a general rule, or maybe 5 if you have a larger cluster; with 5 nodes, you could continue operating with two down nodes if necessary. Setting up an external ensemble requires a little more care than the Getting Started example, but while the process can seem intimidating due to the number of powerful options, a simple ensemble is actually quite straightforward to configure, as described below.
The first step in setting up Apache ZooKeeper is, of course, to download the software. It's available from http://zookeeper.apache.org/releases.html. Solr currently uses Apache ZooKeeper v3.4.11; when using an external ensemble, you will need to keep your local installation up-to-date with the version distributed with Solr.
Creating an instance is a simple matter of extracting the files into a specific target directory. The actual directory itself doesn't matter, as long as you know where it is; it is referred to below as <ZOOKEEPER_HOME>. Installing and unpacking ZooKeeper must be repeated on each server where ZooKeeper will be run.
The next step is to configure your ZooKeeper instance. To do this, create a file named <ZOOKEEPER_HOME>/conf/zoo.cfg. A sample configuration file is included in your ZooKeeper installation as conf/zoo_sample.cfg; you can edit and rename that file instead of creating it new if you prefer. Three parameters get a single instance running: tickTime, the length of a single tick in milliseconds, which is the basic unit of time the other timeouts are expressed in; dataDir, the directory in which ZooKeeper will store its internal data about the cluster, which must be empty before starting ZooKeeper for the first time; and clientPort, the port on which Solr will access ZooKeeper (2181 by default).
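As a concrete starting point, a minimal zoo.cfg for a single instance might look like the following sketch; the /var/lib/zookeeper data directory is the one used in the examples below, but any empty directory you know the location of will do:

    tickTime=2000
    dataDir=/var/lib/zookeeper
    clientPort=2181

With this file in place, you can run the instance with the provided script, as with this command: zkServer.sh start.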
"For a ZooKeeper service to be active, there must be a majority of non-failing machines that can communicate with each other. The actual directory itself doesn’t matter, as long as you know where it is. If you specify the. On the third node, update /conf/zoo.cfg file so it matches the content on nodes 1 and 2 (particularly the server hosts and ports): And create the myid file in the /var/lib/zookeeper directory: Repeat this for servers 4 and 5 if you are creating a 5-node ensemble (a rare case). Pointing Solr at the ZooKeeper instance you’ve created is a simple matter of using the -z parameter when using the bin/solr script. You can tell Solr which IP to use by giving the embedded Jetty container the IP to use (or for only intra-node communication, using SOLR_HOST should be enough). I've choose LoadBalancer services to expose externally solr and zookeeper. To configure your ZooKeeper instance, create a file named /conf/zoo.cfg. If you have 5 nodes, you could continue operating with two down nodes if necessary. Think about Solr Cloud as one logical service hosted on multiple servers. Install Solr Follow the instructions on the Solr website to install Solr and create a scaled environment, using two or more Solr nodes with one or more external Zookeeper services. Note that this configuration, which is required both for ZooKeeper server(s) and for all clients that connect to the server(s), must be the same everywhere it is specified. ©2017 Apache Software Foundation. The first step in setting up Apache ZooKeeper is, of course, to download the software. Any shards or Solr instances that rely on it will not be able to communicate with it or each other. ON all three server, solr server along with external zookeeper is present. I am using Solr as the indexing engine and I want to setup a High Availability Solr cluster with 2 replica nodes. To avoid this, set the autopurge.snapRetainCount and autopurge.purgeInterval parameters to enable an automatic clean up (purge) to occur at regular intervals. The next step is to configure your ZooKeeper instance. For simplicity, this cluster is: Master: SOLR1 Slave: SOLR2. Wait a few seconds and then list out the pods: kubectl get pods NAME READY STATUS RESTARTS AGE solr-0 1/1 Running 0 19m solr-1 1/1 Running 0 16m solr-2 0/1 PodInitializing 0 7s solr-zookeeper-0 1/1 Running 0 19m solr-zookeeper-1 1/1 Running 0 18m solr-zookeeper … jute.maxbuffer must be configured on each external ZooKeeper node. It’s available from http://zookeeper.apache.org/releases.html. For example, if you only have two ZooKeeper nodes and one goes down, 50% of available servers is not a majority, so ZooKeeper will no longer serve requests. Note This walkthrough assumes a simple cluster with two Solr nodes and one Zookeeper ensemble. Creating a chroot is done with a bin/solr command: See the section Create a znode for more examples of this command. REM -a option on start script, those options will be appended as well. Attempting to write or read files larger than this will cause errors. However, if you have three ZooKeeper nodes and one goes down, you have 66% of your servers available and ZooKeeper will continue normally while you repair the one down node. Creating individual solr services (eg. Solr uses Apache ZooKeeper for discovery and leader election. For more information, see the ZooKeeper documentation. However it is not advisable to use it in production. 
ZooKeeper automatically keeps a transaction log and writes to it as changes are made, along with periodic snapshots of its state, and over time these files can fill the disk. To avoid this, set the autopurge.snapRetainCount and autopurge.purgeInterval parameters to enable an automatic clean up (purge) to occur at regular intervals. The autopurge.snapRetainCount parameter is the number of snapshots and corresponding transaction logs to retain when purging; the set number of each is kept when a clean up occurs, and while this parameter can be configured higher than 3, it cannot be set lower than 3. The autopurge.purgeInterval parameter is the interval, in hours, between purges; its default is 0, so it must be set to 1 or higher to enable automatic clean up of snapshots and transaction logs. Setting it as high as 24, for once a day, is acceptable if preferred.
To ease troubleshooting in case of problems with the ensemble later, it's recommended to run ZooKeeper with logging enabled and with proper JVM garbage collection (GC) settings. Create a file named zookeeper-env.sh and put it in the <ZOOKEEPER_HOME>/conf directory (the same place you put zoo.cfg). In it, ZOO_LOG_DIR sets the location where ZooKeeper will print its logs, and ZOO_LOG4J_PROP sets the logging level and log appenders. Related settings live in conf/log4j.properties, including the size at which log files will be rolled over, which by default is 10MB. The zookeeper-env.sh file and any changes to log4j.properties need to exist on each server of the ensemble; like zoo.cfg, we'll repeat this configuration on each node.
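A sketch of such a zookeeper-env.sh follows; the log directory is an assumption, and the ROLLINGFILE appender name matches the stock log4j.properties shipped with ZooKeeper, so adjust both if your setup differs:

    ZOO_LOG_DIR=/var/log/zookeeper
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
    # JVM options such as heap size or GC logging can be passed via JVMFLAGS:
    JVMFLAGS="$JVMFLAGS -Xms512m -Xmx1g"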
One more server-side setting deserves attention. By default, ZooKeeper's file size limit is 1MB; ZooKeeper is designed to hold small files, on the order of kilobytes, and attempting to write or read files larger than the limit will cause errors. If you need to store larger files, ZooKeeper can be configured, via the system property jute.maxbuffer, to increase the limit. Note that this configuration, which is required both for ZooKeeper server(s) and for all clients that connect to the server(s), must be the same everywhere it is specified; in particular, jute.maxbuffer must be configured on each external ZooKeeper node.
On the ZooKeeper side this can be achieved in any of the following ways, though only the first option works on Windows: in <ZOOKEEPER_HOME>/conf/zoo.cfg (e.g., to increase the file size limit to one byte less than 10MB); in <ZOOKEEPER_HOME>/conf/zookeeper-env.sh (e.g., to increase it to 50MiB); or in <ZOOKEEPER_HOME>/bin/zkServer.sh, by adding a JVMFLAGS environment variable assignment near the top of the script (e.g., to increase it to 5MiB).
The bin/solr script invokes Java programs that act as ZooKeeper clients, so the same property must also be set for Solr, for example via the SOLR_OPTS variable in Solr's include file (solr.in.sh or solr.in.cmd): anything you add to SOLR_OPTS will be included in the java start command line as-is, in addition to other options (and if you specify the -a option on the start script, those options will be appended as well). When you use Solr's bundled ZooKeeper server instead of setting up an external ZooKeeper ensemble, this same configuration also configures the ZooKeeper server.
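The following lines sketch each of these options in turn. The hexadecimal value 0x9fffff is one byte less than 10MB, and the decimal byte counts correspond to the 50MiB and 5MiB figures above; the SOLR_OPTS line is an assumption about how you expose the property to Solr's client processes:

    # In conf/zoo.cfg (the only option that works on Windows):
    jute.maxbuffer=0x9fffff

    # In conf/zookeeper-env.sh, to raise the limit to 50MiB:
    JVMFLAGS="$JVMFLAGS -Djute.maxbuffer=52428800"

    # Near the top of bin/zkServer.sh, to raise the limit to 5MiB:
    JVMFLAGS="-Djute.maxbuffer=5242880"

    # In solr.in.sh, so Solr's ZooKeeper clients use the same limit:
    SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=0x9fffff"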
At this point, you are ready to start your ZooKeeper ensemble: run <ZOOKEEPER_HOME>/bin/zkServer.sh start on each node. You should see the ZooKeeper logs in the directory where you defined to store them, although they may not appear immediately after startup. To shut down ZooKeeper, use the same zkServer.sh or zkServer.cmd script on each server with the "stop" command: zkServer.sh stop. Note that the above instructions are for Linux servers only.
ZooKeeper provides a great deal of power through additional configurations, but delving into them is beyond the scope of Solr's documentation; for more information on getting the most from your ZooKeeper installation, check out the ZooKeeper Administrator's Guide.
One convenience is worth knowing about for testing: whereas with Solr you need to create entirely new directories to run multiple instances, all you need for a new ZooKeeper instance, even if it's on the same machine, is a new configuration file. To complete such a single-machine example you'd create two more configuration files, <ZOOKEEPER_HOME>/conf/zoo2.cfg and <ZOOKEEPER_HOME>/conf/zoo3.cfg, each with its own dataDir and clientPort, and then create your myid files in each of the dataDir directories so that each server knows which instance it is. In that layout, you would create the file /var/lib/zookeeper/1/myid with the content "1" (without quotes), and likewise "2" and "3" for the other two instances.
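For instance, a zoo2.cfg for the second test instance might differ from zoo.cfg only in its dataDir and clientPort; the localhost port pairs below are assumptions chosen so the three instances don't collide on one machine:

    tickTime=2000
    initLimit=5
    syncLimit=2
    dataDir=/var/lib/zookeeper/2
    clientPort=2182
    server.1=localhost:2888:3888
    server.2=localhost:2889:3889
    server.3=localhost:2890:3890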
With the ensemble running, the remaining configuration happens on the Solr side. If your ensemble is or will be shared among other systems besides Solr, you should consider defining application-specific znodes, or a hierarchical namespace that will only include Solr's files. Once you create a znode for each application, you add its name, also called a chroot, to the end of your connect string whenever you tell Solr where to access ZooKeeper. Creating a chroot is done with a bin/solr command; see the section Create a znode for more examples of this command. Once the znode is created, it behaves in a similar way to a directory on a filesystem: the data stored by Solr in ZooKeeper is nested beneath the main data directory and won't be mixed with data from another system or process that uses the same ZooKeeper ensemble.
When starting Solr, you must provide an address for ZooKeeper or Solr won't know how to use it. Pointing Solr at the ZooKeeper ensemble you've created is a simple matter of using the -z parameter with the bin/solr script. When referring to the location of ZooKeeper within Solr, it's best to use the addresses of all the servers in the ensemble: if one happens to be down, Solr will automatically be able to send its request to another server in the list.
Alternatively, if you update Solr's include file (solr.in.sh or solr.in.cmd), which overrides defaults used with bin/solr, you will not have to use the -z parameter with bin/solr commands. The section to look for will be commented out; remove the comment marks at the start of the ZK_HOST line and enter the ZooKeeper connect string, and you will not have to enter the connection string when starting Solr. Finally, when you are not using one of the bundled examples to start Solr, make sure you upload your configuration set to ZooKeeper before creating a collection.
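For example, to point the Solr instance to a ZooKeeper ensemble you've started on port 2181 on three servers with chroot /solr (described above), you could pass the full connect string on the command line or set it once in the include file; the zk1, zk2, and zk3 hostnames are placeholders:

    # One-off, on the command line:
    bin/solr start -cloud -z zk1:2181,zk2:2181,zk3:2181/solr

    # Or permanently, in solr.in.sh (in solr.in.cmd, use: set ZK_HOST=...):
    ZK_HOST="zk1:2181,zk2:2181,zk3:2181/solr"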
A few operational points are worth keeping in mind. Since ZooKeeper is a stand-alone application in this scenario, it does not get upgraded as part of a standard Solr upgrade, so keeping it up-to-date with the version distributed with Solr is a manual task. For performance reasons, the ZooKeeper instances are also commonly run on machines separate from the Solr nodes, since ZooKeeper is sensitive to disk and network latency. Finally, ZooKeeper supports protecting its znodes with ACLs; to set up ACL protection of znodes, see the section ZooKeeper Access Control.
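As a taste of what that section covers, a sketch of digest-based ACL protection follows. The provider class names and system properties below exist in Solr, but the exact variable name and credential values are assumptions; defer to the ZooKeeper Access Control section for the authoritative settings:

    # In solr.in.sh: enable Solr's VM-parameters-based ACL and credentials providers
    SOLR_ZK_CREDS_AND_ACLS="-DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider \
     -DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider \
     -DzkDigestUsername=admin-user -DzkDigestPassword=CHANGEME-ADMIN \
     -DzkDigestReadonlyUsername=readonly-user -DzkDigestReadonlyPassword=CHANGEME-READONLY"
    SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS"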
Once Solr is started in cloud mode and connected to the ensemble, you have a SolrCloud cluster backed by an external ZooKeeper. More information on ZooKeeper clusters is available from the ZooKeeper documentation at http://zookeeper.apache.org/doc/r3.4.11/zookeeperAdmin.html#sc_zkMulitServerSetup.