Mon. Dec 16th, 2019

Apache Kudu 1.11.1 released: Hadoop data storage system


Apache Kudu is open source software. A Kudu cluster stores tables that look just like the tables you’re used to from relational (SQL) databases. A table can be as simple as a binary key and value, or as complex as a few hundred different strongly-typed attributes.

Just like in SQL, every table has a PRIMARY KEY made up of one or more columns. This might be a single column, such as a unique user identifier, or a compound key, such as a (host, metric, timestamp) tuple for a machine time-series database. Rows can be efficiently read, updated, or deleted by their primary key.
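The compound-key model above can be sketched with a plain in-memory mapping. This is illustrative Python, not Kudu’s client API: rows live under a (host, metric, timestamp) key and are read, updated, or deleted by that key.

```python
# Illustrative sketch only -- a dict keyed by the compound primary key
# (host, metric, timestamp), standing in for a Kudu table.
table = {}

def upsert(host, metric, timestamp, value):
    """Insert or update a row addressed by its primary key."""
    table[(host, metric, timestamp)] = {"value": value}

def read(host, metric, timestamp):
    """Read a single row by primary key, or None if absent."""
    return table.get((host, metric, timestamp))

def delete(host, metric, timestamp):
    """Delete a row by primary key (no-op if absent)."""
    table.pop((host, metric, timestamp), None)

upsert("db-01", "cpu_load", 1576000000, 0.42)
print(read("db-01", "cpu_load", 1576000000))  # {'value': 0.42}
delete("db-01", "cpu_load", 1576000000)
print(read("db-01", "cpu_load", 1576000000))  # None
```

Because every row is addressed by the full key tuple, point reads, updates, and deletes are constant-time here; in Kudu itself the primary key likewise determines where a row lives, which is what makes those operations efficient.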

Kudu’s simple data model makes it a breeze to port legacy applications or build new ones: there is no need to worry about how to encode your data into binary blobs or make sense of a huge database full of hard-to-interpret JSON. Tables are self-describing, so you can use standard tools like SQL engines or Spark to analyze your data.


Apache Kudu 1.11.1 released


Fixed Issues

  • Fixed an issue with distributing the libnuma dynamic library in the kudu-binary JAR artifact, and fixed the static linking of libnuma.a into the kudu-master and kudu-tserver binaries when building Kudu from source in release mode. The fix removes both the numactl and memkind projects from Kudu’s third-party dependencies and makes the dependency on the libmemkind library optional: the library is opened with dlopen() and the required symbols are resolved via dlsym() (see KUDU-2990).
  • Fixed a crash in the kudu cluster rebalance CLI tool when run against a location-aware cluster in which a tablet server in one location hosts no tablet replicas (see KUDU-2987).
  • Fixed an issue with connection negotiation using the SASL mechanism when the server’s FQDN is longer than 64 characters (see KUDU-2989).
  • Fixed an issue in the test harness of the kudu-binary JAR artifact. With this fix, the kudu-master and kudu-tserver processes of the mini-cluster test harness no longer rely on the test NTP server to synchronize their built-in NTP clients. Instead, the harness relies on the local machine’s clock, synchronized by the system NTP daemon (see KUDU-2994).
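The optional-dependency pattern described in the KUDU-2990 fix (open a shared library at runtime with dlopen() and resolve symbols with dlsym(), degrading gracefully when the library is absent) can be sketched in Python with ctypes. This is a hedged illustration, not Kudu’s C++ code; libm stands in for libmemkind.

```python
import ctypes
import ctypes.util

# Locate an optional shared library at runtime. libm is used here only
# as a stand-in for an optional dependency such as libmemkind.
path = ctypes.util.find_library("m")

if path:
    libm = ctypes.CDLL(path)          # analogous to dlopen()
    cos = libm.cos                    # analogous to dlsym()
    cos.restype = ctypes.c_double
    cos.argtypes = [ctypes.c_double]
    print(cos(0.0))                   # 1.0
else:
    # The library is absent: disable the feature instead of failing
    # to start, which is the point of making the dependency optional.
    print("optional library unavailable; feature disabled")
```

The design benefit mirrors the Kudu fix: binaries no longer carry a hard link-time dependency, so they start and run even on hosts where the optional library is not installed.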