hadoop + kerberos with GUI programs (eg RStudio with RHadoop)

While the setup from the previous posts works for the hadoop shell commands, you will still fail to access the remote cluster from GUI programs (eg RStudio) and/or hadoop plugins like RHadoop.

There are two reasons for that:

  • GUI programs do not inherit your terminal/shell environment variables, unless you start them from a terminal session with
$ open /Applications/RStudio.app
  • $HADOOP_OPTS / $YARN_OPTS are not evaluated by other programs even if the variables are present in their execution environment.

The first problem is well covered by various blog posts. The main difficulty is to find the correct procedure for your OSX version, since Apple has changed it several times over the years:

  • using a .plist file in ~/.MacOSX (before Mavericks)
  • using a setenv statement in /etc/launchd.conf (Mavericks, see the sketch below)
  • using the launchctl setenv command (from Yosemite on)
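
For example, on Mavericks the equivalent of the launchctl command shown below would be a single setenv line in /etc/launchd.conf (a sketch; launchd only reads this file at boot, so a reboot is needed before it takes effect):

setenv HADOOP_OPTS -Djava.security.krb5.conf=/etc/krb5.conf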

Finding out which variable is evaluated inside your GUI program or plugin may need some experimentation or a look at the source. For java based plugins the variable _JAVA_OPTIONS, which is always evaluated, may be a starting point. For the RHadoop packages the more specific HADOOP_OPTS is already sufficient, so on Yosemite:

$ launchctl setenv HADOOP_OPTS "-Djava.security.krb5.conf=/etc/krb5.conf"
# prefix command with sudo in case you want the setting for all users
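
You can verify the setting with launchctl getenv; note that GUI applications which are already running only pick it up after a restart:

$ launchctl getenv HADOOP_OPTS
-Djava.security.krb5.conf=/etc/krb5.conf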

If you need the setting only inside R/RStudio you could simply add the environment setting in your R scripts before initialising the RHadoop packages.

# point RHadoop at a wrapper script that runs:  hadoop --config ~/remote-hadoop-conf
hadoop.command <- "~/scripts/remote-hadoop"

Sys.setenv(HADOOP_OPTS = "-Djava.security.krb5.conf=/etc/krb5.conf")
Sys.setenv(HADOOP_CMD = hadoop.command)

# load the hdfs plugin for R
library(rhdfs)
hdfs.init()

# print the remote hdfs root directory
print(hdfs.ls("/"))
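
The wrapper script itself is not shown above; a minimal sketch (assuming a plain shell script at the path used in the R snippet) could look like this:

#!/bin/sh
# ~/scripts/remote-hadoop: forward all arguments to the local hadoop client,
# but run it against the remote cluster configuration
exec hadoop --config ~/remote-hadoop-conf "$@"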

Connect to a remote, kerberized hadoop cluster

To use a remote hadoop cluster with kerberos authentication you will need to get a proper krb5.conf file (eg from /etc/krb5.conf on your remote cluster) and place it as /etc/krb5.conf on your OSX client machine. To use this configuration from your OSX hadoop client, add to your .[z]profile:

export HADOOP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf"
export YARN_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf"
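
The file typically has roughly the following structure (realm and host names below are placeholders, not values from an actual site):

[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }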

With java 1.7 this should be sufficient to detect the default realm, the KDC and also any site-specific authentication options. Please make sure the kerberos configuration is already in place when you obtain your ticket with

$ kinit

In case you obtained a ticket beforehand you may have to execute kinit again or log in to your local account again.
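
You can check with klist that the ticket was issued for the expected realm:

$ klist
# the credential cache should contain a krbtgt/YOUR.REALM@YOUR.REALM entry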

For the next step you will need to obtain the remote cluster configuration files, eg by copying them with scp from the remote cluster to a local directory such as ~/remote-hadoop-conf (a sketch of the copy step follows the listing below). The result should be a local copy similar to this:

$ ls -l  ~/remote-hadoop-conf

total 184
-rw-r--r--  1 dirkd  staff  4146 Jun 25  2013 capacity-scheduler.xml
-rw-r--r--  1 dirkd  staff  4381 Oct 21 11:44 core-site.xml
-rw-r--r--  1 dirkd  staff   253 Aug 21 11:46 dfs.includes
-rw-r--r--  1 dirkd  staff     0 Jun 25  2013 excludes
-rw-r--r--  1 dirkd  staff   896 Dec  1 11:44 hadoop-env.sh
-rw-r--r--  1 dirkd  staff  3251 Aug  5 09:50 hadoop-metrics.properties
-rw-r--r--  1 dirkd  staff  4214 Oct  7  2013 hadoop-policy.xml
-rw-r--r--  1 dirkd  staff  7283 Nov  3 16:44 hdfs-site.xml
-rw-r--r--  1 dirkd  staff  8713 Nov 18 16:26 log4j.properties
-rw-r--r--  1 dirkd  staff  6112 Nov  5 16:52 mapred-site.xml
-rw-r--r--  1 dirkd  staff   253 Aug 21 11:46 mapred.includes
-rw-r--r--  1 dirkd  staff   127 Apr  4  2014 taskcontroller.cfg
-rw-r--r--  1 dirkd  staff   931 Oct 20 09:44 topology.table.file
-rw-r--r--  1 dirkd  staff    70 Jul  2 11:52 yarn-env.sh
-rw-r--r--  1 dirkd  staff  5559 Nov  5 16:52 yarn-site.xml

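The copy step itself might look roughly like this; the remote directory /etc/hadoop/conf is an assumption, adjust it to wherever your distribution keeps its client configuration:

$ mkdir -p ~/remote-hadoop-conf
$ scp remote-gateway:/etc/hadoop/conf/* ~/remote-hadoop-conf/
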
Then point your hadoop and hdfs commands to this configuration:

$ hdfs --config ~/remote-hadoop-conf dfs -ls /

If all worked well, you should at this point see the contents of the remote hdfs root directory, and you are ready to use the standard hdfs and hadoop commands remotely.
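
The same --config switch works for the other client commands, for example (the yarn example assumes the remote ResourceManager is reachable from your machine):

$ hadoop --config ~/remote-hadoop-conf fs -ls /user
$ yarn --config ~/remote-hadoop-conf application -list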