
PowerScale OneFS HDFS Configuration Guide

Configure HDFS transparent data encryption (CLI)

Configure HDFS TDE using the OneFS command-line interface (CLI). Read the following workflow before you begin.

  1. On the Hadoop client, create an encryption zone key on the Key Management Server (KMS) responsible for generating encryption keys for the files. Note that the keyadmin user can be used to perform key creation.
     ./hadoop key create key1 -provider <provider-path>
    For example:
    hadoop key create key5 -provider kms://http@ambari100-c.test.isilon.com:9090/kms
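    Under the hood, the hadoop key create command issues a request to the Hadoop KMS REST API (POST /kms/v1/keys). The following is a minimal sketch of how that request could be built, reusing the example KMS address above; the helper name and the cipher/length defaults are illustrative, not part of the Hadoop CLI:

```python
import json

def build_create_key_request(kms_base_url: str, key_name: str,
                             cipher: str = "AES/CTR/NoPadding",
                             length: int = 128):
    """Build the URL and JSON body for the KMS REST 'create key' call.

    Hypothetical helper for illustration only; the Hadoop CLI performs
    this call for you when you run `hadoop key create`.
    """
    url = f"{kms_base_url.rstrip('/')}/kms/v1/keys"
    body = json.dumps({"name": key_name, "cipher": cipher, "length": length})
    return url, body

# Example using the KMS address from the hadoop key create example above:
url, body = build_create_key_request(
    "http://ambari100-c.test.isilon.com:9090", "key5")
```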

    If you do not want to pass the -provider option with every hadoop key <operation> command, find your environment below and set the hadoop.security.key.provider.path property.

    Table 1. KMS specifications

    The following table shows where to set the property for each KMS and HDP version.

    Ranger KMS
      HDP < 2.6.x: When you add the Ranger KMS, you are prompted with the recommended settings for the KMS provider, and the property is set automatically in HDFS > Configs > Advanced > Advanced core-site. If the property is not configured automatically, add it in HDFS > Configs > Advanced > Custom core-site.
      HDP >= 3.0.1: Set the property in Ambari > Services > OneFS > Configs > Advanced > Custom core-site.
    Other KMS
      HDP < 2.6.x: Set the property in HDFS > Configs > Advanced > Custom core-site if it is not configured automatically.
      HDP >= 3.0.1: Set the property in Ambari > Services > OneFS > Configs > Advanced > Custom core-site.

    Steps:

    HDP version < 2.6.x -- not using Ranger KMS

    1. Navigate to HDFS > Configs > Advanced > Custom core-site.
    2. Click Add Property.
    3. Enter the property as: hadoop.security.key.provider.path=kms://<kms-url>/kms

      For example,

      hadoop.security.key.provider.path=kms://http@m105.solarch.lab.emc.com:1688/kms
    4. Click Add.
    5. Save settings.

    HDP version 3.0.1 or later -- any KMS

    1. Navigate to Ambari > Services > OneFS > Configs > Advanced > Custom core-site.
    2. Click Add Property.
    3. Enter the property as: hadoop.security.key.provider.path=kms://<kms-url>/kms

      For example,

      hadoop.security.key.provider.path=kms://http@m105.solarch.lab.emc.com:9292/kms
    4. Click Add.
    5. Save settings.
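    Whichever path you use in Ambari, the resulting entry in core-site.xml has the following shape (the host and port are the placeholder values from the examples above):

```xml
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://http@m105.solarch.lab.emc.com:9292/kms</value>
</property>
```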
    Authorization Exception Errors

    The OneFS Key Management Server is configured per access zone. If you receive an Authorization Exception error similar to the following:

    key1 has not been created. org.apache.hadoop.security.authorize.AuthorizationException: User:hdfs not allowed to do 'CREATE_KEY' on 'key1'
    then log in to Ranger as the keyadmin user (the default password is keyadmin) and perform the following step.
    • Click on the KMS instance and edit the user you want to allow key administration privileges and then save the changes.

    Note that this example uses the Ranger KMS server. Follow similar procedures for other KMS servers to fix user authorization issues.

  2. On the OneFS cluster, configure the KMS URL, create a directory, and make it an encryption zone. The encryption zone must be somewhere within the HDFS root directory for that zone.
    isi hdfs crypto settings modify --kms-url=<string>
    
    isi hdfs crypto encryption-zones create --path=<path> --key-name=<key-name>
    For example:
    isi hdfs crypto settings modify --kms-url=http://m105.solarch.lab.emc.com:9292 --zone=hdfs -v
    
    isi hdfs crypto settings view --zone=hdfs
    
    isi hdfs crypto encryption-zones create --path=/ifs/hdfs/A --key-name=keyA --zone=hdfs -v
    Note that port 9292 in the example above is specific to the Ranger KMS server; a different KMS server may require a different port. Do not use port 6080, which serves the Ranger UI only.
    The HDFS root path in this example is /ifs/hdfs. Replace it with your own HDFS root path, followed by the empty directory for your encryption zone (for example, /ifs/hdfs/A as above).
    Important: Do not create the encryption zone from a DFS client in the Hadoop cluster. The encryption zone must be created using the OneFS CLI as shown above, otherwise you will see an error similar to the following on the console and in the OneFS hdfs.log file:
    RemoteException: Unknown RPC: createEncryptionZone
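    The KMS URL passed to isi in this step is the plain HTTP form of the provider path used on the Hadoop side (compare kms://http@m105.solarch.lab.emc.com:9292/kms with http://m105.solarch.lab.emc.com:9292). A small helper sketching that conversion, assuming the kms://<scheme>@<host>:<port>/<path> layout shown above:

```python
def provider_path_to_kms_url(provider_path: str) -> str:
    """Convert a Hadoop KMS provider path such as
    kms://http@host:9292/kms into the plain URL form
    (http://host:9292) expected by `isi hdfs crypto settings modify`.

    Illustrative helper only; not part of the OneFS or Hadoop CLIs.
    """
    if not provider_path.startswith("kms://"):
        raise ValueError("expected a kms:// provider path")
    rest = provider_path[len("kms://"):]        # e.g. http@host:9292/kms
    scheme, _, authority = rest.partition("@")  # scheme before '@', host:port after
    host_port = authority.split("/", 1)[0]      # drop the trailing /kms path
    return f"{scheme}://{host_port}"
```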
  3. List the encryption zones.
    isi hdfs crypto encryption-zones list
    With the encryption zone defined on the OneFS cluster, you will be able to list the encryption zone immediately from any DFS client in the Hadoop cluster.
  4. Test reading and writing a file in the new encryption zone as an authorized user.
    The ambari-qa user is the default smoke-test user that comes with HDP. For this test, update the KMS policy to allow the ambari-qa user to get keys and metadata and to generate and decrypt encryption keys. With the policy updated, write and read a test.txt file in the encryption zone on OneFS (the /A zone from the previous example) from a DFS client in the Hadoop cluster as the ambari-qa user.
    cat test.txt
    hdfs dfs -put test.txt /A
    hdfs dfs -ls /A
    hdfs dfs -cat /A/test.txt
  5. Verify that the test file is actually encrypted on the OneFS cluster by logging in to OneFS as the root administrator and displaying the contents of the test file in the test directory (/ifs/hdfs/A in our example).
    cd /ifs/hdfs/A
    ls
    cat test.txt
    Result: The contents of the test file are encrypted; the original text is not displayed, even to the privileged root user on OneFS. The test file created by the ambari-qa user has read permissions for both the Hadoop group and everyone. Because the "hive" user is defined in the KMS with decrypt privileges, it can decrypt the file created by the ambari-qa user. However, it cannot place files into the encryption zone (/A in this case), because the Hadoop group it belongs to lacks write permission.
  6. If you do the same test with a user not defined in the KMS for the specified encryption zone, for example the "mapred" user, reading of the test file is denied as shown in the following example.
    hdfs dfs -cat /A/test.txt
    
    cat: user:mapred not allowed to do 'DECRYPT_EEK' on 'keyA'
  7. To delete an encryption zone, deleting the encryption zone directory on OneFS is sufficient.
