Running concurrently with HDFS

Ozone is designed to work with HDFS, so it is easy to deploy Ozone in an existing HDFS cluster.

The container manager part of Ozone can run inside DataNodes as a pluggable module or as a standalone component. This document describes how to start it as an HDFS DataNode plugin.

To activate Ozone, define the service plugin implementation class in the hdfs-site.xml of the DataNodes:

<property>
   <name>dfs.datanode.plugins</name>
   <value>org.apache.hadoop.ozone.HddsDatanodeService</value>
</property>
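
As a quick sanity check (assuming the property was added to hdfs-site.xml on the DataNode hosts), you can read the value back with the standard hdfs getconf command:

hdfs getconf -confKey dfs.datanode.plugins
# should print: org.apache.hadoop.ozone.HddsDatanodeService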

You also need to add the jar files under /opt/ozone/share/ozone/lib/ to the DataNode classpath:

export HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/*
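
To confirm the jars are actually visible, you can inspect the resulting classpath with the standard hadoop classpath command (the grep pattern here is only illustrative):

hadoop classpath | tr ':' '\n' | grep -i ozone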

To start Ozone with HDFS you should start the following components (a sketch of the corresponding start commands follows the list):

  1. HDFS Namenode (from Hadoop distribution)
  2. HDFS Datanode (from the Hadoop distribution with the plugin on the classpath from the Ozone distribution)
  3. Ozone Manager (from the Ozone distribution)
  4. Storage Container Manager (from the Ozone distribution)
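
As a rough sketch, assuming the hdfs and ozone launcher scripts from the two distributions are on the PATH, the start sequence could look like this:

# from the Hadoop distribution
hdfs --daemon start namenode
hdfs --daemon start datanode    # loads the Ozone plugin via HADOOP_CLASSPATH

# from the Ozone distribution
ozone --daemon start om
ozone --daemon start scm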

Check the DataNode log to verify whether the HDDS/Ozone plugin has started. The log should contain a line like this:

2018-09-17 16:19:24 INFO  HddsDatanodeService:158 - Started plug-in org.apache.hadoop.ozone.web.OzoneHddsDatanodeService@6f94fb9d
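
One way to look for this line (the log directory and file name depend on your environment; the path below is only an example) is:

grep 'Started plug-in' /path/to/hadoop-logs/*datanode*.log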