Simple Single Ozone

Requirements
  • Working docker setup
  • AWS CLI (optional)

Ozone in a Single Container

The easiest way to start up an all-in-one Ozone container is to use the latest docker image from Docker Hub:

docker run -p 9878:9878 -p 9876:9876 apache/ozone

This command pulls down the Ozone image from Docker Hub and starts all Ozone services in a single container.
The container runs the required metadata servers (Ozone Manager, Storage Container Manager), one datanode, and the S3-compatible REST server (S3 Gateway).
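
Before moving on, you can do a quick sanity check that the services came up by hitting the two published ports (the curl commands below are just an illustration; any HTTP client works):

# Both endpoints should answer once the container has finished starting.
curl -s -o /dev/null -w "SCM UI: %{http_code}\n" http://localhost:9876/
curl -s -o /dev/null -w "S3 Gateway: %{http_code}\n" http://localhost:9878/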

Local multi-container cluster

If you would like to use a more realistic pseudo-cluster where each component runs in its own container, you can start it with a docker-compose file.

We ship a docker-compose file and an environment file as part of the container image that is uploaded to Docker Hub.

The following commands extract these files from the image on Docker Hub.

docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
docker run apache/ozone cat docker-config > docker-config

Now you can start the cluster with docker-compose:

docker-compose up -d

If you need multiple datanodes, you can just scale it up:

docker-compose up -d --scale datanode=3
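
To confirm that all three datanodes (and the other services) are running, you can list the containers managed by compose:

# Shows the state of each service container; three datanode entries should be listed as Up.
docker-compose ps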

Running S3 Clients

Once the cluster is booted up and ready, you can verify its status by connecting to the SCM’s UI at http://localhost:9876.

The S3 gateway endpoint is exposed on port 9878. You can use Ozone’s S3 support as if you were working against the real S3. S3 buckets are stored under the /s3v volume.
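
Note that the aws CLI refuses to send requests unless some credentials are configured, even though a non-secure Ozone cluster does not validate them. As a minimal sketch, assuming a non-secure cluster, arbitrary placeholder values are enough:

# Placeholder credentials; a non-secure Ozone S3 Gateway accepts any values here.
aws configure set aws_access_key_id anyaccesskey
aws configure set aws_secret_access_key anysecretkey
aws configure set region us-east-1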

Here is how you create a bucket from the command line:

aws s3api --endpoint http://localhost:9878/ create-bucket --bucket=bucket1

The only notable difference in the above command is that you have to pass the endpoint address to the aws s3api command.
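
To double-check that the bucket exists, you can also list all buckets through the same endpoint:

# bucket1 should show up in the list of buckets served by the S3 gateway.
aws s3api --endpoint http://localhost:9878/ list-buckets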

Now let us put a simple file into the S3 bucket hosted by Ozone. We will start by creating a temporary file to upload via Ozone’s S3 support.

ls -1 > /tmp/testfile

The next command uploads this file to Ozone’s S3 bucket using the standard aws s3 command line interface.

aws s3 --endpoint http://localhost:9878 cp --storage-class REDUCED_REDUNDANCY  /tmp/testfile  s3://bucket1/testfile

We can now verify that the file got uploaded by running the list command against our bucket.

aws s3 --endpoint http://localhost:9878 ls s3://bucket1/testfile
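
As a further check, you can download the object again and compare it with the original file (the local path /tmp/testfile.downloaded below is just an illustrative choice):

# Download the object and verify it matches the file we uploaded.
aws s3 --endpoint http://localhost:9878 cp s3://bucket1/testfile /tmp/testfile.downloaded
diff /tmp/testfile /tmp/testfile.downloaded && echo "files match"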

The contents of the bucket can also be viewed in a browser at http://localhost:9878/bucket1?browser.