The YARN service registry can be used in a number of ways:
A user of the registry may be both a publisher of entries (Service Records) and a consumer of other services located via their service records. Different parts of a distributed application may also use it for different purposes. As an example, the Application Master of a YARN application can publish bindings for use by its worker containers. The code running in the containers can then look up the bindings to communicate with that manager, even if it is restarted on different nodes in the cluster. Client applications can look up external service endpoints to interact with the AM via a public API.
The registry cannot be used:
This record MAY have the application attempt persistence policy and the ID of the application attempt:
yarn:persistence = "application_attempt"
yarn:id = ${application_attemptId}
This means that the record will be deleted when the application attempt completes, even if a new attempt is created. Every application attempt will then have to re-register the endpoint (which may be needed to locate the service anyway).
Alternatively, the record MAY have the persistence policy of “application”:
yarn:persistence = "application"
yarn:id = ${applicationId}
The choice of path is an application specific one. For services with a YARN application name guaranteed to be unique, we recommend a convention of:
/users/${username}/applications/${service-class}/${instance-name}
Alternatively, the application Id can be used in the path:
/users/${username}/applications/${service-class}/${applicationId}
The latter makes mapping a YARN application listing entry to a service record trivial.
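Both conventions can be assembled with simple string formatting. The helper names below are illustrative only (the Hadoop registry client also ships its own path-building utilities in `RegistryUtils`); this is just a sketch of the two layouts:

```java
/**
 * Illustrative builders for the two recommended registry path layouts.
 * Method and parameter names here are invented for the example.
 */
public class ServicePaths {

    // /users/${username}/applications/${service-class}/${instance-name}
    static String byInstanceName(String username, String serviceClass, String instanceName) {
        return String.format("/users/%s/applications/%s/%s",
            username, serviceClass, instanceName);
    }

    // /users/${username}/applications/${service-class}/${applicationId}
    static String byApplicationId(String username, String serviceClass, String applicationId) {
        return String.format("/users/%s/applications/%s/%s",
            username, serviceClass, applicationId);
    }

    public static void main(String[] args) {
        System.out.println(byInstanceName("alice", "org-apache-hbase", "hbase1"));
        System.out.println(byApplicationId("alice", "org-apache-hbase",
            "application_1408631738011_0001"));
    }
}
```

The second form is what makes the application-listing-to-record mapping trivial: the last path element is exactly the YARN application ID.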
Client applications may locate the service by looking up the record at its known path.
After locating a service record, the client can enumerate the external bindings and locate the entry with the desired API.
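The enumeration step amounts to scanning the record's external endpoint list for the desired API name. The classes below are simplified stand-ins for the real Hadoop registry types (which carry the same `api` field on each endpoint), so this is a sketch of the selection logic rather than the actual client code:

```java
import java.util.*;

/** Sketch of selecting an external endpoint by API; types are simplified stand-ins. */
public class EndpointLookup {

    /** Minimal model of a registry endpoint: an API name plus its addresses. */
    static final class Endpoint {
        final String api;
        final List<String> addresses;
        Endpoint(String api, List<String> addresses) {
            this.api = api;
            this.addresses = addresses;
        }
    }

    /** Return the first external endpoint exporting the wanted API, or null if absent. */
    static Endpoint findEndpoint(List<Endpoint> external, String api) {
        for (Endpoint e : external) {
            if (api.equals(e.api)) {
                return e;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Pretend this list came from a resolved service record.
        List<Endpoint> external = Arrays.asList(
            new Endpoint("http://", Arrays.asList("http://host1:8080")),
            new Endpoint("classpath:org.apache.hbase", Arrays.asList("host1:2181")));
        Endpoint e = findEndpoint(external, "classpath:org.apache.hbase");
        System.out.println(e.addresses.get(0));
    }
}
```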
Here all containers in a YARN application are publishing service endpoints for public consumption.
They then register a service record on a path consisting of:
${base-path} + "/" + RegistryPathUtils.encodeYarnID(containerId)
This record should have the container persistence policy and the ID of the container:
yarn:persistence = "container"
yarn:id = containerId
When the container is terminated, the entry will be automatically deleted.
The exported service endpoints of this container-deployed service should be listed in the external endpoint list of the service record.
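For illustration, such a container's record might serialize as JSON along these lines. The field names follow the registry's ServiceRecord format; all values here are invented examples, and the exact serialization of endpoint addresses varies between Hadoop versions:

```json
{
  "type" : "JSONServiceRecord",
  "description" : "worker container web endpoint",
  "yarn:persistence" : "container",
  "yarn:id" : "container_1408631738011_0001_01_000002",
  "external" : [ {
    "api" : "http://",
    "addressType" : "uri",
    "protocolType" : "webui",
    "addresses" : [ { "uri" : "http://host4:40881" } ]
  } ]
}
```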
Services which are generally fixed in a cluster, but which need to publish binding and configuration information, may be published in the registry. Example: an Apache Oozie service. Services external to the cluster, to which deployed applications may bind, may also be published. Example: an Amazon Dynamo instance.
These services can be registered under paths which belong to the users running the service, such as /users/oozie or /users/hbase. Client applications would look the service up under this path. While this can authenticate the origin of the service record, it does rely on client applications knowing the username under which a service is deployed, or being configured with the full path.
The alternative is for the services to be deployed under a static services path, under /services. For example, /services/oozie could contain the registration of the Oozie service. As the permissions for this path are restricted to pre-configured system accounts, the presence of a service registration on this path on a secure cluster, confirms that it was registered by the cluster administration tools.
Here YARN containers register with their AM to receive work, usually via some heartbeat mechanism in which they report in regularly. If the AM is configured for containers to outlive the application attempt, the containers keep running when an AM fails. These containers will need to bind to any restarted AM. They may also wish to conclude that, if an AM does not restart, they should eventually time out and terminate themselves. Such a policy helps the application react to network partitions.
Management ports and bindings are simply other endpoints to publish. These should be published as internal endpoints, as they are not intended for public consumption.
A client application wishes to locate all services implementing a specific API, such as "classpath://org.apache.hbase"
This algorithm describes a depth-first search of the registry tree. Variations are of course possible, including a breadth-first search, or halting the search as soon as a single matching entry is found. There is also the option of parallel searches of different subtrees; this may reduce search time, albeit at the price of a higher client load on the registry infrastructure.
A utility class, RegistryUtils, provides static utility methods for common registry operations. In particular, RegistryUtils.listServiceRecords(registryOperations, path) performs the listing and collection of all immediate child record entries of a specified path.
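The depth-first scan can be sketched against a simplified in-memory registry. Here the registry tree is mocked as two maps; a real client would call RegistryOperations.list(path) for the children and resolve each child's record instead:

```java
import java.util.*;

/**
 * Sketch of a depth-first search for all registry entries exporting a given
 * API. The registry is mocked as two maps for the sake of a runnable example.
 */
public class RegistrySearch {
    final Map<String, List<String>> children;   // path -> child paths
    final Map<String, String> recordApi;        // path -> API of its record, if any

    RegistrySearch(Map<String, List<String>> children, Map<String, String> recordApi) {
        this.children = children;
        this.recordApi = recordApi;
    }

    /** Depth-first traversal, collecting every path whose record exports the API. */
    List<String> findServices(String path, String api) {
        List<String> matches = new ArrayList<>();
        if (api.equals(recordApi.get(path))) {
            matches.add(path);
        }
        for (String child : children.getOrDefault(path, Collections.emptyList())) {
            matches.addAll(findServices(child, api));
        }
        return matches;
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = new HashMap<>();
        tree.put("/users", Arrays.asList("/users/alice", "/users/bob"));
        tree.put("/users/alice", Arrays.asList("/users/alice/hbase1"));
        Map<String, String> records = new HashMap<>();
        records.put("/users/alice/hbase1", "classpath:org.apache.hbase");
        RegistrySearch search = new RegistrySearch(tree, records);
        System.out.println(search.findServices("/users", "classpath:org.apache.hbase"));
    }
}
```

Halting on the first match, or recursing into subtrees in parallel, are straightforward variations on the loop above.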
Client applications are left with the problem of what to do when an endpoint is not valid: specifically, when a service is not running, what should be done?
Some transports assume that the outage is transient, and that spinning retries against the original binding is the correct strategy. This is the default policy of the Hadoop IPC client.
Other transports fail fast, immediately reporting the failure via an exception or other mechanism. This is directly visible to the client —but does allow the client to rescan the registry and rebind to the application.
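A fail-fast client can wrap its calls in a rescan-and-rebind loop. In this sketch the registry lookup is modeled as a plain Supplier and the transport call as a Function; all names are invented for the example, and a real client would resolve the service record again on each iteration:

```java
import java.util.*;
import java.util.function.Function;
import java.util.function.Supplier;

/** Sketch of a fail-fast client that re-looks-up its binding on failure. */
public class RebindingClient {

    /** Retry a call, re-resolving the binding before each attempt (maxAttempts >= 1). */
    static <T> T callWithRebind(Supplier<String> lookup,
                                Function<String, T> call,
                                int maxAttempts) {
        RuntimeException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            String binding = lookup.get();     // rescan the registry
            try {
                return call.apply(binding);    // fail-fast transport call
            } catch (RuntimeException e) {
                last = e;                      // endpoint invalid: rebind and retry
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        // Simulated registry: the published binding changes after the first lookup.
        Iterator<String> bindings = Arrays.asList("host1:8080", "host2:8080").iterator();
        String result = callWithRebind(
            bindings::next,
            b -> {
                if (b.startsWith("host1")) {
                    throw new RuntimeException("connection refused");
                }
                return "ok@" + b;
            },
            3);
        System.out.println(result);
    }
}
```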
Finally, some applications have been designed for dynamic failover from the outset: their published binding information is actually a ZooKeeper path. Apache HBase and Apache Accumulo are examples of this. The registry is used for the initial lookup of the binding, after which the clients are inherently resilient to failure.