Hadoop KMS is a cryptographic key management server based on Hadoop’s KeyProvider API.
It provides a client and a server component which communicate over HTTP using a REST API.
The client is a KeyProvider implementation that interacts with the KMS using the KMS HTTP REST API.
KMS and its client have built-in security: they support HTTP SPNEGO Kerberos authentication and HTTPS secure transport.
KMS is a Java web-application and it runs using a pre-configured Tomcat bundled with the Hadoop distribution.
The KMS client KeyProvider uses the kms scheme, and the embedded URL must be the URL of the KMS. For example, for a KMS running on http://localhost:16000/kms, the KeyProvider URI is kms://http@localhost:16000/kms. And, for a KMS running on https://localhost:16000/kms, the KeyProvider URI is kms://https@localhost:16000/kms.
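For illustration, the following minimal Java sketch (the class name and the assumption of an unsecured KMS at the URL above are illustrative, not part of this guide) obtains the KMS client KeyProvider through the standard KeyProviderFactory and lists the available keys:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KmsClientExample {
  public static void main(String[] args) throws Exception {
    // The kms:// scheme selects the KMS client KeyProvider implementation;
    // http@localhost:16000/kms embeds the URL of the KMS itself.
    URI uri = new URI("kms://http@localhost:16000/kms");
    KeyProvider provider = KeyProviderFactory.get(uri, new Configuration());
    // Each call below is translated into a KMS HTTP REST API request.
    for (String keyName : provider.getKeys()) {
      System.out.println(keyName);
    }
  }
}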
The following is an example to configure HDFS NameNode as a KMS client in core-site.xml:
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://http@localhost:16000/kms</value>
  <description>
    The KeyProvider to use when interacting with encryption keys used
    when reading and writing to an encryption zone.
  </description>
</property>
Configure the KMS backing KeyProvider properties in the etc/hadoop/kms-site.xml configuration file:
<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://file@/${user.home}/kms.keystore</value>
</property>

<property>
  <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
  <value>kms.keystore.password</value>
</property>
The password file is looked up in Hadoop's configuration directory via the classpath.
NOTE: You need to restart the KMS for the configuration changes to take effect.
NOTE: The KMS server can choose any KeyProvider implementation as the backing provider. The example here uses a JavaKeyStoreProvider, which should only be used for experimental purposes and never be used in production. For detailed usage and caveats of JavaKeyStoreProvider, please see Keystore Passwords section of the Credential Provider API.
KMS has two kinds of caching: a CachingKeyProvider for caching the encryption keys, and a KeyProvider for caching the EEKs.
KMS caches encryption keys for a short period of time to avoid excessive hits to the underlying KeyProvider.
This cache is enabled by default (it can be disabled by setting the hadoop.kms.cache.enable boolean property to false).
This cache is used with the following three methods only: getCurrentKey(), getKeyVersion() and getMetadata().
For the getCurrentKey() method, cached entries are kept for a maximum of 30000 milliseconds regardless of the number of times the key is accessed (to avoid stale keys being considered current).
For the getKeyVersion() method, cached entries are kept with a default inactivity timeout of 600000 milliseconds (10 mins).
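For reference, a minimal client sketch of the three cached calls (the key name testKey1 is hypothetical; the cache itself sits inside the KMS server, in front of the backing KeyProvider):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KeyCacheCalls {
  public static void main(String[] args) throws Exception {
    KeyProvider provider = KeyProviderFactory.get(
        new URI("kms://http@localhost:16000/kms"), new Configuration());
    // Only these three methods are served by the KMS key cache.
    KeyProvider.KeyVersion current = provider.getCurrentKey("testKey1");
    KeyProvider.KeyVersion version =
        provider.getKeyVersion(current.getVersionName());
    KeyProvider.Metadata metadata = provider.getMetadata("testKey1");
    System.out.println(metadata.getVersions());
  }
}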
These configurations can be changed via the following properties in the etc/hadoop/kms-site.xml configuration file:
<property>
  <name>hadoop.kms.cache.enable</name>
  <value>true</value>
</property>

<property>
  <name>hadoop.kms.cache.timeout.ms</name>
  <value>600000</value>
</property>

<property>
  <name>hadoop.kms.current.key.cache.timeout.ms</name>
  <value>30000</value>
</property>
Architecturally, both server-side (e.g. KMS) and client-side (e.g. NameNode) have a cache for EEKs. The following are configurable on the cache:
- the maximum number of EEKs cached per key;
- the low watermark: when the number of cached EEKs for a key falls below it, the cache is refilled asynchronously;
- the number of threads used for refilling;
- the expiry of cached EEKs, in milliseconds.
Note that due to the asynchronous filling mechanism, it is possible that after rollNewVersion(), the caller still gets the old EEKs. In the worst case, the caller may get up to (server-side cache size + client-side cache size) number of old EEKs, or until both caches expire. This behavior is a trade-off to avoid locking on the cache, and is acceptable since old-version EEKs can still be used to decrypt.
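To make the EEK flow concrete, here is a minimal sketch using the KeyProviderCryptoExtension API (the key name is hypothetical). The generate call may be served from the caches, so shortly after a rollover it may still return an EEK of the previous key version; the decrypt call succeeds for such old-version EEKs:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class EekExample {
  public static void main(String[] args) throws Exception {
    KeyProvider provider = KeyProviderFactory.get(
        new URI("kms://http@localhost:16000/kms"), new Configuration());
    KeyProviderCryptoExtension crypto =
        KeyProviderCryptoExtension.createKeyProviderCryptoExtension(provider);
    // GENERATE_EEK: may be served from the server/client EEK caches, so shortly
    // after rollNewVersion() the EEK may still use the previous key version.
    EncryptedKeyVersion eek = crypto.generateEncryptedKey("testKey1");
    // DECRYPT_EEK: old-version EEKs remain decryptable after a rollover.
    KeyProvider.KeyVersion ek = crypto.decryptEncryptedKey(eek);
    System.out.println(ek.getName());
  }
}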
Below are the configurations and their default values:
The server-side cache can be configured via the following properties in the etc/hadoop/kms-site.xml configuration file:
<property>
  <name>hadoop.security.kms.encrypted.key.cache.size</name>
  <value>500</value>
</property>

<property>
  <name>hadoop.security.kms.encrypted.key.cache.low.watermark</name>
  <value>0.3</value>
</property>

<property>
  <name>hadoop.security.kms.encrypted.key.cache.num.fill.threads</name>
  <value>2</value>
</property>

<property>
  <name>hadoop.security.kms.encrypted.key.cache.expiry</name>
  <value>43200000</value>
</property>
The client-side cache can be configured via the following properties in the etc/hadoop/core-site.xml configuration file:
<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.size</name>
  <value>500</value>
</property>

<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.low-watermark</name>
  <value>0.3</value>
</property>

<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.num.refill.threads</name>
  <value>2</value>
</property>

<property>
  <name>hadoop.security.kms.client.encrypted.key.cache.expiry</name>
  <value>43200000</value>
</property>
Audit logs are aggregated for API accesses to the GET_KEY_VERSION, GET_CURRENT_KEY, DECRYPT_EEK, GENERATE_EEK operations.
Entries are grouped by the (user, key, operation) combined key for a configurable aggregation interval, after which the number of accesses to the specified end-point by the user for a given key is flushed to the audit log.
The aggregation interval is configured via the following property:
<property>
  <name>hadoop.kms.aggregation.delay.ms</name>
  <value>10000</value>
</property>
To start/stop KMS, use KMS's sbin/kms.sh script. For example:
hadoop-2.10.0 $ sbin/kms.sh start
NOTE: Invoking the script without any parameters lists all possible parameters (start, stop, run, etc.). The kms.sh script is a wrapper for Tomcat's catalina.sh script that sets the environment variables and Java System properties required to run KMS.
To configure the embedded Tomcat, go to the share/hadoop/kms/tomcat/conf directory.
KMS pre-configures the HTTP and Admin ports in Tomcat’s server.xml to 16000 and 16001.
Tomcat logs are also preconfigured to go to Hadoop’s logs/ directory.
The following environment variables (which can be set in KMS's etc/hadoop/kms-env.sh script) can be used to alter those values:
- KMS_HTTP_PORT
- KMS_ADMIN_PORT
- KMS_LOG
NOTE: You need to restart the KMS for the configuration changes to take effect.
The following environment variable (which can be set in KMS's etc/hadoop/kms-env.sh script) can be used to specify the location of any required native libraries, e.g. the Tomcat native Apache Portable Runtime (APR) libraries:
- JAVA_LIBRARY_PATH
Configure the Kerberos etc/krb5.conf file with the information of your KDC server.
Create a service principal and its keytab for the KMS; it must be an HTTP service principal.
Configure KMS etc/hadoop/kms-site.xml with the correct security values, for example:
<property>
  <name>hadoop.kms.authentication.type</name>
  <value>kerberos</value>
</property>

<property>
  <name>hadoop.kms.authentication.kerberos.keytab</name>
  <value>${user.home}/kms.keytab</value>
</property>

<property>
  <name>hadoop.kms.authentication.kerberos.principal</name>
  <value>HTTP/localhost</value>
</property>

<property>
  <name>hadoop.kms.authentication.kerberos.name.rules</name>
  <value>DEFAULT</value>
</property>
NOTE: You need to restart the KMS for the configuration changes to take effect.
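On the client side, a process typically logs in from a keytab before contacting a Kerberized KMS. A minimal sketch, assuming a hypothetical client principal and keytab path; the KMS client then authenticates via HTTP SPNEGO with the logged-in credentials:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    // Log in with a client principal; subsequent KMS requests from this
    // process use SPNEGO with these credentials.
    UserGroupInformation.loginUserFromKeytab(
        "client@EXAMPLE.COM", "/etc/security/keytabs/client.keytab");
  }
}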
Each proxyuser must be configured in etc/hadoop/kms-site.xml using the following properties:
<property>
  <name>hadoop.kms.proxyuser.#USER#.users</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.kms.proxyuser.#USER#.groups</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.kms.proxyuser.#USER#.hosts</name>
  <value>*</value>
</property>
#USER# is the username of the proxyuser to configure.
The users property indicates the users that can be impersonated.
The groups property indicates the groups users being impersonated must belong to.
At least one of the users or groups properties must be defined. If both are specified, then the configured proxyuser will be able to impersonate any user in the users list and any user belonging to one of the groups in the groups list.
The hosts property indicates from which host the proxyuser can make impersonation requests.
If users, groups or hosts has a *, it means there are no restrictions for the proxyuser regarding users, groups or hosts.
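For illustration, a client-side sketch of impersonation using Hadoop's UserGroupInformation (the end user 'alice' is hypothetical); KMS requests issued inside doAs() are evaluated against the proxyuser rules above:

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserExample {
  public static void main(String[] args) throws Exception {
    UserGroupInformation realUser = UserGroupInformation.getLoginUser();
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser("alice", realUser);
    proxyUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      // KMS operations performed here carry 'alice' as the effective user;
      // the KMS checks the real user against the users/groups/hosts rules.
      return null;
    });
  }
}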
To configure KMS to work over HTTPS the following two properties must be set in the etc/hadoop/kms-env.sh script (shown with default values):
- KMS_SSL_KEYSTORE_FILE=${HOME}/.keystore
- KMS_SSL_KEYSTORE_PASS=password
In the KMS tomcat/conf directory, replace the server.xml file with the provided ssl-server.xml file.
You need to create an SSL certificate for the KMS. As the kms Unix user, use the Java keytool command to create the SSL certificate:
$ keytool -genkey -alias tomcat -keyalg RSA
You will be asked a series of questions in an interactive prompt. It will create the keystore file, which will be named .keystore and located in the kms user home directory.
The password you enter for “keystore password” must match the value of the KMS_SSL_KEYSTORE_PASS environment variable set in the kms-env.sh script in the configuration directory.
The answer to “What is your first and last name?” (i.e. “CN”) must be the hostname of the machine where the KMS will be running.
NOTE: You need to restart the KMS for the configuration changes to take effect.
Set environment variable KMS_SSL_CLIENT_AUTH to change client authentication. The default is false. See clientAuth in Tomcat 6.0 SSL Support.
Set environment variable KMS_SSL_ENABLED_PROTOCOLS to specify a list of enabled SSL protocols. The default list includes TLSv1, TLSv1.1, TLSv1.2, and SSLv2Hello. See sslEnabledProtocols in Tomcat 6.0 SSL Support.
In order to support some old SSL clients, the default encryption ciphers include a few relatively weaker ciphers. Set the environment variable KMS_SSL_CIPHERS to override. The value is a comma-separated list of ciphers as documented in the Tomcat Wiki.
KMS supports ACLs (Access Control Lists) for fine-grained permission control.
Two levels of ACLs exist in KMS: KMS ACLs and Key ACLs. KMS ACLs control access at KMS operation level, and precede Key ACLs. In particular, only if permission is granted at KMS ACLs level, shall the permission check against Key ACLs be performed.
The configuration and usage of KMS ACLs and Key ACLs are described in the sections below.
The KMS ACL configuration is defined in the KMS etc/hadoop/kms-acls.xml configuration file. This file is hot-reloaded when it changes.
KMS supports both fine-grained access control and blacklists for KMS operations via a set of ACL configuration properties.
A user accessing KMS is first checked for inclusion in the Access Control List for the requested operation and then checked for exclusion in the blacklist for the operation before access is granted.
<configuration>
  <property>
    <name>hadoop.kms.acl.CREATE</name>
    <value>*</value>
    <description>
      ACL for create-key operations.
      If the user is not in the GET ACL, the key material is not returned
      as part of the response.
    </description>
  </property>

  <property>
    <name>hadoop.kms.blacklist.CREATE</name>
    <value>hdfs,foo</value>
    <description>
      Blacklist for create-key operations.
      If the user is in the Blacklist, the key material is not returned
      as part of the response.
    </description>
  </property>

  <property>
    <name>hadoop.kms.acl.DELETE</name>
    <value>*</value>
    <description>
      ACL for delete-key operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.blacklist.DELETE</name>
    <value>hdfs,foo</value>
    <description>
      Blacklist for delete-key operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.acl.ROLLOVER</name>
    <value>*</value>
    <description>
      ACL for rollover-key operations.
      If the user is not in the GET ACL, the key material is not returned
      as part of the response.
    </description>
  </property>

  <property>
    <name>hadoop.kms.blacklist.ROLLOVER</name>
    <value>hdfs,foo</value>
    <description>
      Blacklist for rollover-key operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.acl.GET</name>
    <value>*</value>
    <description>
      ACL for get-key-version and get-current-key operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.blacklist.GET</name>
    <value>hdfs,foo</value>
    <description>
      Blacklist for get-key-version and get-current-key operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.acl.GET_KEYS</name>
    <value>*</value>
    <description>
      ACL for get-keys operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.blacklist.GET_KEYS</name>
    <value>hdfs,foo</value>
    <description>
      Blacklist for get-keys operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.acl.GET_METADATA</name>
    <value>*</value>
    <description>
      ACL for get-key-metadata and get-keys-metadata operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.blacklist.GET_METADATA</name>
    <value>hdfs,foo</value>
    <description>
      Blacklist for get-key-metadata and get-keys-metadata operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.acl.SET_KEY_MATERIAL</name>
    <value>*</value>
    <description>
      Complementary ACL for CREATE and ROLLOVER operations to allow the
      client to provide the key material when creating or rolling a key.
    </description>
  </property>

  <property>
    <name>hadoop.kms.blacklist.SET_KEY_MATERIAL</name>
    <value>hdfs,foo</value>
    <description>
      Complementary blacklist for CREATE and ROLLOVER operations to prevent
      the client from providing the key material when creating or rolling a key.
    </description>
  </property>

  <property>
    <name>hadoop.kms.acl.GENERATE_EEK</name>
    <value>*</value>
    <description>
      ACL for generateEncryptedKey CryptoExtension operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.blacklist.GENERATE_EEK</name>
    <value>hdfs,foo</value>
    <description>
      Blacklist for generateEncryptedKey CryptoExtension operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.acl.DECRYPT_EEK</name>
    <value>*</value>
    <description>
      ACL for decryptEncryptedKey CryptoExtension operations.
    </description>
  </property>

  <property>
    <name>hadoop.kms.blacklist.DECRYPT_EEK</name>
    <value>hdfs,foo</value>
    <description>
      Blacklist for decryptEncryptedKey CryptoExtension operations.
    </description>
  </property>
</configuration>
KMS supports access control for all non-read operations at the key level. All key access operations are classified as:
- MANAGEMENT - createKey, deleteKey, rolloverNewVersion
- GENERATE_EEK - generateEncryptedKey, warmUpEncryptedKeys
- DECRYPT_EEK - decryptEncryptedKey
- READ - getKeyVersion, getKeyVersions, getMetadata, getKeysMetadata, getCurrentKey
- ALL - all of the above
These can be defined in the KMS etc/hadoop/kms-acls.xml as follows:
For all keys for which a key access has not been explicitly configured, it is possible to configure a default key access control for a subset of the operation types.
It is also possible to configure a “whitelist” key ACL for a subset of the operation types. The whitelist key ACL grants access to the key, in addition to the explicit or default per-key ACL. That is, if no per-key ACL is explicitly set, a user will be granted access if they are present in the default per-key ACL or the whitelist key ACL. If a per-key ACL is explicitly set, a user will be granted access if they are present in the per-key ACL or the whitelist key ACL.
If no ACL is configured for a specific key AND no default ACL is configured AND no whitelist key ACL is configured for the requested operation, then access will be DENIED.
NOTE: The default and whitelist key ACLs do not support the ALL operation qualifier.
<property>
  <name>key.acl.testKey1.MANAGEMENT</name>
  <value>*</value>
  <description>
    ACL for create-key, deleteKey and rolloverNewVersion operations.
  </description>
</property>

<property>
  <name>key.acl.testKey2.GENERATE_EEK</name>
  <value>*</value>
  <description>
    ACL for generateEncryptedKey operations.
  </description>
</property>

<property>
  <name>key.acl.testKey3.DECRYPT_EEK</name>
  <value>admink3</value>
  <description>
    ACL for decryptEncryptedKey operations.
  </description>
</property>

<property>
  <name>key.acl.testKey4.READ</name>
  <value>*</value>
  <description>
    ACL for getKeyVersion, getKeyVersions, getMetadata, getKeysMetadata,
    getCurrentKey operations.
  </description>
</property>

<property>
  <name>key.acl.testKey5.ALL</name>
  <value>*</value>
  <description>
    ACL for ALL operations.
  </description>
</property>

<property>
  <name>whitelist.key.acl.MANAGEMENT</name>
  <value>admin1</value>
  <description>
    Whitelist ACL for MANAGEMENT operations for all keys.
  </description>
</property>

<!--
  A key ACL is defined for 'testKey3'. Since a 'whitelist' key ACL is also
  defined for DECRYPT_EEK, in addition to admink3, admin1 can also perform
  DECRYPT_EEK operations on 'testKey3'.
-->
<property>
  <name>whitelist.key.acl.DECRYPT_EEK</name>
  <value>admin1</value>
  <description>
    Whitelist ACL for DECRYPT_EEK operations for all keys.
  </description>
</property>

<property>
  <name>default.key.acl.MANAGEMENT</name>
  <value>user1,user2</value>
  <description>
    Default ACL for MANAGEMENT operations for all keys that are not
    explicitly defined.
  </description>
</property>

<property>
  <name>default.key.acl.GENERATE_EEK</name>
  <value>user1,user2</value>
  <description>
    Default ACL for GENERATE_EEK operations for all keys that are not
    explicitly defined.
  </description>
</property>

<property>
  <name>default.key.acl.DECRYPT_EEK</name>
  <value>user1,user2</value>
  <description>
    Default ACL for DECRYPT_EEK operations for all keys that are not
    explicitly defined.
  </description>
</property>

<property>
  <name>default.key.acl.READ</name>
  <value>user1,user2</value>
  <description>
    Default ACL for READ operations for all keys that are not explicitly
    defined.
  </description>
</property>
KMS supports delegation tokens to authenticate to the key providers from processes without Kerberos credentials.
KMS delegation token authentication extends the default Hadoop authentication. As with Hadoop authentication, KMS delegation tokens must not be fetched or renewed using delegation token authentication. See the Hadoop Auth page for more details.
Additionally, the KMS delegation token secret manager can be configured with the following properties:
<property>
  <name>hadoop.kms.authentication.delegation-token.update-interval.sec</name>
  <value>86400</value>
  <description>
    How often the master key is rotated, in seconds. Default value 1 day.
  </description>
</property>

<property>
  <name>hadoop.kms.authentication.delegation-token.max-lifetime.sec</name>
  <value>604800</value>
  <description>
    Maximum lifetime of a delegation token, in seconds. Default value 7 days.
  </description>
</property>

<property>
  <name>hadoop.kms.authentication.delegation-token.renew-interval.sec</name>
  <value>86400</value>
  <description>
    Renewal interval of a delegation token, in seconds. Default value 1 day.
  </description>
</property>

<property>
  <name>hadoop.kms.authentication.delegation-token.removal-scan-interval.sec</name>
  <value>3600</value>
  <description>
    Scan interval to remove expired delegation tokens.
  </description>
</property>
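As a hedged sketch of how a Kerberos-authenticated process fetches KMS delegation tokens for later use by processes without Kerberos credentials (the renewer name 'yarn' is illustrative):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension;
import org.apache.hadoop.crypto.key.KeyProviderFactory;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class KmsDelegationTokenExample {
  public static void main(String[] args) throws Exception {
    KeyProvider provider = KeyProviderFactory.get(
        new URI("kms://http@localhost:16000/kms"), new Configuration());
    KeyProviderDelegationTokenExtension dtExtension =
        KeyProviderDelegationTokenExtension
            .createKeyProviderDelegationTokenExtension(provider);
    // Fetch delegation tokens renewable by the (hypothetical) 'yarn' renewer;
    // they are placed into 'credentials' for shipping to other processes.
    Credentials credentials = new Credentials();
    Token<?>[] tokens = dtExtension.addDelegationTokens("yarn", credentials);
    System.out.println(tokens.length + " token(s) fetched");
  }
}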
Multiple KMS instances may be used to provide high availability and scalability. Currently there are two approaches to supporting multiple KMS instances: running KMS instances behind a load-balancer/VIP, or using LoadBalancingKMSClientProvider.
In both approaches, KMS instances must be specially configured to work properly as a single logical service, because requests from the same client may be handled by different KMS instances. In particular, Kerberos Principals Configuration, HTTP Authentication Signature and Delegation Tokens require special attention.
Because KMS clients and servers communicate via a REST API over HTTP, a load-balancer or VIP may be used to distribute incoming traffic to achieve scalability and HA. In this mode, clients are unaware of multiple KMS instances on the server side.
An alternative to running multiple KMS instances behind a load-balancer or VIP is to use LoadBalancingKMSClientProvider. Using this approach, a KMS client (for example, an HDFS NameNode) is aware of multiple KMS instances and sends requests to them in a round-robin fashion. LoadBalancingKMSClientProvider is implicitly used when more than one URI is specified in hadoop.security.key.provider.path.
The following example in core-site.xml configures two KMS instances, kms01.example.com and kms02.example.com. The hostnames are separated by semi-colons, and all KMS instances must run on the same port.
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://https@kms01.example.com;kms02.example.com:16000/kms</value>
  <description>
    The KeyProvider to use when interacting with encryption keys used
    when reading and writing to an encryption zone.
  </description>
</property>
If a request to a KMS instance fails, clients retry with the next instance. The request is returned as failure only if all instances fail.
When KMS instances are behind a load-balancer or VIP, clients will use the hostname of the VIP. For Kerberos SPNEGO authentication, the hostname of the URL is used to construct the Kerberos service name of the server, HTTP/#HOSTNAME#. This means that all KMS instances must have a Kerberos service name with the load-balancer or VIP hostname.
In order to directly access a specific KMS instance, the KMS instance must also have a Kerberos service name with its own hostname. This is required for monitoring and admin purposes.
Both Kerberos service principal credentials (for the load-balancer/VIP hostname and for the actual KMS instance hostname) must be in the keytab file configured for authentication, and the principal name specified in the configuration must be '*'. For example:
<property>
  <name>hadoop.kms.authentication.kerberos.principal</name>
  <value>*</value>
</property>
NOTE: If using HTTPS, the SSL certificate used by the KMS instance must be configured to support multiple hostnames (see Java 7 keytool SAN extension support for details on how to do this).
KMS uses Hadoop Authentication for HTTP authentication. Hadoop Authentication issues a signed HTTP Cookie once the client has authenticated successfully. This HTTP Cookie has an expiration time, after which it will trigger a new authentication sequence. This is done to avoid triggering the authentication on every HTTP request of a client.
A KMS instance must verify the HTTP Cookie signatures signed by other KMS instances. To do this, all KMS instances must share the signing secret. Please see SignerSecretProvider Configuration for detailed description and configuration examples. Note that KMS configurations need to be prefixed with hadoop.kms.authentication, as shown in the example below.
This secret sharing can be done using a Zookeeper service which is configured in KMS with the following properties in the kms-site.xml:
<property>
  <name>hadoop.kms.authentication.signer.secret.provider</name>
  <value>zookeeper</value>
  <description>
    Indicates how the secret to sign the authentication cookies will be
    stored. Options are 'random' (default), 'file' and 'zookeeper'.
    If using a setup with multiple KMS instances, 'zookeeper' should be used.
    If using file, signature.secret.file should be configured and point to
    the secret file.
  </description>
</property>

<property>
  <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.path</name>
  <value>/hadoop-kms/hadoop-auth-signature-secret</value>
  <description>
    The Zookeeper ZNode path where the KMS instances will store and retrieve
    the secret from. All KMS instances that need to coordinate should point
    to the same path.
  </description>
</property>

<property>
  <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.connection.string</name>
  <value>#HOSTNAME#:#PORT#,...</value>
  <description>
    The Zookeeper connection string, a comma-separated list of hostnames
    and ports.
  </description>
</property>

<property>
  <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type</name>
  <value>sasl</value>
  <description>
    The Zookeeper authentication type, 'none' (default) or 'sasl' (Kerberos).
  </description>
</property>

<property>
  <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.keytab</name>
  <value>/etc/hadoop/conf/kms.keytab</value>
  <description>
    The absolute path for the Kerberos keytab with the credentials to
    connect to Zookeeper.
  </description>
</property>

<property>
  <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.principal</name>
  <value>kms/#HOSTNAME#</value>
  <description>
    The Kerberos service principal used to connect to Zookeeper.
  </description>
</property>
Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation tokens too.
Under HA, a KMS instance must verify the delegation token issued by another KMS instance by checking the shared secret used to sign the delegation token. To do this, all KMS instances must be able to retrieve the shared secret from ZooKeeper.
Please see the examples given in the HTTP Authentication section above to configure ZooKeeper for secret sharing.
REQUEST:
POST http://HOST:PORT/kms/v1/keys
Content-Type: application/json

{
  "name"        : "<key-name>",
  "cipher"      : "<cipher>",
  "length"      : <length>,        //int
  "material"    : "<material>",    //base64
  "description" : "<description>"
}
RESPONSE:
201 CREATED
LOCATION: http://HOST:PORT/kms/v1/key/<key-name>
Content-Type: application/json

{
  "name"     : "versionName",
  "material" : "<material>"    //base64, not present without GET ACL
}
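As an illustrative client for this endpoint, here is a minimal Java sketch using only the JDK's HttpURLConnection. It assumes an unsecured KMS with pseudo authentication via the user.name query parameter; the key name and user are hypothetical:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class KmsRestCreateKey {
  public static void main(String[] args) throws Exception {
    // On a Kerberized KMS this request would need SPNEGO or a delegation token.
    URL url = new URL("http://localhost:16000/kms/v1/keys?user.name=exampleuser");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    String body =
        "{\"name\":\"testKey1\",\"cipher\":\"AES/CTR/NoPadding\",\"length\":128}";
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println(conn.getResponseCode());  // expect 201 on success
  }
}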
REQUEST:
POST http://HOST:PORT/kms/v1/key/<key-name>
Content-Type: application/json

{
  "material" : "<material>"
}
RESPONSE:
200 OK Content-Type: application/json { "name" : "versionName", "material" : "<material>", //base64, not present without GET ACL }
REQUEST:
GET http://HOST:PORT/kms/v1/key/<key-name>/_metadata
RESPONSE:
200 OK Content-Type: application/json { "name" : "<key-name>", "cipher" : "<cipher>", "length" : <length>, //int "description" : "<description>", "created" : <millis-epoc>, //long "versions" : <versions> //int }
REQUEST:
GET http://HOST:PORT/kms/v1/key/<key-name>/_currentversion
RESPONSE:
200 OK Content-Type: application/json { "name" : "versionName", "material" : "<material>", //base64 }
REQUEST:
GET http://HOST:PORT/kms/v1/key/<key-name>/_eek?eek_op=generate&num_keys=<number-of-keys-to-generate>
RESPONSE:
200 OK Content-Type: application/json [ { "versionName" : "encryptionVersionName", "iv" : "<iv>", //base64 "encryptedKeyVersion" : { "versionName" : "EEK", "material" : "<material>", //base64 } }, { "versionName" : "encryptionVersionName", "iv" : "<iv>", //base64 "encryptedKeyVersion" : { "versionName" : "EEK", "material" : "<material>", //base64 } }, ... ]
REQUEST:
POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?eek_op=decrypt
Content-Type: application/json

{
  "name"     : "<key-name>",
  "iv"       : "<iv>",          //base64
  "material" : "<material>"     //base64
}
RESPONSE:
200 OK Content-Type: application/json { "name" : "EK", "material" : "<material>", //base64 }
REQUEST:
GET http://HOST:PORT/kms/v1/keyversion/<version-name>
RESPONSE:
200 OK Content-Type: application/json { "name" : "versionName", "material" : "<material>", //base64 }
REQUEST:
GET http://HOST:PORT/kms/v1/key/<key-name>/_versions
RESPONSE:
200 OK Content-Type: application/json [ { "name" : "versionName", "material" : "<material>", //base64 }, { "name" : "versionName", "material" : "<material>", //base64 }, ... ]
REQUEST:
GET http://HOST:PORT/kms/v1/keys/names
RESPONSE:
200 OK Content-Type: application/json [ "<key-name>", "<key-name>", ... ]
REQUEST:

GET http://HOST:PORT/kms/v1/keys/metadata?key=<key-name>&key=<key-name>,...
RESPONSE:
200 OK Content-Type: application/json [ { "name" : "<key-name>", "cipher" : "<cipher>", "length" : <length>, //int "description" : "<description>", "created" : <millis-epoc>, //long "versions" : <versions> //int }, { "name" : "<key-name>", "cipher" : "<cipher>", "length" : <length>, //int "description" : "<description>", "created" : <millis-epoc>, //long "versions" : <versions> //int }, ... ]