There are similarities between OneData and iRODS. iRODS uses a PostgreSQL database for metadata storage to complement the data storage, and a rule engine to do things like manage the number of replicas. An iRODS deployment is typically institution-level, with an authentication layer controlling access to things. OneData, by contrast, is designed from the ground up for data distribution at cross-institutional scale. Metadata is stored in Couchbase, an eventually-consistent distributed database, and that eventual consistency carries across to OneData as a whole: there's no locking, for instance.
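To give a flavour of the kind of policy a rule engine enforces, here's a toy Python sketch of replica-count management. This is not iRODS' actual rule language (which is its own DSL); the resource names and the catalog structure are invented purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DataObject:
    path: str
    replicas: list[str] = field(default_factory=list)  # resources holding a copy

def enforce_replica_policy(obj: DataObject, resources: list[str],
                           min_replicas: int = 2) -> None:
    """Top up an object's replicas until the policy minimum is met.

    A rule engine fires logic like this on events (ingest, periodic scan);
    here we just pick the first resources that don't yet hold a copy.
    """
    for resc in resources:
        if len(obj.replicas) >= min_replicas:
            break
        if resc not in obj.replicas:
            obj.replicas.append(resc)  # stands in for an actual replication call

obj = DataObject("/zone/home/alice/example.dat", replicas=["rescA"])
enforce_replica_policy(obj, resources=["rescA", "rescB", "rescC"])
print(obj.replicas)  # ['rescA', 'rescB']
```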
The components of OneData are, as Bruce says, a OneProvider, which provides the actual storage (on Lustre, Ceph, S3, Swift, etc.); multiple OneProviders can serve a Space, which is basically your filesystem. Then the OneZone links to various authentication providers and is the top-level interface: you log in to OneZone, create a space, and add providers to the space. There is also a rule interface you can write against, but it is emphasised less than iRODS' rule engine.
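A rough sketch of how these pieces relate, again in Python with made-up names; this models the structure described above, not OneData's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class OneProvider:
    name: str
    backend: str  # e.g. "Lustre", "Ceph", "S3", "Swift"

@dataclass
class Space:
    """A space is the filesystem users see; providers supply its storage."""
    name: str
    providers: list[OneProvider] = field(default_factory=list)

@dataclass
class OneZone:
    """Top-level interface: identity plus space management."""
    auth_providers: list[str]
    spaces: dict[str, Space] = field(default_factory=dict)

    def create_space(self, name: str) -> Space:
        self.spaces[name] = Space(name)
        return self.spaces[name]

zone = OneZone(auth_providers=["Google", "GitHub"])
space = zone.create_space("climate-data")
space.providers.append(OneProvider("site-a", backend="Ceph"))
space.providers.append(OneProvider("site-b", backend="S3"))
```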
The design of OneData is very much clustered / shared-nothing.
The thing that is replicated across a space by default is the metadata view: both filesystem metadata and any extended metadata you choose to add. Data, by default, is replicated on read. Remember, there's no locking, so if two remote users hit the same file blocks at the same time, it's a problem. In terms of resources, the Couchbase install is I/O-heavy and is apparently best backed by SSD. For replication they use their own protocol, which is apparently fast but not as fast as GridFTP.
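Here's a minimal Python sketch of the replicate-on-read idea: a provider only pulls file blocks from a remote provider the first time a local reader asks for them. Nothing here is OneData code, and the names are invented; it just illustrates the behaviour, including why concurrent writes without locking can leave replicas in conflict.

```python
class Provider:
    """A provider holds a partial local copy of a space's file blocks."""

    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[tuple[str, int], bytes] = {}  # (path, block_no) -> data

    def read(self, path: str, block_no: int, remote: "Provider") -> bytes:
        key = (path, block_no)
        if key not in self.blocks:
            # Replicate on read: fetch the block from the remote provider
            # the first time a local user touches it.
            self.blocks[key] = remote.blocks[key]
        return self.blocks[key]

    def write(self, path: str, block_no: int, data: bytes) -> None:
        # No locking: a concurrent write to the same block at another
        # provider is not coordinated, so copies can diverge until the
        # eventually-consistent metadata layer reconciles them.
        self.blocks[(path, block_no)] = data

site_a, site_b = Provider("site-a"), Provider("site-b")
site_a.write("/space/file", 0, b"hello")
print(site_b.read("/space/file", 0, remote=site_a))  # b'hello', copied on first read
```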