DNS round-robin typically ties a client host to a single node because the resolved IP address is cached by the OS, so it provides neither load balancing nor redundancy. We do not recommend using DNS round-robin to distribute requests among ECS nodes. It is possible, but in most situations it provides no real benefit. Instead, please use a real load balancer or client-side load balancing (as in our Java object client).
We also do not recommend using virtual-hosted buckets. Instead, please use path-style bucket requests. This access pattern is more flexible and easier to maintain.
That said, if you still wish to proceed with this configuration, there is a guide you can follow to configure DNS for vhost buckets. You should be able to add a round-robin pool to both DNS entries.
I should qualify my comment about vhost buckets: you should always prefer path-style buckets whenever possible. That's not to say vhost buckets are bad or won't work, and there are even a few cases where vhost is necessary (e.g., public objects that require a namespace baseURL). However, when given the choice, path-style buckets are more flexible and generally cause less confusion.
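To make the distinction concrete, here is a small sketch of how the same object is addressed in each style. The endpoint, bucket, and object names are hypothetical examples, not real ECS defaults:

```python
# Build the two S3 request URL styles for the same bucket/object.
# Endpoint, bucket, and key names below are illustrative only.
def path_style_url(endpoint: str, bucket: str, key: str) -> str:
    # Path-style: the bucket appears in the URL path, so one DNS name
    # (and one SSL certificate) covers every bucket.
    return f"https://{endpoint}/{bucket}/{key}"

def vhost_style_url(endpoint: str, bucket: str, key: str) -> str:
    # Virtual-hosted style: the bucket becomes part of the host name,
    # which requires wildcard DNS (and a wildcard certificate).
    return f"https://{bucket}.{endpoint}/{key}"

print(path_style_url("ecs.example.com", "mybucket", "photo.jpg"))
# https://ecs.example.com/mybucket/photo.jpg
print(vhost_style_url("ecs.example.com", "mybucket", "photo.jpg"))
# https://mybucket.ecs.example.com/photo.jpg
```

The path-style form is why it is easier to maintain: nothing about DNS or certificates changes when a new bucket is created.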
Please be aware that the choice of path-style vs. vhost-style requests is made by the application developer (the client). When configuring an ECS cluster, if you wish to support a broad range of applications (suppose you are an ISP), you should provide the option of vhost buckets (and the resulting DNS/SSL configuration).
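As an example of how that client-side choice looks in practice, the AWS CLI and SDKs let the developer force path-style addressing in the client configuration (the profile name below is illustrative):

```
# ~/.aws/config -- force path-style requests for this profile
[profile ecs]
s3 =
    addressing_style = path
```

With `addressing_style = virtual` instead, the same client would send vhost-style requests, which only work if the cluster side has the wildcard DNS/SSL setup in place.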
In DNS round-robin, the next IP address in the pool is returned for each DNS query of the host name, and the client then connects directly to that node IP. However, because of caching in the OS layer, clients query for the IP only once and end up pinned to a specific node. There are no health checks to determine when a node is unavailable, so if a node goes down, any applications pinned to that node will fail. Because of the pinning, you will also end up with hot nodes (nodes hit by demanding applications), which increases the odds of failure and, in rare situations, unbalanced data placement.
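This pinning is what client-side load balancing avoids: instead of relying on the OS resolver cache, the client keeps the full pool of node IPs and rotates through it on every request. A minimal sketch (the node IPs are hypothetical):

```python
import itertools

# Client-side load balancing sketch: the client holds the whole pool
# of ECS node IPs and picks the next one per request, so no single
# node gets pinned. The addresses below are made-up examples.
NODE_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
_pool = itertools.cycle(NODE_IPS)

def next_node() -> str:
    """Return the node IP to use for the next request."""
    return next(_pool)

# Four consecutive requests are spread across the pool and wrap around:
print([next_node() for _ in range(4)])
# ['10.0.0.11', '10.0.0.12', '10.0.0.13', '10.0.0.11']
```

A real client (such as the Java object client mentioned above) would also refresh the pool periodically and drop nodes that fail to respond, which is exactly the health checking that plain DNS round-robin lacks.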
With an external (proxying) load balancer, all traffic is routed through the load balancer (so it must be able to support the anticipated load). Node selection is done for each request (each client will have its load distributed) and the load balancer will perform health checks and remove a node from the pool if it becomes unreachable. Thus, applications will continue to function even if a node goes down and the load (and data placement) is evenly distributed across the pool.
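For illustration, an external load balancer setup of this kind could be sketched in HAProxy as follows. The node IPs, the port, and the health-check path are assumptions for the sketch, not verified ECS defaults; check your ECS version's documentation for the actual S3 port and health endpoint:

```
# HAProxy sketch: per-request balancing with health checks.
frontend s3_in
    mode http
    bind *:9020
    default_backend ecs_s3

backend ecs_s3
    mode http
    balance roundrobin
    # Probe each node; unreachable nodes are removed from the pool.
    option httpchk GET /?ping
    server node1 10.0.0.11:9020 check
    server node2 10.0.0.12:9020 check
    server node3 10.0.0.13:9020 check
```

The `check` keyword is what gives you the behavior described above: a failed node stops receiving traffic automatically and rejoins the pool once its health check passes again.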
It would be good if ECS also provided a built-in load-balancing architecture like the one on Isilon, exposing a single master IP without the need for an external load balancer.