I know it’s not totally relevant, but I once convinced a company to run their log aggregators on 75 servers, each with 15 disks in RAID 0.
We relied on the app layer to make sure there were at least 3 copies of the data, and if a node’s array shat the bed, the rest of the cluster would heal and re-replicate what was lost. Once the DC people swapped the failed disk, automation rebuilt the array and added the host back into the cluster.
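To make the shape of that concrete, here's a rough sketch of what app-layer placement and healing can look like. Everything in it is hypothetical (the node names, the rendezvous hashing, the heal() helper) -- it's not the actual system, just the idea of keeping 3 copies of each chunk and re-replicating when a node's array drops out.

```python
import hashlib

NODES = [f"node{i:02d}" for i in range(75)]
RF = 3  # keep at least 3 copies of every chunk

def replica_nodes(chunk_id, nodes=NODES, rf=RF):
    """Pick rf distinct nodes for a chunk via rendezvous (highest-random-weight) hashing."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha1(f"{chunk_id}:{n}".encode()).hexdigest(),
        reverse=True,
    )
    return ranked[:rf]

def heal(chunk_locations, dead_node):
    """A node's RAID 0 array died: drop it everywhere and re-replicate under-copied chunks."""
    survivors = [n for n in NODES if n != dead_node]
    for chunk_id, holders in chunk_locations.items():
        holders.discard(dead_node)
        if len(holders) < RF:
            # copy from a surviving holder onto fresh nodes until we're back to RF copies
            for candidate in replica_nodes(chunk_id, nodes=survivors, rf=len(survivors)):
                if len(holders) >= RF:
                    break
                holders.add(candidate)
    return chunk_locations
```

The point of the rendezvous hashing here is just that any node can compute where a chunk belongs without a central directory, which is what lets the rebuilt host rejoin and start taking its share again.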
It was glorious - 75 servers each taking 1/75th of the read/write load, and each server splitting that further across 15 disks. Each query had the potential to have ~1,100 disks respond in concert, each with a tiny slice of the data you asked for. It was SO fast.
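The fan-out itself is easy to picture as scatter-gather. Again a hypothetical sketch (read_slice() is a stand-in for whatever actually pulls one disk's slice), just to show where the ~1,100-way parallelism comes from (75 × 15 = 1,125 spindles):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

SERVERS = 75
DISKS_PER_SERVER = 15  # 75 * 15 = 1,125 spindles in total

def read_slice(server, disk, query):
    """Stand-in for reading this one disk's tiny slice of the matching log data."""
    return [f"{query} hit on server{server:02d}/disk{disk:02d}"]

def scatter_gather(query):
    """Fan the query out to every (server, disk) pair and merge the partial results."""
    hits = []
    with ThreadPoolExecutor(max_workers=64) as pool:
        futures = [pool.submit(read_slice, s, d, query)
                   for s in range(SERVERS) for d in range(DISKS_PER_SERVER)]
        for fut in as_completed(futures):
            hits.extend(fut.result())
    return hits
```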