Let's say that our main database has unfortunately gone down. How can we handle this? We can create a replica of the main database that acts as a standby. Even though all reads and writes happen on the main database, every write operation is applied to the replica synchronously. The idea is that the replica takes over when the main database goes down, and when the main database comes back up, the two can swap roles: the recovered instance catches up on the missed writes and becomes the new standby.
The most important point is that the replica is always updated synchronously whenever there is a write operation on the main database. If the write operation fails on the replica, the write should not complete on the main database either. This improves the availability of the database in your system.
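The write-both-or-fail rule above can be sketched as follows. This is a minimal in-memory illustration, not a real database client: `SyncReplicatedStore`, its dicts, and the `primary_down` flag are all made-up names for the sake of the example.

```python
class ReplicationError(Exception):
    pass

class SyncReplicatedStore:
    """Toy store with a synchronously updated standby replica."""

    def __init__(self):
        self.primary = {}
        self.replica = {}

    def write(self, key, value):
        # Apply to the replica first; if that fails, the primary
        # write never happens, so the two copies cannot diverge.
        try:
            self.replica[key] = value
        except Exception as exc:
            raise ReplicationError("replica write failed") from exc
        self.primary[key] = value

    def read(self, key, primary_down=False):
        # On primary failure, the standby replica serves reads.
        source = self.replica if primary_down else self.primary
        return source[key]

store = SyncReplicatedStore()
store.write("user:1", "alice")
print(store.read("user:1"))                     # served by the primary
print(store.read("user:1", primary_down=True))  # failover to the replica
```

A real system (for example PostgreSQL with synchronous replication) makes the same trade: the write acknowledges only after the standby has it, which costs write latency but means failover loses no acknowledged data.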
Another interesting use case for replication is when a system (say LinkedIn) has many users in two different countries, the US and India, which sit in two different geographical zones. When a user posts on LinkedIn, followers in both countries should be able to see the post. Assuming the posting user is in the US region, users in the US can view the post with very low latency, but users in India may see some delay. To avoid this we can use database replication with two databases: one in the US (the main database) and a second one in India. The database in India is synced asynchronously (say every 5 or 10 minutes) with the main database in the US, so users in India also get low-latency reads from their local copy. This only works when asynchronous sync is acceptable for that part of the system: for LinkedIn posts an async sync might be fine, but for, say, a stock brokerage application, we would likely need synchronous updates.
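The trade-off above, replication lag in exchange for local reads, can be shown with a tiny sketch. The dicts and `sync_replica` function are hypothetical stand-ins; a real deployment would ship a replication log between regions rather than copy whole datasets.

```python
# US main database and the India replica, modeled as plain dicts.
us_main = {"post:1": "hello from US"}
india_replica = {}

def sync_replica():
    # One asynchronous sync cycle. In production this might run
    # every 5-10 minutes on a timer or be driven by a change log.
    india_replica.update(us_main)

# Before the sync cycle runs, readers in India see stale data:
print(india_replica.get("post:1"))  # None (not yet replicated)

sync_replica()

# After the cycle, the post is visible locally in India:
print(india_replica.get("post:1"))  # "hello from US"
```

The window where the replica returns `None` is exactly the replication lag; the design question is whether the product (a social feed vs. a brokerage ledger) can tolerate it.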
To solve the problem we can split up the data using a hash function, one that promises uniformity, to determine which shard each piece of data is written to and read from. You cannot casually change the hash function here, because it determines which shard each piece of data lives on, and changing the function would change those assignments. This is where consistent hashing helps: when adding another shard, it minimizes the number of pieces of data that have to be migrated from the existing shards to the new one. But if one of the existing database shards goes down, consistent hashing does not help; for that we need a replica of the shard to bring the data back up. In the end, it is up to the designer to find ways to split up the data, so put some thought into it.
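A minimal consistent-hash ring might look like the sketch below. The names (`ShardRing`, `shard_for`) are illustrative, not from any particular library: each shard is placed at several points on a ring, a key maps to the first shard clockwise from its hash, and adding a shard only relocates the keys that fall between the new points and their neighbors.

```python
import bisect
import hashlib

def _hash(value: str) -> int:
    # MD5 is used here only for a uniform, stable hash, not for security.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ShardRing:
    def __init__(self, shards, vnodes=100):
        self.ring = []  # sorted list of (hash, shard) points on the ring
        for shard in shards:
            self.add_shard(shard, vnodes)

    def add_shard(self, shard, vnodes=100):
        # Virtual nodes spread each shard across the ring for uniformity.
        for i in range(vnodes):
            bisect.insort(self.ring, (_hash(f"{shard}#{i}"), shard))

    def shard_for(self, key):
        h = _hash(key)
        idx = bisect.bisect(self.ring, (h, ""))
        return self.ring[idx % len(self.ring)][1]  # wrap around the ring

ring = ShardRing(["shard-0", "shard-1", "shard-2"])
keys = [f"user:{i}" for i in range(1000)]
before = {k: ring.shard_for(k) for k in keys}

ring.add_shard("shard-3")
after = {k: ring.shard_for(k) for k in keys}
moved = sum(before[k] != after[k] for k in keys)
print(moved)  # roughly 1/4 of keys move, instead of nearly all with hash % N
```

With a plain `hash(key) % num_shards` scheme, going from 3 to 4 shards would remap about three quarters of all keys; the ring keeps the migration close to the theoretical minimum of 1/N.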
client -> server -> (reverse_proxy) -> shards. The logic to select the right shard can live in the reverse_proxy server. The reverse proxy receives the request from the server, which handles the request from the client, and some if-else (or hashing) logic in the reverse proxy selects the appropriate shard database.
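The shard-selection step in the proxy can be sketched like this. `SHARD_URLS` and `route_request` are made-up names for illustration; a real reverse proxy (nginx, HAProxy, Envoy) would express the same idea in its own configuration language.

```python
import hashlib

# Hypothetical shard endpoints the reverse proxy can forward to.
SHARD_URLS = {
    0: "http://shard-0.internal:5432",
    1: "http://shard-1.internal:5432",
    2: "http://shard-2.internal:5432",
}

def route_request(user_id: str) -> str:
    # A uniform hash of the routing key replaces a hand-written
    # if-else chain and keeps each user pinned to one shard.
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return SHARD_URLS[h % len(SHARD_URLS)]

print(route_request("user:42"))  # the same user always routes to the same shard
```

Keeping this logic in one place (the proxy) means application servers stay shard-unaware, and a resharding change only touches the routing layer.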