Backing Up Elasticsearch
As with any software that stores data, it is important to back that data up. Elasticsearch is a data store with exceptionally good search capabilities.
In Elasticsearch, data is stored in indices, so you can back up either the whole cluster or just the indices you want. Elasticsearch provides a dedicated API for this: the snapshot API.
Elasticsearch is a distributed data store, so its data is spread across all the nodes of the cluster, which may sit in different locations. For Elasticsearch to back itself up, there must be a storage location shared among all of its nodes.
There are different ways to set up such a shared location; common examples are Amazon S3 and a Network File System (NFS) mount.
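One detail worth noting for the shared file system case: Elasticsearch will refuse to register an "fs" repository unless the shared path is whitelisted on every node via the path.repo setting. A minimal sketch, assuming the mount point /mount/backups used below:

```
# elasticsearch.yml, on every node of the cluster
# (the mount path is the example path from this article)
path.repo: ["/mount/backups"]
```

Each node needs a restart after this setting is added before the repository registration below will succeed.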
First, you have to create a repository where the backups will be stored.
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_backup"
  }
}
Here /mount/backups is the shared location, and my_backup is the name of our repository.
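As a sketch only, the same registration can be scripted. This Python snippet merely builds the HTTP method, URL, and JSON body for the call above; the cluster address http://localhost:9200 and the helper name register_repo_request are assumptions, and actually sending the request is left to whatever HTTP client you use.

```python
import json

# Assumed cluster address; adjust for your deployment.
ES_URL = "http://localhost:9200"

def register_repo_request(repo_name, location):
    """Build the PUT request that registers a shared-filesystem
    ("fs") snapshot repository, mirroring the call above."""
    url = f"{ES_URL}/_snapshot/{repo_name}"
    body = json.dumps({"type": "fs", "settings": {"location": location}})
    return "PUT", url, body

method, url, body = register_repo_request("my_backup", "/mount/backups/my_backup")
print(method, url)
print(body)
```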
Now let's take a backup.
1. PUT _snapshot/my_backup/snapshot_1
This call backs up all open indices and stores the snapshot in the my_backup repository.
2. PUT _snapshot/my_backup/snapshot_2
{
  "indices": "index_1,index_2"
}
This API call backs up only the specified indices.
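Both snapshot calls above follow the same shape, so they can be sketched with one helper. As before, this only builds the request rather than sending it, and the cluster address and function name are assumptions:

```python
import json

ES_URL = "http://localhost:9200"  # assumed cluster address

def create_snapshot_request(repo, snapshot, indices=None):
    """Build the PUT request that creates a snapshot.

    With indices=None the snapshot covers all open indices
    (like snapshot_1); passing a comma-separated string limits
    it to those indices (like snapshot_2)."""
    url = f"{ES_URL}/_snapshot/{repo}/{snapshot}"
    body = json.dumps({"indices": indices}) if indices else ""
    return "PUT", url, body

print(create_snapshot_request("my_backup", "snapshot_1"))
print(create_snapshot_request("my_backup", "snapshot_2", "index_1,index_2"))
```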
Snapshots in Elasticsearch are incremental: each new snapshot stores only the data that previous snapshots in the repository have not already stored. So if you back up your cluster five times, the repository size will not be five times the size of your data.