When it comes to data recovery, current best practice in most organizations is based on two key factors: time and selectivity. In short, IT departments respond to a disaster recovery situation as swiftly as they can, restoring the most business-critical operational data first, and, at present, this does not include big data.
Why is big data ignored?
Viewed by many as non-essential to operational processes, big data is overlooked in disaster recovery plans essentially for being just too big. The volumes on which big data is stored are often several orders of magnitude larger than their mission-critical equivalents, and many organizations contend that backing these files up would monopolize their I/O channels.
However, businesses are beginning to realize the importance of big data recovery; firms in Singapore, for example, are turning to disaster recovery centres that can cope with their volumes of big data.
“Big data isn’t essential”
Whilst many companies do not view big data as important, there are organizations that rely on these large snapshots to inform and support critical business decisions, and, as more businesses employ big data analysts to improve product turnaround and customer service, the importance of big data is set to increase. In fact, big data analyst (or data scientist) is seen as one of the top career moves in IT for 2016.
What are the options?
Fortunately, the traditional view of backing up big data is flawed; there are plenty of ways that big data can be backed up successfully without negatively impacting other disaster recovery operations.
Firstly, big data is often a historical data set which remains largely static. Though it can represent a significant percentage of your storage volume, backing it up is a one-off process for each snapshot.
Backing up big data can take several forms, including data replication (keeping local and remote copies of the database or disk drives) and virtual snapshots – a hardware solution that suspends write operations whilst a virtual backup is taken of the entire system. This isn’t like iPhone data recovery; the process could literally take days… or could it?
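As a rough illustration of the snapshot-plus-replication approach, here is a minimal sketch using HDFS, where a snapshot is a near-instant metadata operation rather than a hardware write-freeze. The `hdfs dfs -createSnapshot` and `hadoop distcp` commands are standard Hadoop tooling, but the cluster paths and names below are hypothetical, and the source directory is assumed to have already been made snapshottable (`hdfs dfsadmin -allowSnapshot`):

```python
import subprocess
from datetime import datetime

# Hypothetical paths: a snapshottable HDFS directory and a remote DR cluster.
SOURCE_DIR = "/data/warehouse"                            # big data volume on the primary cluster
REMOTE_URI = "hdfs://dr-cluster:8020/backups/warehouse"   # disaster recovery site

def run(cmd):
    """Run a shell command and raise if it fails."""
    subprocess.run(cmd, check=True)

def backup_snapshot():
    # 1. Take a point-in-time snapshot. In HDFS this is a cheap metadata
    #    operation; writes to the live directory are not blocked.
    name = "backup-" + datetime.now().strftime("%Y%m%d-%H%M%S")
    run(["hdfs", "dfs", "-createSnapshot", SOURCE_DIR, name])

    # 2. Replicate the frozen snapshot image to the remote DR cluster.
    #    -update copies only files that changed since the last run, so a
    #    largely static big data set is effectively a one-off transfer.
    run(["hadoop", "distcp", "-update",
         f"{SOURCE_DIR}/.snapshot/{name}", REMOTE_URI])

if __name__ == "__main__":
    backup_snapshot()
```

Because `-update` skips unchanged files, re-running the job against a largely static data set costs little after the first transfer, and distcp’s per-map bandwidth can be throttled with its `-bandwidth` option so the copy does not monopolize I/O channels.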
So, whilst there are feasible ways to ensure that big data is backed up, the issue of time still remains: can big data be restored within the recovery time objective (RTO)?
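Whether that is achievable is, at heart, simple arithmetic: divide the data volume by the realistic restore throughput and compare the result with the RTO. A back-of-the-envelope sketch (all figures illustrative):

```python
# Back-of-the-envelope restore-time estimate (illustrative figures only).
DATA_TB = 200            # size of the big data volume to restore
LINK_GBPS = 10           # effective network throughput, gigabits per second
RTO_HOURS = 24           # recovery time objective agreed with the business

bytes_total = DATA_TB * 1e12
bytes_per_sec = LINK_GBPS * 1e9 / 8      # convert gigabits to bytes
restore_hours = bytes_total / bytes_per_sec / 3600

verdict = "within" if restore_hours <= RTO_HOURS else "exceeds"
print(f"Estimated restore time: {restore_hours:.1f} h "
      f"({verdict} the {RTO_HOURS} h RTO)")
```

At these figures a restore over the wire takes nearly two days, which is precisely why a standing replica at the recovery site beats shipping the data after the event.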
Automation
The disaster recovery process is now largely automated in order to minimize human intervention. This is, of course, to reduce the time taken to restore an operational system, but with big data the recovery can still take considerably longer. Ensuring that big data applications use smart DBA methods can reduce this time, but it is essential that the automation process is practiced, and practiced regularly.
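What “automated and regularly practiced” might look like in practice is an ordered, scriptable runbook that can be rehearsed against the recovery site on a schedule. The sketch below reuses the hypothetical Hadoop setup from earlier; the step list is illustrative, not a complete recovery procedure:

```python
import subprocess
import sys

# Hypothetical ordered runbook: each step is a command the DR automation
# executes in sequence; any failure aborts the run for human intervention.
RUNBOOK = [
    ["hdfs", "dfsadmin", "-safemode", "leave"],                       # reopen the cluster
    ["hadoop", "distcp", "-update",
     "hdfs://dr-cluster:8020/backups/warehouse", "/data/warehouse"],  # pull latest copy
    ["hdfs", "fsck", "/data/warehouse"],                              # verify integrity
]

def recover():
    for step in RUNBOOK:
        print("running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            sys.exit(f"step failed: {' '.join(step)}")
    print("recovery complete; run application smoke tests before go-live")

if __name__ == "__main__":
    recover()
```

Running the same script in every rehearsal also yields a measured recovery time to hold against the RTO, rather than an estimate.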
Summary
Big data is becoming a far more important element of business operations, and ensuring that these large volumes are incorporated into disaster recovery planning is essential.