UK IT professionals have adopted a “Titanic mindset”, according to a study, failing to see the looming “iceberg” of their inadequate data recovery solutions.
Only 54% expressed confidence in their ability to recover data and mitigate downtime in a future disaster, even though 78% of the professionals surveyed said their organisation had lost data at some point in the past year, whether through system failure, human error or cyberattack.
Assurestor, a recovery solutions provider, surveyed over 250 senior IT professionals, including IT directors and CTOs, in UK organisations in July 2024. Those who had experienced a data loss were asked about its impact on their organisation, with 35% citing financial loss as the biggest consequence.
The findings corroborate a June report from Splunk showing that the world’s largest companies suffered losses of approximately $9,000 for every minute of system failure or service degradation. Contributing factors included direct revenue loss, decreased shareholder value, lost productivity and reputational damage.
SEE: 1/3 of companies suffered a SaaS data breach last year
The other two most cited impacts of data loss in the Assurestor report were implications for customer service (30%) and operational downtime (28%). Chillingly, 16% of respondents said a major data loss event would likely force their business to close.
The proliferation of sensitive data has contributed to the rise in data breaches in businesses. An August report from Perforce found that 74% of those handling sensitive data increased the amount of data stored in unsecured environments, such as development, testing, analytics, and AI/ML, over the past year.
UK IT professionals do not regularly test their data recovery processes
Despite the known and feared risks, IT leaders in the UK do not appear to be taking the necessary steps to mitigate them, such as regular data recovery testing. Only 5% test monthly, while 20% test once a year or less, according to the Assurestor report. Even among those who do test, 60% verify that their company’s data is fully recoverable and usable only once every six months.
“What we’re seeing is what we call a ‘Titanic mentality’ when it comes to data recovery,” said Stephen Young, CEO of Assurestor, in a press release. “Organisations believe they are unsinkable — until they are not.”
He cited the CrowdStrike outage and the British Library cyberattack as examples of how costly downtime and inadequate technology can be for organisations. The former cost Fortune 500 companies at least $5.4 billion in direct financial losses, while in the latter case “legacy infrastructure contributed to the severity of the impact”.
SEE: Downtime costs the world's largest companies $400 billion a year, according to Splunk
Young added: “The fact that only just over half of respondents think their data is recoverable is worrying – this figure should be much closer to 100%. Otherwise, how can you confidently inform your board and stakeholders of your ‘recovery readiness’?”
“Confidence comes from identifying a company’s realistic needs, without compromising on costs, and from conducting thorough and repeated testing.”
What is the main reason for the lack of data recovery planning? No one else seems to care
Assurestor's report identifies a key reason why businesses fail to prioritise their data recovery plans despite being aware of the risks: a lack of internal support.
Executives simply aren’t giving their IT teams enough resources: 39% of respondents cited a lack of internal expertise, 29% a lack of financial investment, and another 28% a lack of support from senior executives in this area.
“A lack of top-down support in the form of insufficient funding can foster a culture of complacency, even apathy,” Assurestor experts say. “If those tasked with protecting the business in the event of a data breach, attack or human error do not feel that threats are being taken seriously enough (or are not understood), their approach and attitude may reflect this.”
5 tips to support your data recovery process
Assurestor offered several recommendations to help organisations avoid the serious consequences of failing to improve their data recovery processes:
- Ensure that a recovery environment exists that allows for regular recovery testing but does not disrupt daily operations.
- Hire a recovery manager whose responsibilities include ensuring sufficient data recovery processes and technologies are in place and reporting on the company's recoverability status.
- Redefine how businesses think about “disaster” to include cyberattacks and ensure a backup plan is prioritized.
- Test data recovery plans and backup technologies monthly or as regularly as possible, and adapt them appropriately afterwards.
- Calculate how much downtime would cost your business and how much you can afford, then make sure your recovery plan offers enough protection; a rough calculation sketch follows this list.
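To make that last recommendation concrete, here is a minimal sketch of a downtime-cost estimate. It is not taken from the Assurestor or Splunk reports; the formula, figures and thresholds are illustrative assumptions to be replaced with your organisation's own numbers.

```python
# Illustrative downtime-cost estimate. All figures below are hypothetical
# placeholders; substitute your organisation's own numbers.

HOURS_PER_YEAR = 24 * 365

def downtime_cost_per_minute(annual_revenue: float,
                             hourly_payroll: float,
                             revenue_dependency: float) -> float:
    """Rough cost of one minute of downtime.

    annual_revenue:      yearly revenue in your currency
    hourly_payroll:      fully loaded payroll cost per hour of idled staff
    revenue_dependency:  share of revenue that stops when systems are down (0-1)
    """
    revenue_per_minute = annual_revenue / (HOURS_PER_YEAR * 60)
    payroll_per_minute = hourly_payroll / 60
    return revenue_per_minute * revenue_dependency + payroll_per_minute

if __name__ == "__main__":
    cost = downtime_cost_per_minute(
        annual_revenue=50_000_000,   # hypothetical: £50M/year
        hourly_payroll=2_500,        # hypothetical: £2,500/hour of idled staff
        revenue_dependency=0.6,      # hypothetical: 60% of revenue is system-dependent
    )
    # Compare the expected loss at your recovery time objective (RTO)
    # against what the business says it can afford to lose per incident.
    rto_minutes = 240            # hypothetical RTO: 4 hours
    affordable_loss = 10_000     # hypothetical tolerable loss per incident

    expected_loss = cost * rto_minutes
    print(f"Estimated downtime cost: ~£{cost:,.0f}/minute")
    print(f"Expected loss at a {rto_minutes}-minute RTO: ~£{expected_loss:,.0f}")
    if expected_loss > affordable_loss:
        print("RTO exceeds tolerable loss: the recovery plan needs tightening.")
```

Note that this sketch spreads revenue evenly across the year; a business with concentrated trading hours should weight the revenue_dependency figure, or model peak periods separately, before relying on the result.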
“Absolute system reliability and data recovery are non-negotiable,” Young insisted. “If there is even a hint of doubt, it is an open door to challenges. This uncertainty needs to be identified and addressed before disaster strikes.”