Big Data - Break Free from Legacy, Data Management Expert Advises Middle East Companies

Published May 20, 2012

There is no denying that businesses in the Middle East are under immense pressure to manage massive amounts of complex data. Information volumes are estimated to be growing at up to 80 percent year on year, and the biggest challenge comes from the dramatic increase in unstructured data from emerging sources – desktops and laptops, audio/visual files, images, databases, social media and a variety of other data types that are prominent in an organisation but frequently managed in ‘silos’.

This unrelenting growth is a major force driving the ‘Big Data’ debate, which is further compounded by the universal adoption of virtualisation, the rapid shift to cloud-enabled services, the influx of mobile computing devices, demand for 24x7 operations and increasing consolidation.

Data management expert Simon Gregory, Business Development Director at CommVault Systems, says that whilst Big Data brings a lot of good, offering new ways to create information with real business value, it also presents a new set of challenges for the IT department. Organisations struggle to keep pace with more demanding service levels for recovery and with shrinking backup windows, which often leads to overloaded networks and a tendency to turn to more costly alternatives. A fundamental issue is simply that there isn't enough time, resource or budget to manage, protect, index and retain massive amounts of unstructured data. The negative side effects of Big Data, which include risk, complexity and cost, clearly need to be met head on if the positive benefits are to win out.

Legacy solutions are not ‘fit for purpose’
Unfortunately, legacy data management methods and tools simply aren't up to the task of managing or controlling the data explosion. Because they were originally created to solve individual challenges, multiple products end up being deployed to manage backup, archive and analytics. The result is complex administration and information silos, which raise upgrade concerns and sharpen the debate around the cost of alternatives versus ongoing maintenance. A lack of reporting across these platforms ultimately reduces data visibility across an organisation and limits the ability to introduce effective archiving strategies.

Traditional solutions also have two stages for each protection operation – scan and collection. To perform backup, archive and file-analytics operations, each product must first scan the file system and then collect files or information from it. Synthetic fulls, de-duplication and VTL solutions may have been introduced to ease repository problems, but a lack of integration causes them to fall short in the longer term. On large file systems, the incremental scan can typically take more time than the actual data collection. Regularly scheduled full-protection operations then exceed backup windows and require heavy network and server resources to manage. It's a vicious circle.
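
To make that scan-versus-collection imbalance concrete, here is a minimal sketch in Python. It is illustrative only, not any vendor's actual code: even when only a handful of files have changed, the scan must still visit every entry in the tree, and each product that keeps its own catalogue repeats the whole walk.

```python
import os

def incremental_scan(root, catalogue):
    """Walk the entire tree to find changed files.

    Even if almost nothing has changed, every directory entry is
    still visited and stat()'d, which is why scan time on large
    file systems can exceed the time spent collecting data.
    """
    changed = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.stat(path).st_mtime
            except OSError:
                continue  # file vanished mid-scan
            if catalogue.get(path) != mtime:  # new or modified
                changed.append(path)
                catalogue[path] = mtime
    return changed

# Separate backup, archive and analytics products each keep their
# own catalogue, so this full walk is repeated per product.
```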

Convergence is the way forward
There is an alternative approach: adopt a unified data management strategy that collapses data collection operations into a single solution, copying, indexing and storing data in an intelligent, virtual repository that provides an efficient and scalable foundation for e-Discovery, data mining and retention. Such an approach also enables data analytics and reporting to be performed from the index, helping to classify data and implement archive policies that tier data to lower-cost media. This in turn reduces the total cost of ownership.
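
The sketch below illustrates the ‘collect once, serve many’ idea behind such a repository. The class and method names are assumptions made for this example, not a real product API: data is ingested and indexed in a single pass, and reporting then queries the index rather than re-scanning production systems.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Item:
    path: str
    size: int
    mtime: float

@dataclass
class VirtualRepository:
    """A single intelligent repository: data is copied and indexed
    once, then every consumer works from the shared index."""
    index: Dict[str, Item] = field(default_factory=dict)

    def ingest(self, item: Item, data: bytes) -> None:
        self.index[item.path] = item  # indexed once
        self._store(item, data)       # stored once

    def _store(self, item: Item, data: bytes) -> None:
        pass  # write to disk, tape or cloud; omitted in this sketch

def report_older_than(repo: VirtualRepository, cutoff: float) -> List[str]:
    """Reporting and e-Discovery query the index; they never trigger
    another scan of the production file systems."""
    return [i.path for i in repo.index.values() if i.mtime < cutoff]
```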

The advantages here are immediately clear. Built-in intelligent data collection and classification help to reduce scan times, which in turn allows companies to stay within incremental backup windows. A single pass of data collection for backup, archive and reporting also reduces server load and the number of operations. Integration, source-side de-duplication and synthetic full backups further reduce the network load, whilst a single index immediately breaks down the silos of information.
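
To show why source-side de-duplication eases the network load, here is a hypothetical sketch using fixed-size chunks (commercial products typically use variable, content-defined chunking). Data is hashed on the client, and only chunks the repository has never seen cross the wire:

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # fixed 4 MiB chunks, for simplicity

def dedup_upload(path, seen_hashes, send_chunk):
    """Hash each chunk at the source; transmit only unseen chunks.

    Chunks already in the repository cost one hash lookup instead
    of network bandwidth, which is how repeated full backups shrink
    to near-incremental traffic.
    """
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in seen_hashes:
                send_chunk(digest, chunk)  # only new data is sent
                seen_hashes.add(digest)
```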

Instead of simply moving the pain point, a converged solution creates a single process that has the potential to reduce the combined time typically required to backup, archive and report by more than 50 percent compared with traditional methods, and it delivers the simplified management tools required to affordably protect, manage and access data on systems that have become ‘too big’.

Whilst there are many ways to create Big Data, organisations that want to take control of the data mountain would be advised to adopt a ‘Copy Once Re-use Extensively’ (CORE) strategy if they want to manage Big Data cost-effectively in the long term. The key benefits of CORE are simple (a minimal sketch of the idea follows the list):
• Process data once
• Store data once
• Retain data once
• Search data from one place
• Centralise policy management
• Automate tiering of data while maintaining hardware and storage flexibility
• Synchronise data deletion and automate space reclamation
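
The following sketch shows how tiering and retention might be driven from the single index, in the spirit of CORE. The policy values and record fields are hypothetical, chosen for illustration only:

```python
import time

# Hypothetical policy values, for illustration only.
TIER_AFTER_DAYS = 30      # move to lower-cost media after 30 days
DELETE_AFTER_DAYS = 2555  # retain roughly 7 years, then reclaim space

def apply_core_policies(index, now=None):
    """Copy Once, Re-use Extensively: tiering and retention decisions
    are made against the single index, never by re-reading sources."""
    now = time.time() if now is None else now
    for path, rec in list(index.items()):
        age_days = (now - rec["mtime"]) / 86400
        if age_days > DELETE_AFTER_DAYS:
            del index[path]          # synchronised deletion, space reclamation
        elif age_days > TIER_AFTER_DAYS and rec["tier"] == "primary":
            rec["tier"] = "archive"  # automated tiering, hardware-agnostic
```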

There is no doubt that many organisations are having to walk a fine line between over-collection of data, which brings companies higher review costs, and under-collection, which presents them with the risk of missing key information, perhaps located in one of the emerging data sources - a critical issue in today’s world of information-on-demand, regulation and compliance.

The overall idea that all data sources, even those at the ‘edge’ of the network, could be accounted for without adding to the data mountain is a major reason to move to converged backup, archive and protection. Easing e-Discovery burdens was cited as the number one pressure point in the Forrester Research, Inc. ‘Global Message Archiving Online Survey’, above lowering storage costs and boosting application performance. I believe that convergence is absolutely the best way to take the pain out of finding key information in the ‘Big Data’ haystack.

What companies should focus on is a single platform that enables those working with the information to intelligently manage and protect enormous amounts of data across numerous applications, hypervisors, operating systems and infrastructures from one console. A policy-driven approach to protecting, storing and recovering vast amounts of data, whilst automating administration, is the surest way to maximise IT productivity and reduce overall support costs. Eliminating manual processes and seamlessly tiering data to physical, virtual and cloud storage helps to decrease administration costs whilst increasing operational efficiencies, enabling IT departments to ‘do more with less’.
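
As a closing illustration of what ‘policy-driven from a single console’ might look like, here is a hypothetical policy definition and scheduler. The platform names, fields and schedules are assumptions for the example, not a description of any actual product:

```python
# Hypothetical policies: administrators manage rules, not jobs.
POLICIES = [
    {
        "name": "virtual-machines",
        "sources": ["vmware", "hyper-v"],  # hypervisors
        "schedule": "incremental hourly, synthetic full weekly",
        "retention_days": 90,
        "tiers": ["disk", "cloud"],
    },
    {
        "name": "file-servers",
        "sources": ["windows", "linux"],
        "schedule": "incremental daily",
        "retention_days": 2555,  # roughly 7 years
        "tiers": ["disk", "tape"],
    },
]

def jobs_due(policies):
    """One scheduler expands every policy into concrete jobs, so a
    single console drives protection across all platforms."""
    for policy in policies:
        for source in policy["sources"]:
            yield (policy["name"], source, policy["schedule"])

for job in jobs_due(POLICIES):
    print(job)
```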

A single data store would empower businesses to streamline data preservation and eliminate data redundancy during the review process, which is now considered one of the major causes of skyrocketing data management costs. The ability to more easily navigate, search and mine data could fundamentally mean that Big Data is finally viewed as an asset to the business, not a hindrance.


