There’s a line going around that says “5 years of digital transformation in 1 year,” and there is so much truth to this. An MIT Technology Review Insights report reveals that more than three quarters of APAC organisations are stepping up their digital transformation and investing in digital and IT solutions.
The Singapore government has also added a slew of initiatives to help local businesses in their digitisation journey – this includes the Chief Technology Officer (CTO)-as-a-Service scheme, which includes a one-stop, self-help tool for SMEs to assess their digital needs and gaps.
Rapid digitisation, however, brings its own set of challenges. It can send companies into a digital tailspin, where organisations – especially their IT teams – struggle with system complexities, disparate systems and sources of data, and a ‘data tsunami’, without knowing how to deal with them.
It is worth noting that procuring and deploying digital infrastructure and services are just the first two steps of a successful digitisation journey. Companies’ exciting journey in the Data Age, or the Fourth Industrial Revolution, has just begun – and what sets businesses apart from their competitors is how they reap the full benefits of data, one of the key outputs of their cloud and technology investments.
So Many Systems, So Little Time (and Eyes!)
Digitising and automating various aspects of operations can be an overwhelming exercise for IT leaders, managers and their teams. Deploying software and solutions is just part of the battle, as a lot can go wrong with increasingly complex, hybrid IT systems, and companies run the risk of ending up with disparate or siloed systems.
According to Splunk’s State of Observability Report, 68% of organisations deploy cloud-native apps in a combined environment of public cloud and private data centres or edge locations. On average, respondents say their cloud-native apps extend across 2.25 public cloud environments. This does not include the other networks and IT solutions deployed on-premises or in the public cloud, which add to the long list of properties to be monitored.
The struggle is real for IT teams, especially those in this region. APAC organisations were the most likely to report needing improvement in their visibility of on-premises infrastructure compared to their peers in North America and Europe, according to the same study.
APAC organisations were also much more likely to report slow or non-delivery of alerts, which is extremely risky in a complex IT setup. Being able to understand and remediate problems in this type of setup is essential so that operations are not disrupted, and customer experience is not compromised.
Where observability can help
This is where observability tools come in handy. They accelerate problem detection and resolution, increase visibility across hybrid environments, tighten alignment between development, security and operations teams, and speed up app development and deployment.
In fact, 70% of mature observability tool users say they have “excellent” visibility into their organisation’s security posture, compared to only 31% of companies that have just adopted these tools. This enables them to assess and address potential risks before they turn into breaches, cyber threats, or network and service outages – crises that companies can no longer afford, as they risk losing credibility, customers and revenue.
Having greater access to, and visibility into, performance and productivity data produced across various sources also gives organisations a competitive edge. These insights can be turned into actions and inspiration for new products and services that meet today’s customer demands.
The State of Observability Study reported that mature observability tool users innovate at a faster rate: 45% report launching eight or more new products or revenue streams in the last year, compared with 15% of those who have just started.
From observing to observability
As organisations take on an increasingly competitive marketplace, observability solutions are becoming a differentiating factor in their arsenal of tools and strategies. As it takes time to put a strong observability practice in place, organisations need to start looking at their IT roadmap and see how observability fits in.
IT teams also need to start breaking down silos and gaining visibility into all their data across various sources. They need to prioritise data collection and correlation, and ensure that the business is able to work with every metric, log and trace the organisation produces.
This is crucial for the next step – analysing and drawing conclusions that can be turned into actions. Beyond managing data, being able to understand, and subsequently resolve, issues in real time is essential to customer satisfaction and the bottom line.
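The idea of turning collected metrics into real-time action can be sketched in a few lines. The snippet below is purely illustrative and not any specific product’s API: it flags a latency sample that deviates sharply from a recent baseline – the kind of signal an observability platform would surface as an alert before customers notice a slowdown.

```python
from statistics import mean, stdev

def detect_anomaly(latencies_ms, threshold_sigma=3.0):
    """Flag the latest latency sample if it deviates sharply from the baseline.

    A real observability pipeline would stream metrics, logs and traces into a
    platform; this sketch shows only the core idea of turning collected data
    into an actionable alert.
    """
    baseline = latencies_ms[:-1]          # all samples except the newest
    mu, sigma = mean(baseline), stdev(baseline)
    latest = latencies_ms[-1]
    if sigma and abs(latest - mu) > threshold_sigma * sigma:
        return f"ALERT: latency {latest}ms deviates from baseline ({mu:.0f}ms)"
    return "OK"

# Steady traffic, then a sudden spike the team should hear about quickly.
samples = [120, 115, 130, 125, 118, 122, 950]
print(detect_anomaly(samples))
```

In practice the thresholding, correlation with logs and traces, and alert routing are handled by the observability platform itself; the point is that collected data only pays off once it drives this kind of automated detection.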
Lastly, organisations need to consider which observability tool is right for their company. They should evaluate solutions on which offer the best visibility across workstreams and are interoperable, so that time can be spent innovating and addressing customer needs instead of managing tools.
They need to consider moving away from reliance on tools provided by cloud service providers, which can be limiting and may not be cloud-agnostic. This may cripple the IT team in the long run.
Leaders and decision makers also run the risk of favouring simplicity over functionality in the name of efficiency.
Observability tools are meant to help IT teams navigate the complexities across networks and environments (on-premises and public cloud), cyber security and DevOps solutions – simplifying data without losing the crucial insights needed to take effective action. They should also help teams anticipate data influxes, make full use of data, and detect and address issues before they occur. These would certainly help companies ace the digitisation game.
Dhiraj Goklani is Area Vice President of ITOA & Observability Sales, APAC, Splunk