When the pandemic struck, companies were forced to move to the cloud, with many having to transition quickly.
Organisations now rely on remote operations, teleworking and remote-access infrastructure. As a result, DDoS actors found new opportunities and began targeting the back end of organisations' technology infrastructure.
Globally, cloud adoption has been spurred by the undertaking of transformation projects to overcome operational challenges. However, the last 24 months have seen a surge in ransomware and distributed denial-of-service (DDoS) attacks, which are expected to continue through the rest of this decade.
Radware research showed that in the first quarter of 2021, the volume of ransom DDoS attacks increased by 31%, with attacks mitigated in the cloud representing over 92% of total volume and almost 84% of packets. Similarly, cyber researchers from Check Point reported that global ransomware incidents doubled in the first six months of this year compared with 2020.
One of the key challenges facing organisations in a hybrid work environment is the intensity of cyberattacks rather than the exposure to new vulnerabilities.
In Singapore, the Cyber Security Agency reported that ransomware attacks more than doubled in 2020 from the previous year. As an example, last August, staff from an F&B business in Singapore discovered that their company servers and devices – including those in the cloud – were infected with NetWalker, a prevalent ransomware strain. Because the company had also stored its backups on the affected servers, none of its data could be recovered.
With breaches such as these on the rise, we can expect investment in building resilience to destructive attacks. Many organisations will continue to look to the cloud to help achieve that resilience. But what is it about the cloud and cloud-native architectures that lends itself to resilience against such attacks?
Three attributes come to mind: distributed, immutable, and ephemeral.
Distributed – Applications and Services: If your applications are leveraging a distributed delivery model — for example, leveraging cloud-based services such as content delivery networks (CDNs) — then you have to worry less about DDoS attacks, as these attacks work best by concentrating their firepower in one direction.
Immutable – Data Sets: And if your applications leverage solutions that do not modify records but instead are “append-on-write” (in other words, your data set is immutable), then you need to worry less about attacks on the integrity of that data, as such attacks are easier to detect and surface.
Ephemeral – Workloads: And finally, if your applications are ephemeral in nature, then you may worry less about attackers establishing persistence and moving laterally. And the value of confidential information, such as tokens associated with that application instance, is reduced as those assets simply get decommissioned and new ones get instantiated within a relatively short time frame.
Therefore, by leveraging modern cloud-native architectures that are distributed, immutable, and ephemeral, you help address confidentiality, integrity, and availability, the foundational triad of cybersecurity.
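To make the "immutable" attribute concrete, here is a minimal sketch (class and field names are hypothetical, not from any particular product) of an append-on-write record store. Updates are appended as new versions rather than written in place, and each entry is hash-chained to the previous one, so any attempt to silently modify history is detectable.

```python
import hashlib
import json

class AppendOnlyLog:
    """Toy append-on-write store: records are never modified in place,
    and each entry is hash-chained to its predecessor so tampering
    with any earlier record breaks the chain."""

    def __init__(self):
        self.entries = []  # list of (record_json, chain_hash)

    def append(self, record: dict) -> None:
        prev_hash = self.entries[-1][1] if self.entries else "genesis"
        payload = json.dumps(record, sort_keys=True)
        chain_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append((payload, chain_hash))

    def verify(self) -> bool:
        """Recompute the chain; returns False if any record was altered."""
        prev_hash = "genesis"
        for payload, stored in self.entries:
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if expected != stored:
                return False
            prev_hash = stored
        return True

log = AppendOnlyLog()
log.append({"account": "A", "balance": 100})
log.append({"account": "A", "balance": 80})   # an update is a new version
```

An "update" here is simply a new version of the record; the old version survives, which is exactly what makes integrity attacks easy to surface.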
Pets vs. Cattle
This brings us to a concept that has been talked about for some time in the context of the cloud — pets versus cattle. Pets have cute names and they can be recognized individually. If a pet falls ill, the owner takes it to the vet. Owners give them a lifetime of caring and make sure the pet lives a healthy life for as long as possible.
Traditional applications are like pets. Each instance is unique. If the application gets infected, it is taken to the cyber vet. “Patch in place” is common with traditional applications, and it is what makes these instances unique. The job of IT is to keep the applications up and running for as long as possible.
Cattle, on the other hand, don’t have names. They have an obscure number, you generally cannot distinguish the cattle in the herd, and you don’t build relationships with them. If cattle fall ill or get infected, you cull the herd. Modern cloud applications are like cattle. You create many running instances of the services, and each instance is indistinguishable from the other.
They are all manifested from a golden repository. You never patch in place — that is, never make the instances bespoke. Your job is to make the instances ephemeral, killing instances quickly and creating new ones. In doing so, you build resilient systems, which in many ways is the opposite of keeping applications up for as long as possible; those long-lived systems tend to be more fragile.
“Chaos Engineering” as a service
The cloud offers many tools to help build systems that follow this paradigm. For example, Amazon recently announced “Chaos Engineering” as a service, which allows organisations to introduce elements of chaos into their production workloads, such as taking down running instances, to ensure that overall performance isn’t impacted and that the workloads become resilient in the face of these types of operational setbacks.
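The idea can be sketched in a few lines, independent of any particular vendor's service (the class and method names below are hypothetical, not a real API): maintain a pool of identical "cattle" instances, repeatedly kill a random one, and verify that capacity never degrades because every killed instance is immediately replaced from the golden template rather than repaired.

```python
import random

class InstancePool:
    """Minimal chaos-style fault-injection sketch: a pool of identical
    instances where any killed member is immediately replaced by a
    fresh instance spawned from the same template."""

    def __init__(self, size: int):
        self._next_id = 0
        self.instances = [self._spawn() for _ in range(size)]

    def _spawn(self) -> str:
        """Stand-in for launching an instance from a golden image."""
        self._next_id += 1
        return f"instance-{self._next_id}"

    def kill_random(self) -> str:
        """Inject a failure: terminate one instance at random."""
        victim = random.choice(self.instances)
        self.instances.remove(victim)
        self.instances.append(self._spawn())  # replace, don't repair
        return victim

    def healthy(self, minimum: int) -> bool:
        return len(self.instances) >= minimum

pool = InstancePool(size=5)
for _ in range(20):                 # twenty rounds of injected failure
    pool.kill_random()
    assert pool.healthy(minimum=5)  # capacity never degrades
```

The point of running this continuously in production, rather than once in a test environment, is that it punishes any hidden "pet" the moment one appears: an instance that cannot be killed and replaced without impact fails the health check.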
Getting to this point is a journey, and it may be accomplished in multiple steps. For example, organisations may move their “pets” — their traditional applications and workloads — from an on-premises world to the cloud world, without significantly altering the architecture of the applications.
The common term for this is “lift and shift.” Once the applications are in the cloud and organisations have started building familiarity with cloud-native tools, they can work on re-architecting their traditional applications (pets) into modern architectures that are distributed, immutable, and ephemeral (cattle).
In other words, they can move from pets-in-the-cloud to cattle-in-the-cloud. However, organisations need to make sure that once they get to this point, they don’t regress and move back to pet creation. For example, they don’t patch in place or keep instances up and running for longer than necessary.
Maintaining real-time or near real-time visibility at each step of the journey is critical to ensuring early detection of pets or pet-like behaviour. As new workloads are moved to the cloud in a lift-and-shift model, or as workloads are re-architected into modern microservice type architectures, understanding the internal and external dependencies — that is, the interactions between users and the applications, and between the different application components themselves — is important to enforce the right policies and to detect and disincentivise pet creation. And while there are many ways to do this, looking to the network activity footprint of these applications provides a ground-zero approach to mapping this out.
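As a simple illustration of the network-footprint approach, observed flows can be collapsed into a service dependency map. The service names and flow records below are invented for illustration; in practice the pairs would come from flow logs or network taps.

```python
from collections import defaultdict

# Hypothetical flow records: (source, destination) pairs observed on the network.
flows = [
    ("web-frontend", "auth-service"),
    ("web-frontend", "catalog-service"),
    ("catalog-service", "inventory-db"),
    ("auth-service", "user-db"),
    ("web-frontend", "auth-service"),   # repeated flows collapse to one edge
]

def dependency_map(flows):
    """Collapse raw network flows into a per-service dependency map."""
    deps = defaultdict(set)
    for src, dst in flows:
        deps[src].add(dst)
    return {svc: sorted(targets) for svc, targets in deps.items()}

deps = dependency_map(flows)
```

Once this map exists, a new or unexpected edge, say, a workload reaching a destination it has never talked to before, is a signal worth investigating, and a workload whose dependencies never change over a long lifetime may be a pet in disguise.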
Ultimately, the move to the cloud can drastically improve the efficiency of IT systems. When cloud resilience is done right, businesses can not only do new things faster, but also sustain their operations no matter what disruption comes their way.
Simon Lee is Vice President, Asia Pacific and Japan at Gigamon