Cloud adoption is arguably one of the most common strategies organisations have taken to support remote work and undertake digital transformation.
However, many have still left themselves open to losses and breaches by failing to secure their data or ensure the resiliency of their databases. While cloud vendors can help, the responsibility to safeguard your data ultimately remains yours.
Dave Page, Vice President and Chief Architect, Database Infrastructure at EnterpriseDB (EDB), shares with iTNews Asia the steps IT departments can take to better protect their companies’ data assets when shifting to the cloud.
iTNews Asia: There are security risks associated with storing data on-premise versus on the cloud. How can enterprises in Asia ensure that they can take advantage of the benefits of the cloud without compromising data security?
Data in the cloud can arguably be more secure than data stored in your own data centre. The major cloud providers take physical security very seriously, often undergoing comprehensive audits such as System and Organisation Controls (SOC) 1, 2 and 3, which is not always the case with private data centres.
However, whilst you can be assured of the physical security of your data in the cloud (having, of course, reviewed your chosen provider’s audit reports), you still need to ensure the non-physical security of the systems and services in use.
Whilst there are parallels to running your own network and data centre, cloud providers offer a myriad of options for managing the security of the services you choose to run. It is essential that organisations ensure their staff understand the proper use and configuration of these services, rather than relying on prior knowledge of traditional infrastructure.
For example, a traditional deployment might employ a private network, DMZ (or demilitarised zone), and public network for segregating services, with firewall rules to ensure that only permitted traffic flows between those networks. In a cloud environment, meanwhile, you might make use of multiple subnets within a virtual private cloud (VPC).
Use of services such as elastic IP addresses, load balancers, security groups for hosts, security groups for VPCs, and more may affect how traffic is allowed or prevented from reaching servers hosting data. It is critical to understand how these services work and how they should be configured to implement the principle of least privilege: allowing users access only to the data and systems they require, and nothing more.
iTNews Asia: What are some of the security challenges that organisations typically face when migrating data from on-premise to cloud databases? What are some blind spots that IT teams in Asia might encounter during such a migration?
The biggest challenge in my experience is ensuring that staff are properly trained. It's very easy to start small with cloud services and see usage grow organically over time to significant levels. Often, users self-train: they start with small projects or development work because the cloud makes it easy, but the nature of that work doesn't necessarily require them to pay much attention to security. Then, when larger projects begin that involve hosting critical data or infrastructure in the cloud, security ends up becoming a secondary concern, or is simply not properly considered or configured, due to the lack of proper training.
That's not to say that every self-taught user will be lax on security, of course, but it is essential that all users receive appropriate training so they know how to configure the services they use securely, and keep security considerations foremost when designing their deployments.
iTNews Asia: How difficult is it to connect old data with the cloud? What have organisations done to ensure data security in the cloud, and what are the potential concerns and key issues they should consider?
Migrating old data to the cloud can range from easy to difficult depending on how different the source and target databases are. If they are the same, for example from one PostgreSQL database to another, then various approaches may be employed, such as simply dumping the old database and restoring it into the new one.
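As a sketch, a same-version dump-and-restore migration might look like the following (the host names, database name and user here are placeholders, not taken from the interview, and a live source and target server are assumed):

```shell
# Dump the on-premise database in custom format (compressed, and
# restorable selectively or in parallel)
pg_dump -Fc -h onprem-db.internal -U app_user -f appdb.dump appdb

# Restore into the new cloud-hosted instance; --no-owner is useful
# when the cloud service does not allow the same role names
pg_restore -h cloud-db.example.com -U app_user -d appdb --no-owner appdb.dump
```

This approach involves downtime proportional to the database size, which is why replication-based approaches are often preferred for larger systems.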
Another approach is to set up replication from the old database to the new using Postgres' built-in logical replication, with little to no downtime. Connectivity can be secured through the use of security groups on the cloud database (to ensure only appropriate hosts can connect), but it is far better to use a virtual private network (VPN), creating one or more tunnels to extend your private network into the virtual private cloud.
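A minimal sketch of that built-in logical replication setup follows (names and connection details are placeholders; note that logical replication streams data, not schema, so the table definitions must already exist on the target, and the source must have `wal_level = logical`):

```sql
-- On the old (source) server: publish the tables to migrate
CREATE PUBLICATION cloud_migration FOR ALL TABLES;

-- On the new (cloud) server: copies existing rows, then
-- streams ongoing changes until you cut over
CREATE SUBSCRIPTION cloud_migration
    CONNECTION 'host=onprem-db.internal dbname=appdb user=repl_user'
    PUBLICATION cloud_migration;
```

Once the subscription has caught up, applications can be repointed at the cloud database with only a brief cutover window.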
Connecting old data to the cloud is a slightly different topic, which implies that the old data will stay on-premise, but needs to be accessed together with the data in the cloud. In the case of Postgres databases, this can be achieved by using a VPN to extend the private network into the virtual private cloud, and then architecting applications such that they utilise both databases.
However, it may be more appropriate to utilise Postgres' Foreign Data Wrappers (FDW). An FDW is essentially a data provider for Postgres that exposes an external data source as a table within a database. This allows you to run a single SQL query on one Postgres server that spans both locally stored data and data stored on a remote database server.
FDWs for PostgreSQL are available for many data sources such as MySQL, Oracle, Redis, MongoDB, CSV files, and naturally, PostgreSQL itself. This capability allows you to truly federate your data, whether it's stored in a different location such as the cloud, or in a different format or database management system.
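As an illustration of the Postgres-to-Postgres case, a hypothetical setup using the `postgres_fdw` extension might look like this (server, table and credential names are placeholders):

```sql
-- On the cloud database: make tables on the on-premise server visible
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER onprem_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'onprem-db.internal', dbname 'appdb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER onprem_server
    OPTIONS (user 'app_user', password 'secret');

-- Import the remote table definition
IMPORT FOREIGN SCHEMA public LIMIT TO (orders)
    FROM SERVER onprem_server INTO public;

-- A single query now spans local and remote data
SELECT c.name, o.total
FROM customers c                            -- local, cloud-hosted table
JOIN orders o ON o.customer_id = c.id;      -- foreign table, on-premise
```

To the application, the foreign table behaves like any other table, which is what makes this kind of federation transparent.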
iTNews Asia: How essential is it to use open source databases, and what are the benefits?
There are many good arguments for using an open source database such as PostgreSQL. Firstly, the code is almost certainly subjected to far greater scrutiny than closed source products because anyone can inspect it. This increases (but does not guarantee) the chances that the code is well written and secure.
Secondly, there is less of a "lock in" - you're not dependent on any one company to support your databases.
The PostgreSQL open source project is truly independent, with numerous companies around the world working on features and providing support and services; all of them have access to the source code and can support it and help users as well as any other company can. With proprietary databases, typically only the vendor has access to the source code, which means that only they can fix bugs or truly understand what's happening "under the hood".
Finally, I would argue that using an open source database offers technical advantages that closed source databases do not. For example, if you need an FDW to access data in an in-house developed system, it's relatively straightforward for a developer to write one.
If you need some specialised functionality, such as a custom datatype, you can extend Postgres as needed. With an open source database there is no limit to what you can do, whether by building or using extensions such as FDWs, or even rewriting parts of the database server to meet your specific needs.
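As a small illustration of that extensibility, custom types can be added with plain SQL, without touching the server's source code (the names and the check constraint here are invented for the example):

```sql
-- A domain: an existing type constrained to a business rule
CREATE DOMAIN order_ref AS text
    CHECK (VALUE ~ '^ORD-[0-9]{6}$');

-- A composite type, usable in table definitions and functions
CREATE TYPE money_amount AS (
    amount   numeric(12,2),
    currency char(3)
);

CREATE TABLE invoices (
    ref    order_ref,
    total  money_amount
);
```

More elaborate extensions, such as entirely new base types with their own operators and index support, can be built in C against the server's extension APIs.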