“Cloud computing” is something of a misnomer. The term comes from people picturing a “far away” connection, usually over a public network like the internet, as a “cloud” linking two different elements (see an example in this architectural diagram).
In reality, modern cloud computing bears a closer resemblance to building blocks. Vendors like Amazon Web Services (AWS) or Google Cloud Platform (GCP) offer not bricks, screws, or wooden planks, but servers, storage, networking, and so on. With cloud computing having been around for some 15 years, it also covers the essential utilities of the house, like electricity or water. Imagine a company (small or large) that needs electricity buying and operating a whole power plant instead of drawing from the grid. That is what managing your own data centre amounts to, compared with relying on the public cloud.
Apart from the naming, there are other misunderstandings about the cloud. With this article, we aim to make things clearer. First, we explain how cloud computing works and address the fears and concerns around it; then, we explain why it is the safest and most effective option when implementing technology. Since it is what we know best, we focus on the scenario of hospitals adopting AI-based medical devices.
Cloud computing basics
“Cloud” or “cloud computing” describes a range of services offered over an internet connection. The aim is to offload part of the work of running a technological product to a provider with more space, expertise, and computing power. This model is commonly referred to as infrastructure as a service, or IaaS for short.
A simple everyday example is iCloud, essentially an extension of your iPhone’s physical storage that keeps data on Apple’s servers. From retail to real estate, most industries use cloud computing or are moving towards it, including more risk-sensitive organisations, such as banks or security agencies like the NSA.
Concerns aside
Cloud criticism usually revolves around data safety. Hospitals want to be sure that they have control over the health data in their facility and that no vendor or third party can use it in any way. This is understandable; healthcare providers manage sensitive data and have the responsibility to protect it for their patients. But, as we will explain, this concern is unfounded. We’ll approach the topic from two angles: cybersecurity and vendor lock-in.
Cybersecurity
The public cloud, owned by large corporations, can seem like the lesser option compared to a legacy, on-premise data centre you can see and touch, which gives a feeling of oversight over the data. Often, this fear stems from a misunderstanding. Media coverage of data leaks at Amazon or Google may be accurate, but most of those incidents are not actually connected to their cloud computing services.
In fact, the cloud is more secure. For a small or midsized company like ours, it means access to, for instance, Google’s expertise and resources, with top-of-the-line security tools and practices. And while we don’t hear about every incident in private, on-premise data centres, anything that happens in the public cloud becomes public knowledge. This skews the perception of risk. Transparency keeps everyone accountable – and that’s a good thing!
Nonetheless, caution is advisable. To return to our building analogy, the best materials in the world can’t hold a poorly designed house together for long. Cloud vendors need to preserve customer trust, and companies using their services must do so as part of a robust security plan.
Vendor ‘lock-in’
Another often-used argument against the cloud is the “lock-in” risk: the possibility of becoming “trapped” inside the ecosystem of a cloud vendor who can raise prices or offer a worse service on a whim.
Sure, unfair practices by large companies are a legitimate concern, and lawsuits have been filed, such as the recent Epic Games versus Apple case over the App Store. However, we argue that the risk of lock-in is not material when using the public cloud. Being “locked in” assumes that there is no alternative, no escape from your current vendor. This is false. As with any building block, you can always move to another vendor by rewriting pieces of your application. Yes, rewrites are expensive, but so is switching operating systems from Windows to macOS or changing your PACS from Siemens to Sectra.
Infrastructures like the “hybrid cloud” (mixing a private data centre with the public cloud) or “multi-cloud” (using multiple cloud providers) are not the answer. They mitigate the lock-in risk only superficially, by restricting themselves to generic cloud services, and consequently never realise the vendor-specific advantages each platform offers.
In the end, using the cloud is no different to a traditional technical decision, such as buying into HP versus Dell hardware. In fact, the cost of reversing such decisions in the cloud is usually lower.
Arguments forward
We will explain the practical reasons for using cloud services based on our experience at Aidence, deploying AI solutions into hospitals’ IT infrastructures.
Speed and efficiency
There are several possible ways to offer our services to our customers and partners. We could host a physical server in a hospital, use a private data centre, or rent infrastructure from a public cloud provider.
Bringing a machine into the hospital gives the impression of control. However, it has a severe disadvantage: it hinders scalability. Our service team would have to physically visit the hospital to repair even the most minor broken component, and any downtime, such as during upgrades, would take longer, burdening the site’s IT staff.
The second option, a private data centre, means hiring specialised staff to manage our machines. It requires spending our resources on building something that doesn’t solve any clinical problem for physicians.
Using the cloud, we employ ready-made building blocks offered by vendors, freeing time to focus on our mission of improving the oncology pathway. Thus, we can deliver easy maintenance and frequent upgrades that benefit physicians and their patients right away. For example, we ensured a “smooth and effortless migration” to our new product architecture last year.
In the end, time is the most valuable asset, the only resource that none of us can buy.
Safety and proximity
Werner Vogels, the CTO of Amazon, frames the challenges that the cloud solves in terms of three laws, all of which translate into safety improvements:
Murphy’s law
“If something can go wrong, it will.” Murphy’s law holds especially true when repeating an operation a billion times, like transmitting data over the internet.
Using cloud services, we can guarantee that random events, such as a cable breaking or a blackout, won’t affect the hospital using our software. We can do so because the cloud allows us to have multiple copies of our software running independently; technically, providing “multiple availability zones” is a costly and complex endeavour when done on-premises, but a given in the cloud.
And, since cloud vendors’ reporting is transparent, you don’t have to take our word for it – we can demonstrate performance publicly.
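To make the “multiple independent copies” idea above concrete, here is a minimal Python sketch: check each zone’s health endpoint and route to the first one that responds. The endpoint names are hypothetical, and in practice the cloud provider’s load balancer does this kind of failover for us.

```python
from urllib.request import urlopen

# Hypothetical endpoints: one copy of the service per availability zone.
REPLICAS = [
    "https://zone-a.example-ai-service.internal/health",
    "https://zone-b.example-ai-service.internal/health",
    "https://zone-c.example-ai-service.internal/health",
]

def first_healthy_replica(endpoints, timeout_s=2):
    """Return the first replica that answers its health check, or None."""
    for url in endpoints:
        try:
            with urlopen(url, timeout=timeout_s) as response:
                if response.status == 200:
                    return url
        except OSError:
            # This zone is unreachable (cable cut, blackout, ...): try the next.
            continue
    return None

if __name__ == "__main__":
    healthy = first_healthy_replica(REPLICAS)
    print(f"Routing traffic to: {healthy}" if healthy else "All zones down")
```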
The laws of physics
Consider relativity and thermodynamics: we cannot exceed the speed of light, and we need a way to power and cool down servers efficiently without warming up the planet.
The cloud allows us to run our services close to our customers, relocating our workloads to nearby regions and keeping the connection between the PACS and our engine fast. It also allows us to choose the greenest options: we can run on data centres powered mostly or entirely by green energy, using, for example, this overview from Google.
Currently, 60% of the energy used to power Aidence’s workload is green. We are confident that we can go above 90% over the next few years.
The law of the land
The law of the land means respecting the rules and regulations of each country in which we operate.
Cloud vendors have standardised and transparent agreements for data processing and storage. For one, Google cannot access our data. In addition, we can encrypt the data to ensure that it remains secure even in the case of a malicious attack.
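As an illustration of what encrypting the data means in practice, here is a minimal sketch using the open-source Python cryptography library. It is a simplification, not our actual implementation; in particular, real key management relies on a dedicated secrets manager rather than code.

```python
from cryptography.fernet import Fernet

# In production, the key lives in a secrets manager (e.g. HashiCorp Vault),
# never next to the data; we generate one here only to keep the example runnable.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"example medical image bytes"  # stand-in for real study data

token = cipher.encrypt(plaintext)    # this is what would be stored in the cloud
restored = cipher.decrypt(token)     # only possible for a holder of the key

assert restored == plaintext
```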
As mentioned, we use data centres close to our customers. In compliance with regulatory requirements, the data never leaves the area where processing or storage is allowed (e.g. EU data never leaves the EU). For more on regulatory frameworks around cybersecurity, I recommend this article by my colleague, Leon.
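Data residency is typically enforced at the moment resources are created. As a sketch (with a placeholder bucket name), pinning storage to an EU region with the Google Cloud Storage Python client could look like this:

```python
from google.cloud import storage

client = storage.Client()

# Placeholder bucket name; the point is the explicit location constraint:
# europe-west4 (Netherlands), so objects stored here never leave the EU.
bucket = client.create_bucket("example-eu-medical-data", location="europe-west4")

print(bucket.location)  # the region the bucket is pinned to
```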
Nonetheless, security is a concern that we take very seriously. Thus, we use additional services on top of or alongside Google Cloud (e.g. Cloudflare or HashiCorp Vault). We employ those services as a force multiplier, leveraging the expertise of other companies to deliver the highest level of cybersecurity to our customers as part of our offering.
Cost-effectiveness
The key difference between cloud computing and other solutions lies in the cost model. Cloud capacity is “rented”, on a monthly or pay-as-you-go basis. Hosting data on-premise, on the other hand, requires buying physical servers up front, which leaves far less flexibility.
Some think that the cloud is too expensive, or only suitable for small businesses. Yet the cloud makes hidden costs visible. Many companies don’t have their internal IT costs under control: they don’t factor in the salaries of the employees managing the internal data centre, or the time spent hiring more people instead of buying services. That changes when using the cloud.
Cloud computing is particularly cost-effective because of its elasticity. We can easily request more resources when a peak of work is incoming (for example, on a Monday morning) and return those resources when they are no longer needed. It is a cost-saver and (again) a greener way to manage energy consumption. No other solution makes this possible.
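The elasticity argument boils down to matching capacity to demand. Here is a toy Python sketch of such a scaling rule; the numbers are purely illustrative, and in practice the cloud provider’s autoscaling applies this kind of rule for us.

```python
import math

def desired_replicas(pending_studies: int, studies_per_replica: int = 20,
                     min_replicas: int = 1, max_replicas: int = 50) -> int:
    """Scale the number of analysis workers with the incoming workload.

    The thresholds are illustrative, not production values.
    """
    needed = math.ceil(pending_studies / studies_per_replica)
    return max(min_replicas, min(max_replicas, needed))

# A Monday-morning peak versus a quiet night shift (hypothetical numbers):
print(desired_replicas(pending_studies=400))  # -> 20 workers during the peak
print(desired_replicas(pending_studies=5))    # -> 1 worker when it is quiet
```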
We pass on this benefit to our customers by providing a cost-effective AI solution that is available when needed, irrespective of geography, demand, or day of the week. The costs of deployments, upgrades, fixes, and our services are also reduced when using the cloud.
Our own example
At Aidence, we provide AI-based solutions for radiology. In doing so, we process medical images from hospitals and apply state-of-the-art technical and organisational measures to protect this data. This means, among other things, pseudonymising personal information, encrypting data, and restricting access to it.
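To illustrate the pseudonymisation step, here is a much-simplified sketch using the open-source pydicom library and a keyed hash. Real de-identification covers many more DICOM attributes, and the secret would come from a secrets manager rather than the source code.

```python
import hashlib
import hmac

import pydicom

SECRET = b"illustrative-secret-key"  # for the example only

def pseudonymise(path_in: str, path_out: str) -> None:
    """Replace direct identifiers in a DICOM file with a keyed, repeatable token."""
    ds = pydicom.dcmread(path_in)
    token = hmac.new(SECRET, str(ds.PatientID).encode(), hashlib.sha256).hexdigest()[:16]
    ds.PatientID = token        # the same patient always maps to the same token
    ds.PatientName = "ANONYMOUS"
    ds.PatientBirthDate = ""    # drop identifying attributes we do not need
    ds.save_as(path_out)

# pseudonymise("incoming/study.dcm", "processed/study.dcm")
```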
The cloud services we use include technical building blocks like storage for the medical data and computing power to run our deep learning models. We don’t buy and install physical computers, networking and storage ourselves. Instead, we rent services for virtual machines, networks and storage from a cloud services provider. These provide practically unlimited, on-demand capacity.
This allows us to activate new services for hospitals as soon as they need them, rather than having to wait for physical devices to be shipped, received, built, tested, and installed. A process that used to take four to eight months, depending on hardware requirements, can now be completed in one morning.
The illustration below shows the process of our AI solution Veye Lung Nodules delivering its results to the radiologist’s IT infrastructure:
- The patient’s study is acquired and sent to the PACS.
- Veye Engine automatically retrieves the current study and, if available, a prior study from the PACS.
- The AI algorithms within Veye Lung Nodules analyse the studies.
- Veye Lung Nodules generates the results.
- The results are sent to the PACS as part of the original study.
- The results are ready for the radiologist’s assessment.
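For readers who prefer code to diagrams, the flow above can also be summarised in a deliberately simplified Python sketch. All function names and types here are hypothetical stand-ins rather than our actual interfaces; the real retrieval and delivery steps use standard DICOM services between the PACS and Veye Engine.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Study:
    study_uid: str
    images: List[bytes]

def retrieve_from_pacs(study_uid: str) -> Study:
    """Step 2: fetch a study from the PACS (stubbed for illustration)."""
    return Study(study_uid=study_uid, images=[b"..."])

def analyse(current: Study, prior: Optional[Study]) -> bytes:
    """Steps 3-4: run the AI algorithms and generate a results object (stub)."""
    return b"structured results"

def send_to_pacs(study_uid: str, results: bytes) -> None:
    """Step 5: attach the results to the original study in the PACS (stub)."""
    print(f"Results for {study_uid} sent back to the PACS")

def handle_new_study(study_uid: str, prior_uid: Optional[str] = None) -> None:
    current = retrieve_from_pacs(study_uid)
    prior = retrieve_from_pacs(prior_uid) if prior_uid else None
    send_to_pacs(study_uid, analyse(current, prior))

handle_new_study("example-study-uid")
```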
A choice like any other
Cloud computing is a safe option that allows us to innovate faster and more securely, and to keep our costs in check. In the end, it isn’t different from other software choices any company needs to make. Our approach is to reap all the benefits once we make our decision, and we are doing just that with cloud adoption.
For hospitals looking to adopt AI medical solutions, we firmly believe the cloud is a secure, future-proof bet, allowing easy maintenance and prompt upgrades.
Are you looking to start using AI in your radiology team, or would you like to know more about cloud computing and integration? We’re ready to talk.