At the end of the day, for those of us on DevSecOps teams, it all comes down to managing risk, even in the highly regulated healthcare industry. Compliance around medical records and privacy is one driver, but development and security professionals must also take aggressive steps to prioritize risk management because healthcare continues to be a frequent target of bad actors. Meanwhile, the move to the cloud is accelerating: Gartner forecasts that worldwide end-user spending on public cloud services will grow 18.4% in 2021 to a total of $304.9 billion, up from $275.5 billion in 2020. “The pandemic validated the cloud’s value proposition,” Gartner Research Vice President Sid Nag said.
The monetary loss from cybercrime extends well beyond healthcare, with an estimated cost of $945 billion in 2020, according to McAfee. But those working in the healthcare industry should realize that a 2020 breach analysis report by IBM and Ponemon Institute found healthcare breaches to be the costliest of any industry. In other words, not managing risk is expensive.
Gartner also reported COVID-19 forced organizations to preserve cash and optimize IT costs, support and secure a remote workforce, and ensure resiliency. And the cloud became a convenient means to address all three. If this scenario sounds familiar to your organization, the following are four insights to consider that will help to protect data in the cloud.
1. Healthcare organizations must mature their security posture
There are few industries outside of the healthcare sector where the data held is so critically personal that, if lost, the implications could last a lifetime. From medical records to insurance information to financial accounts and Social Security numbers (SSNs), healthcare organizations keep a lot of personally identifiable information (PII). If that precious data is accessed and SSNs are stolen, for example, the breach could affect your patients for the rest of their lives, because that PII never changes. Beyond the regulatory concerns, this could also translate into long-term reputational damage for a healthcare organization.
To mature an organization’s data collection and security posture, there are two things to keep in mind: Only store and manage data relevant to the business, and ensure the relevant data is stored securely.
First, let’s look at data storage. Before your organization even collects data, stores it or manages it, it’s vitally important to understand what the data is. By adopting a concept of minimum necessary data collection, organizations can filter out unnecessary information capture by asking a simple question: Do we even need this data? Then, conduct an analysis to understand whether the data is sensitive and if storing and managing the data is actually needed to maintain business functionality.
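The "minimum necessary" filter described above can be sketched in a few lines. This is an illustrative example only; the field names and the allowlist are hypothetical, not drawn from any particular system.

```python
# Sketch of a "minimum necessary" data-collection filter.
# ALLOWED_FIELDS represents the fields the business has analyzed
# and justified storing; everything else is dropped at intake.
ALLOWED_FIELDS = {"patient_id", "visit_date", "diagnosis_code"}

def minimize(record: dict) -> dict:
    """Keep only fields explicitly approved for storage."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

# Hypothetical intake record containing more than the business needs.
intake = {
    "patient_id": "P-1042",
    "visit_date": "2021-03-04",
    "diagnosis_code": "J45.909",
    "ssn": "000-00-0000",         # sensitive and unnecessary: filtered out
    "mother_maiden_name": "Doe",  # unnecessary: filtered out
}

stored = minimize(intake)
```

The point of the design is that the question "do we even need this data?" is answered once, in the allowlist, rather than ad hoc at every collection point.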
Now, ensuring security. A significant number of recent breaches have happened because of simple misconfigurations of data storage. Organizations must understand what data is collected and how it needs to be stored, and automated security scanning processes must be in place to regularly review the attack surface. This automation safeguards against inadvertent configuration changes that leave data exposed. Security scans could include automated tools that confirm external attack surfaces are not easily reachable by script kiddies running similar tools on the internet. Make it more difficult for them, then perform manual penetration tests to identify areas that are still exploitable.
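A minimal sketch of such an automated external attack-surface check follows. It assumes you maintain an inventory of your public hosts and the ports you expect to be open; the hostnames, inventory, and "risky port" list here are placeholders, and a real program would run on a schedule and alert on findings.

```python
# Sketch: flag ports that are reachable but not in the approved inventory.
import socket

# Hypothetical inventory: host -> ports expected to be open.
INVENTORY = {
    "app.example.com": {443},
}

# Ports commonly left exposed by misconfiguration (illustrative list).
RISKY_PORTS = [22, 3389, 5432, 9200]

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(inventory: dict) -> list:
    """Return (host, port) pairs that are open but not expected to be."""
    findings = []
    for host, expected in inventory.items():
        for port in RISKY_PORTS:
            if port not in expected and is_open(host, port):
                findings.append((host, port))
    return findings
```

Each finding represents an inadvertent configuration change worth investigating before an attacker's scanner discovers it first.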
2. Threat modeling is critical to identifying threats
To take these threat identification initiatives to the next level, organizations should start looking at design flaws that tools and automation cannot identify. DevSecOps teams can do this through threat modeling.
Organizations have to adopt threat modeling to uncover how their systems work and interact, and whether those interactions pose a threat. Identify who would want to attack your systems and where your assets are, in order to understand potential attack vectors and enable the appropriate security controls.
Threat modeling requires a human to think critically and be clever. With threat modeling, you identify the assets in your systems and the threat actors you should care about. Based on that, you define the threat vectors those attackers would use to try to get to your assets. With this information, you can start assigning trust zones within your systems, determine how those interactions occur and review whether you have the right controls in place, like authentication, authorization, encryption, and error handling and logging.
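The elements above (actors, assets, vectors, controls) can be captured in a simple structure so that gaps stand out. The thinking is still human work; this sketch only records its output, and every actor, asset and control named here is a made-up example rather than a standard taxonomy.

```python
# Sketch: record threat-model findings and surface unmitigated vectors.
from dataclasses import dataclass, field

@dataclass
class ThreatVector:
    actor: str                    # who would want to attack
    asset: str                    # what they are after
    path: str                     # how they would try to get to it
    controls: list = field(default_factory=list)  # mitigations in place

    @property
    def unmitigated(self) -> bool:
        # A vector with no recorded controls needs attention.
        return not self.controls

# Hypothetical model for a patient-facing system.
model = [
    ThreatVector("external attacker", "patient records",
                 "credential stuffing against the patient portal",
                 controls=["MFA", "account lockout", "audit logging"]),
    ThreatVector("malicious insider", "SSNs in the billing database",
                 "direct database access"),  # no controls recorded yet
]

gaps = [v for v in model if v.unmitigated]
```

Reviewing the `gaps` list during design review is one lightweight way to turn the modeling exercise into concrete control requirements.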
3. Bring design and security together through secure code review
A recent Ponemon Institute report showed that 71% of application security professionals believe security is undermined by developers who don’t include proper security functionality early in the software development lifecycle (SDLC). But, by bringing together application security and DevOps teams in a collaborative secure code review (SCR) process, vulnerabilities can be remediated prior to cloud deployment.
Like threat modeling, SCR is a manual process that identifies vulnerabilities automated scanners cannot detect. By introducing SCR before the first line of code is written, or as early in the SDLC as possible, organizations can identify real vulnerabilities in advance of deployment to the cloud. This increases team productivity, thwarts future outside attacks and reduces the cost of fixing vulnerabilities that would otherwise be found too late in the SDLC.
4. Being in the cloud doesn’t guarantee security
Simply put, never take security for granted. Just because you are in the cloud and your vendors provide certain baselines of security control and protection doesn’t mean you no longer have to think about security. Case in point: You may be deploying your software in the cloud, but if the software itself has vulnerabilities, the provider’s protections won’t compensate for them. It’s important to still insist upon the basics, like SCR, threat modeling and other practices that are part of traditional deployments, to understand the risk implications of the decisions you are making prior to deployment to the cloud.
For example, AWS and Azure have extensive cloud computing security efforts in place, but it is important to understand that cloud security is a shared responsibility among providers and organizations. While cloud providers will provide underlying security for the platform infrastructure, customers still need to securely configure cloud services.
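The customer side of that shared responsibility can be partly automated. The sketch below scans a storage-bucket policy for statements that allow access to any principal; the policy document is a made-up example in the common JSON policy shape, not taken from a real account.

```python
# Sketch: flag policy statements that grant access to any principal ("*").
import json

# Hypothetical bucket policy: one public statement, one scoped statement.
policy_doc = json.loads("""
{
  "Statement": [
    {"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject"},
    {"Effect": "Allow",
     "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
     "Action": "s3:PutObject"}
  ]
}
""")

def public_statements(policy: dict) -> list:
    """Return Allow statements whose principal is a wildcard."""
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if stmt.get("Effect") == "Allow" and (
            principal == "*" or principal == {"AWS": "*"}
        ):
            flagged.append(stmt)
    return flagged
```

Running a check like this across every bucket on a schedule is one concrete way a customer can hold up its end of the shared-responsibility model.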
This is where cloud pen testing becomes critical to organizations. Cloud pen testing is used to identify security gaps in cloud infrastructures and provide actionable guidance for remediating the vulnerabilities to improve security posture and compliance.
Unfortunately, one of the challenges today is that organizations see security as a hindrance or as overhead, a belief that can have negative ramifications, especially when data is stored in the cloud. Yet the whole purpose of cloud-based systems is the ability to scale as needed and remain elastic.
In fact, security can contribute to systems that are higher quality, more scalable and more reusable, all while managing risk. Over time, as they build a culture of security, mature healthcare organizations (preferably, all organizations) should not only build standards around accessibility, usability and availability, but should start insisting on secure design standards for managing risk as well.
About the author
Nabil Hannan is a managing director at NetSPI. He leads the company’s consulting practice, focusing on helping clients solve their cybersecurity assessment and threat and vulnerability management needs. Hannan has over 13 years of experience in cybersecurity consulting from his tenure at Cigital/Synopsys Software Integrity Group, where he built and improved effective software security projects, such as risk analysis, pen testing, SCR and vulnerability remediation, among others.