Keeping protected health information private in the era of AI

Artificial intelligence-powered technology has a lot to offer hospitals, but it also comes with risks. In this guest post, Hoala Greevy, founder and CEO of a HITRUST CSF-certified secure email solution, offers four ways healthcare facilities can keep patients’ protected health information safe while using AI-powered services to work more efficiently and effectively.

__________________________________________________________

New artificial intelligence-powered technology is making healthcare services more convenient for both healthcare organizations and their patients. A recent Accenture study reports that 20% of respondents have already used AI-based healthcare services. That rise in popularity is no surprise: AI-powered technology gives healthcare organizations faster, better insights from the massive amounts of data collected in electronic health record systems.

This ease and convenience come at a price – elevated potential for security breaches. How can healthcare leaders make sure protected health information (PHI) isn't compromised in the rush toward the future of healthcare?

They may be cumbersome, but basic security best practices remain a crucial first step. Encrypting every laptop and hard drive at a healthcare organization and keeping them on the premises is a lot of work; it's more convenient to just let employees work from their personal computers. But that opens the door to disaster when unencrypted devices are stolen – as when Coplin Health Systems had to notify 43,000 patients of a potential data breach after a single laptop was stolen from an employee's car. And Coplin was lucky: With the prevalence of virtual healthcare and AI-powered technology, the breach could have been far more extensive.

Protecting patient information in the AI era

Creating a better future for healthcare providers and patients alike comes with potential pitfalls, but there are ways to minimize the risk. It’s worth learning how, because AI-powered services can help professionals more efficiently and accurately diagnose patients, sift through paperwork and keep track of prescriptions while ensuring PHI is safe.

Here are four ways to ensure such a future:

1. Require authentication.

AI must pull data from somewhere – namely, your databases – so those systems need to be secure. Because application programming interfaces (APIs) are increasingly exploited by hackers, use SSL/TLS and proper authentication to make API connections as secure as possible. Just as you must verify your identity when withdrawing money from a bank, the app or program should verify a healthcare professional's identity, whether with a PIN or an ID, before handing over information.
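As a rough illustration – not the author's product or any particular vendor's implementation – here is a minimal Python sketch of token-based API authentication using Flask, with hypothetical endpoint and token names:

    from functools import wraps

    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)

    # In a real deployment, tokens come from a secrets manager, never source code.
    VALID_API_TOKENS = {"example-token-do-not-use"}

    def require_token(view):
        """Reject any request lacking a valid bearer token before it touches PHI."""
        @wraps(view)
        def wrapper(*args, **kwargs):
            auth_header = request.headers.get("Authorization", "")
            token = auth_header.removeprefix("Bearer ").strip()
            if token not in VALID_API_TOKENS:
                abort(401)  # unauthenticated callers get nothing
            return view(*args, **kwargs)
        return wrapper

    @app.route("/api/patients/<patient_id>")
    @require_token
    def get_patient(patient_id):
        # Placeholder response; a real service would query its own database.
        return jsonify({"patient_id": patient_id, "status": "ok"})

    if __name__ == "__main__":
        # "adhoc" generates a throwaway TLS certificate (requires the
        # 'cryptography' package); production needs CA-issued certificates.
        app.run(ssl_context="adhoc")

The specifics will vary by vendor; the point is the pattern – every request travels over an encrypted channel and is authenticated before any PHI is returned.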

2. Insulate your data.

Once connections are secure, the next step is to make sure data isn't intermingled. To do that, you need to understand the data flow of the AI software and applications you plan to use. Will the data be de-identified and used in aggregate? Or will it stay siloed within your own network?

The advantages of using AI are only as good as the data used to train the algorithms, which is why organizations like IBM and MIT are setting up joint AI research laboratories to address the risks that come with data curation. It can sometimes make sense to use data in aggregate (i.e., drawn from a pool of sources rather than a single database), but you need to make sure doing so is compliant – especially when interoperability is so limited.
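For instance, here is a minimal Python sketch, with hypothetical field names, of stripping direct identifiers before records are pooled. Note that a salted hash of the record number is pseudonymization, not full de-identification; real de-identification must follow the HIPAA Safe Harbor or Expert Determination standards:

    import hashlib

    # Hypothetical set of direct identifiers to strip before aggregation.
    DIRECT_IDENTIFIERS = {"name", "ssn", "address", "phone", "email", "mrn"}

    def deidentify(record, salt):
        """Drop direct identifiers and replace the medical record number with
        a salted one-way hash so rows can still be linked across datasets."""
        clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        digest = hashlib.sha256((salt + record["mrn"]).encode()).hexdigest()
        clean["subject_id"] = digest[:16]
        return clean

    record = {"mrn": "A12345", "name": "Jane Doe", "age": 54, "dx": "E11.9"}
    print(deidentify(record, salt="per-project-secret"))
    # {'age': 54, 'dx': 'E11.9', 'subject_id': '...'}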

3. Secure your database.

Your database should already be secure in order to be compliant. Still, double-check your servers, whether they're physical or virtual. If a server is breached, not only can data be exfiltrated, but malicious code can also be injected to corrupt the data an AI program uses to make decisions. That's a serious problem if you're relying on that data for any clinical decisions.
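One way to catch that kind of tampering – an illustrative technique, not something the author prescribes – is to checksum the data an AI pipeline consumes against a manifest captured when the data was known to be good. A minimal Python sketch, with hypothetical file paths:

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path):
        """Hash a file in chunks so large datasets needn't fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_dataset(data_dir, manifest_path):
        """Return the files whose current hash no longer matches the manifest."""
        manifest = json.loads(manifest_path.read_text())
        return [
            name for name, expected in manifest.items()
            if sha256_of(data_dir / name) != expected
        ]

    # Usage (hypothetical paths): halt the pipeline if anything has changed.
    # tampered = verify_dataset(Path("training_data"), Path("manifest.json"))
    # if tampered:
    #     raise RuntimeError(f"Data integrity check failed: {tampered}")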

Because misconfiguration exposes around 30% of online healthcare databases, issues in your systems must be identified and addressed quickly. That way, you can prevent potential threats and mitigate risk if a breach does occur.

4. Assess your software for role-based limitations.

Do the software or applications you're using for AI allow for role-based limitations? Insider breaches, according to the CERT Insider Threat Center, are twice as expensive and damaging as external breaches. And the risk isn't limited to employees – it extends to anyone given access to your networks and accounts. To make matters worse, organizations overlook around 75% of such threats.

You want as little data as possible available to each end user. If employees don't need administrative access, they shouldn't have it. If an application doesn't allow for different levels of permissions, you may need to move on to another vendor.
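As a final illustration – with hypothetical roles and permissions, not any specific product's model – here is a minimal deny-by-default role check in Python; real systems would enforce this at the application or database layer:

    from enum import Enum

    class Permission(Enum):
        READ_OWN_PATIENTS = 1
        READ_ALL_PATIENTS = 2
        EXPORT_DATA = 3
        ADMIN = 4

    # Each role gets only the permissions its job actually requires.
    ROLE_PERMISSIONS = {
        "nurse": {Permission.READ_OWN_PATIENTS},
        "physician": {Permission.READ_OWN_PATIENTS, Permission.READ_ALL_PATIENTS},
        "analyst": {Permission.READ_ALL_PATIENTS, Permission.EXPORT_DATA},
        "admin": {Permission.ADMIN},
    }

    def can(role, permission):
        """Deny by default: unknown roles get no permissions at all."""
        return permission in ROLE_PERMISSIONS.get(role, set())

    assert can("physician", Permission.READ_ALL_PATIENTS)
    assert not can("nurse", Permission.EXPORT_DATA)  # least privilege in action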

The healthcare industry struggled publicly with cybersecurity in 2018, with 8.7 million records breached in just the first nine months. Massive breaches and phishing attacks illuminated the need for organizations to take both basic and advanced security precautions. As AI solutions gain popularity and data becomes more plentiful, take steps in 2019 to protect the PHI your patients have entrusted you with.

Hoala Greevy is the founder and CEO of Paubox, the only HITRUST CSF-certified secure email solution.
