Today’s data-driven enterprises know that making data available across the company can lead to improved results—from higher sales to better customer satisfaction to greater market share. But the business-level need to make data available has been in conflict with the business-level requirement that data be shared securely. These competing drivers have created an odd mismatch between what companies would like to do with data and what they have actually been able to achieve. To deliver data both swiftly and securely, companies should focus on these three keys.
Know and show your data
Before you can provide data to users, you must document what data you have. This means discovering data across databases and applications, in cloud SaaS platforms and in on-premises legacy databases. The data must be typed: are these values names, Social Security numbers or email addresses? Then it should be tagged with business context: is the email address from Salesforce or from HR? Does it belong to a prospect or an employee?
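The type-then-tag pass described above can be sketched in a few lines. This is purely illustrative: the regex patterns, source names and the Salesforce-means-prospect / HR-means-employee mapping are assumptions for the example, not any product's actual classification logic.

```python
import re

# Illustrative type detectors: real discovery tools use far richer classifiers.
TYPE_PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
}

# Assumed mapping from source system to business context.
SOURCE_TAGS = {"Salesforce": "prospect", "HR": "employee"}

def classify(value: str, source: str) -> dict:
    """First type the value, then tag it with business context."""
    data_type = next(
        (name for name, pattern in TYPE_PATTERNS.items() if pattern.match(value)),
        "unknown",
    )
    tag = SOURCE_TAGS.get(source, "unclassified")
    return {"value": value, "type": data_type, "source": source, "tag": tag}
```

For example, `classify("jane@example.com", "HR")` would come back typed as an email address and tagged as belonging to an employee.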
Once the data has been discovered, analyzed and classified, the available data types and tags can be displayed to users via a data catalog—just like any e-commerce platform. Users should be able to search for data to enable a specific use case, such as sharing custom coupons near specific locations. They can select email addresses, GPS data, available inventory and any other information needed to achieve the goal and add those to their “data shopping cart,” regardless of where the data originates or resides. The backend structure is invisible to users. The goal is to make it as easy as ordering from Amazon.
Control and govern your data
Unfortunately, here’s where the e-commerce metaphor falls apart. While the e-commerce process is mostly automated end to end, security-focused “default to no” and Zero Trust policies have forced companies to evaluate requests individually and manually as they come in. This leads to labor-intensive data control and release processes on the back end. Once a user “places an order” for data, it triggers a workflow that consists primarily of email notifications to one or more data stewards. What should be a 5-minute fulfillment task can actually take 3-4 days. Data stewards must check the policies to confirm whether the user is authorized to access the data. The requester may have asked for data from multiple locations with various owners – Snowflake data may be owned by analytics and Oracle on-prem data by IT or operations. There can be different data stewards for different data sets, and review and approval tasks often fall on data teams who have other full-time responsibilities on top of their data steward duties.
Such a manual process is rife with opportunities for human error. Data stewards could accidentally grant access that is too broad, lasts too long or covers the wrong data entirely. And because human error can lead to data breaches, manual processes increase that risk. Still, this process was almost manageable when requests came in at 1 or 2 per week. But customers are telling us they’re now seeing 1 or 2 per hour. It’s simply not sustainable.
Unify and automate your process
Unifying and automating the entire process solves this issue and upgrades the complete data delivery experience. It makes the whole mechanism faster and more secure than the sum of its parts. The data governance tool acts as the brain, knowing who should have access to which data. Automated data discovery, analysis and cataloging with SaaS-based tools like OneTrust and Collibra allow companies to find data across the entire ecosystem, document data lineage, and type and tag it. Once that data has been identified, policy and permissions can be applied.
An access control tool like ALTR then acts as the muscle, regulating access based on commands it receives from the brain and bypassing the time-consuming, error-prone manual authorization workflow. ALTR’s SaaS-based solution also spans multiple database types – both on-prem and in the cloud – to update access permissions in real time. This provides one central command and control center across the data ecosystem, unlike proxy-based solutions that must be implemented and managed separately for each database. ALTR also helps define and control what “access” means – is it root-level access or a reader account? Is the data masked or limited by amount? For example, HIPAA regulations require that only the minimum necessary data be available to the user to achieve their task.
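The “brain and muscle” split described above can be sketched as a policy lookup followed by enforcement. This is a hypothetical illustration of the pattern, not ALTR’s actual API: the role names, the policy shape and the last-four-characters masking rule are all assumptions made for the example.

```python
def apply_policy(value: str, role: str, policy: dict):
    """Enforce what 'access' means for a given role: full, masked, or none."""
    access = policy.get(role, "none")
    if access == "full":
        return value
    if access == "masked":
        # Minimum-necessary style masking: expose only the last 4 characters.
        return "*" * (len(value) - 4) + value[-4:]
    # "Default to no": roles without an explicit grant get nothing.
    return None

# The governance "brain" supplies the policy; enforcement just applies it.
policy = {"dba": "full", "analyst": "masked"}
```

With this policy, a DBA sees the raw value, an analyst sees a masked one, and any role not in the policy is denied outright.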
Finally, the constant consumption feedback ALTR provides acts as the senses, letting the brain know what data is actually being consumed, and by whom. This allows teams to double-check usage against existing policies and correct any misalignments. To return to the Amazon example, this would be as if someone bought too many masks to resell during a pandemic, and Amazon blocked the purchase for violating its policies.
The promise of data delivered
When users have access to the data they need in minutes instead of days, the whole company can perform better. When unification and automation not only deliver that speed but also reduce the risk of a breach caused by human error, the company’s entire data set is safer even as it’s being shared. Now the promise of data usage across the company can be delivered.
Download our White Paper to learn how the best data strategy is enabled by a strong data defense.