Today, Snowflake is a go-to platform for enterprise data. The company started out as a simple data warehouse platform a decade ago but has since evolved into a comprehensive data cloud that supports a wide range of workloads, including data lakes.
More than 6,000 businesses now trust Snowflake to handle their data workloads and to generate the insights and applications that grow their business. Together, they hold more than 250 petabytes of data in the data cloud, with more than 515 million data workloads running every day.
At such a large scale, cybersecurity concerns inevitably arise. Snowflake recognizes this and provides scalable security and access control features to protect not only accounts and users but also the data they store. However, organizations can overlook certain basics, leaving their data clouds only partially secured.
Here are some quick tips to fill these gaps and build a secure enterprise data cloud.
1. Secure your connection
First of all, all organizations using Snowflake, regardless of size, should focus on using secure networks and the SSL/TLS protocol to prevent network-level threats. According to Matt Vogt, VP of global solution architecture at Immuta, a good way to start is to connect to Snowflake over a private IP address using your cloud provider’s private connectivity service, such as AWS PrivateLink or Azure Private Link. This creates private VPC endpoints that allow a direct, secure connection between your AWS/Azure VPC and the Snowflake VPC without traversing the public internet. Additionally, network access controls, such as IP filtering, can be used for third-party integrations, further enhancing security.
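In Snowflake itself, IP filtering is configured with a network policy (`CREATE NETWORK POLICY ... ALLOWED_IP_LIST = (...)`). The allow/deny logic such a policy applies can be illustrated with a short Python sketch; the CIDR ranges and the helper name are illustrative, not Snowflake APIs:

```python
import ipaddress

# Hypothetical allowlist, mirroring the ALLOWED_IP_LIST of a Snowflake network policy.
ALLOWED_CIDRS = ["10.0.0.0/8", "192.168.1.0/24"]

def is_ip_allowed(client_ip: str, allowed_cidrs=ALLOWED_CIDRS) -> bool:
    """Return True if client_ip falls inside any allowed CIDR block."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)
```

A connection from `10.1.2.3` would be admitted under this example policy, while one from a public address such as `8.8.8.8` would be rejected.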
2. Source data protection
While Snowflake provides multiple layers of protection for imported data – such as Time Travel and Fail-safe – these tools cannot help if the source data itself is missing, deleted, or corrupted in some way (for example, maliciously encrypted by ransomware). This kind of problem, as Chadd Kenney, Clumio’s vice president of product, suggests, can only be solved by taking measures to protect data while it sits in object stores like Amazon S3 – before import. In addition, to protect against logical deletions, it is recommended to maintain continuous, immutable, and preferably air-gapped backups that can be restored quickly into Snowflake, for example via Snowpipe.
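One simple building block of such source-data protection is integrity verification: comparing a checksum of the backup copy against the original so that silent corruption or tampering is caught before the data is re-ingested. A minimal sketch, with hypothetical helper names, using only the standard library:

```python
import hashlib

def checksum(path: str) -> str:
    """SHA-256 digest of a file, computed in chunks to handle large objects."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_is_intact(source_path: str, backup_path: str) -> bool:
    """True only if the backup copy is byte-identical to the source."""
    return checksum(source_path) == checksum(backup_path)
```

In practice the reference digest would be stored alongside the immutable backup at creation time, so a later mismatch flags corruption or malicious encryption of either copy.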
3. Consider SCIM with multi-factor authentication
Enterprises should use SCIM (System for Cross-domain Identity Management) to support automated provisioning and management of user identities and groups (i.e., the roles used to grant access to objects such as tables, views, and functions) in Snowflake. This makes user data more secure and simplifies the user experience by reducing the role of local system accounts. Additionally, by using SCIM where possible, businesses will also have the option to configure the SCIM provider to synchronize users and roles with Active Directory users and groups.
In addition, businesses should use multi-factor authentication to establish an additional layer of security. Depending on the interface used – such as a client that uses a driver, the Snowflake UI, or Snowpipe – the platform supports multiple authentication methods, including username/password, OAuth, key pairs, external browser, federated authentication using SAML, and Okta native authentication. Where multiple methods are supported, the company recommends giving OAuth first preference (Snowflake OAuth or External OAuth), followed by external browser authentication, Okta native authentication, and key pair authentication.
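The preference order above maps onto connection parameters of the Snowflake Python connector (`authenticator`, `token`, `private_key` are documented connector options). The helper below, whose name is illustrative, sketches how an application might assemble the keyword arguments for `snowflake.connector.connect()` for each method; no connection is made here:

```python
def build_connect_args(account: str, user: str, method: str, secret=None) -> dict:
    """Assemble connect() kwargs for one of the preferred authentication methods."""
    args = {"account": account, "user": user}
    if method == "oauth":
        args["authenticator"] = "oauth"
        args["token"] = secret                     # OAuth access token
    elif method == "externalbrowser":
        args["authenticator"] = "externalbrowser"  # SSO via the user's browser
    elif method == "keypair":
        args["private_key"] = secret               # private key for key pair auth
    else:
        raise ValueError(f"unsupported method: {method}")
    return args
```

The resulting dictionary would be passed straight to `snowflake.connector.connect(**args)`; keeping this logic in one place makes it easy to enforce the OAuth-first preference organization-wide.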
4. Column-level access control
Organizations should use Snowflake’s dynamic data masking and external tokenization to restrict certain users’ access to sensitive information in certain columns. Dynamic data masking, which can automatically obfuscate column data based on who queries it, can be used to limit a column’s visibility based on the user’s country: for example, US employees can only view US order data, while French employees can only view order data from France.
Both features are quite effective, but they rely on masking policies to work. To get the most out of them, organizations should first decide whether to centralize masking policy management or decentralize it to the teams that own individual databases, depending on their needs. Additionally, they can use INVOKER_ROLE() in policy conditions to allow unauthorized users to view aggregate data on protected columns while masking individual values.
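In Snowflake, such a rule is expressed as `CREATE MASKING POLICY ... RETURNS ... -> CASE WHEN CURRENT_ROLE() IN (...) THEN val ELSE '***MASKED***' END`. The country-based example above can be simulated in a few lines of Python; the role names, mapping, and mask string are illustrative, not part of any Snowflake API:

```python
# Hypothetical role-to-country mapping, standing in for a masking policy condition.
AUTHORIZED_ROLES = {"SUPPORT_US": "US", "SUPPORT_FR": "FR"}

def masked_value(order_country: str, value: str, querying_role: str) -> str:
    """Reveal the column value only to the role assigned to that country."""
    if AUTHORIZED_ROLES.get(querying_role) == order_country:
        return value
    return "***MASKED***"
```

A query from the `SUPPORT_US` role would see US order values in the clear, while the same rows would come back masked for `SUPPORT_FR` – the same behavior the real policy applies transparently at query time.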
5. Implement a unified audit model
Finally, organizations should not forget to implement a unified audit model to ensure transparency into the policies in place. This will help them proactively monitor policy changes – such as who created the policy that gave user X or group Y access to certain data – and, just as important, track data access and query patterns.
To view account usage patterns, use the system-defined, read-only shared database named SNOWFLAKE. It contains a schema named ACCOUNT_USAGE, whose views provide access to one year of audit logs.
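As a sketch of how that audit data might be consulted, the helper below (a hypothetical function, not a Snowflake API) builds a query against the ACCOUNT_USAGE.QUERY_HISTORY view – which does expose USER_NAME, QUERY_TEXT, and START_TIME columns – to list who has recently queried a sensitive table:

```python
def audit_query(table_name: str, days: int = 7) -> str:
    """Build a SQL statement listing recent queries that touched table_name."""
    return (
        "SELECT user_name, query_text, start_time "
        "FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY "
        f"WHERE query_text ILIKE '%{table_name}%' "
        f"AND start_time > DATEADD('day', -{days}, CURRENT_TIMESTAMP()) "
        "ORDER BY start_time DESC"
    )
```

The returned string would be executed through a Snowflake cursor; note that ACCOUNT_USAGE views have some ingestion latency, so very recent activity may take a while to appear.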
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Explore our Briefings.