Data Warehouse

Partitioning: How to Do It


A database or a subset of a database can be divided across multiple servers; each division is referred to as a partition. Stratus and Teradata developed the first massively parallel processing (MPP) databases with their own specialized hardware. Oracle created the first portable MPP database that ran on hardware from different vendors. Oracle, mainframe DB2, and Snowflake use a shared-everything architecture, in which every database instance can see the entire partitioned database and data block ownership is dynamic, coordinated by a complex distributed lock manager.

Other vendors use a more straightforward federated database approach in which each database instance sees only its own partition. Partitioning can also be useful on a single node: if you have IO-bound queries, you can explicitly partition your data across different IO channels and storage devices.

For example, if you have a powerful server that needs to perform a full table scan of a very large table, you can partition the table across multiple physical disks to overcome the physical constraints of a single device, such as limited bus bandwidth and the slow initial seek times caused by the rotational delay of physical disks. Partitioning scales the IO to match the available CPU capacity and shorten overall scan times.

Benefits of Data Partitioning

The most common reasons organizations partition a physical database are to improve performance, availability, and load balancing. Performance improves by executing queries in parallel across more processors than a single server can offer. Availability improves by limiting the impact of a server failure to just the partition of data that server manages. Parallel-aware database servers create query plans that distribute workloads across networked server nodes, spreading the processing and returning results faster.

For example, if you have a global pricing application that needs to be highly responsive, you could partition its data across three geographical data centers. Because each geographically local database holds only its regional pricing, it will be more responsive than a single global database, and availability improves because performing off-peak maintenance on the USA server will not impact the availability of the servers in EMEA and Asia.

As applications outgrow a single server, you don’t need to provision a bigger one to keep pace. If your database runs on a clustered system, adding another, smaller node can be more cost-effective. In the cloud, you can provision servers and storage on demand to cater for peak loads.

Data partitioning helps server performance during peak loads.

Cloud Data Management Strategy

On-premises data centers must provision servers within physical constraints, such as the size of the data center itself. Public cloud platforms from Amazon, Microsoft, and Google provide more capacity than an in-house data center. The Actian Data Platform runs on public cloud platforms; because storage is decoupled from compute resources, compute power can grow and contract with demand, independent of physical partitions.

When to Partition a Table

Partitioning complicates database management, so you should only use it when you cannot get adequate performance without it. Use partitioning when you need to scale database performance to support large transaction volumes and very large queries.

Partitioning Strategies

You can partition data in many ways, depending on the application’s needs and the data’s content. If performance is the primary goal, plan to spread the load evenly to get the most throughput. For example, you might set up 64 partitions per server and spread the data evenly using a calculated hash, or distribute it using a round-robin scheme. If the data has a naturally random key value with high cardinality, it can be bucketed into ranges of values per partition. Typically, data is partitioned horizontally to spread the workload across many servers. With vertical partitioning, you might instead divide the data into three global regions, keeping a high-level table with global values and detailed tables below it for each geographical region to support regional autonomy.

Partitioning Methods

Databases that support clusters or MPP configurations offer multiple partitioning methods. Below is a selection of partition methods:

  • Range Partitioning – divides rows based on a key value range. If your application does sales analysis by region, dividing the data by zip code ranges may make sense.
  • List Partitioning – organizes data based on a list of values. For example, the values in the list containing ‘Scotland’, ‘England’, ‘Northern Ireland’ and ‘Wales’ could go into a partition named ‘United Kingdom’ (see the sketch after this list).
  • Column Partitioning – splits a wide table by column: static columns are stored in one table, and the more dynamic columns are placed in a different table and server, with a view used to present them as a single object. A columnar database can be considered vertically partitioned, as each column is stored as its own table.
  • Round Robin Partitioning – inserts new rows into a different partition in a specific serial, repeating order.
  • Hash Partitioning – assigns each row an integer partition ID computed from a hash of a key derived from the inserted values, spreading rows evenly across partitions.
  • Key Partitioning – is like hash partitioning, with the difference being that it can evaluate multiple column values to determine the target partition.
  • Composite Partitioning – nests multiple levels of partitioning. For example, the first level of partitioning could be range-based, and the second level could be round robin within that range.
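For instance, the list method can be sketched in MySQL-style syntax (the Actian syntax later in this article supports HASH and AUTOMATIC distribution; the table, columns, and partition values here are purely illustrative):

CREATE TABLE regional_sales (
    sale_id INT NOT NULL,
    country VARCHAR(20) NOT NULL,
    amount  DECIMAL(10,2))
PARTITION BY LIST COLUMNS (country) (
    PARTITION united_kingdom VALUES IN ('Scotland', 'England', 'Northern Ireland', 'Wales'),
    PARTITION rest_of_world  VALUES IN ('France', 'Germany', 'Spain'));

A row inserted with country = ‘Wales’ lands in the united_kingdom partition; in MySQL, a row whose value matches no list is rejected.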

Partitioning Pruning

Partition pruning refers to an optimization that allows a query to skip the partitions that cannot contain matching rows, improving performance. For example, if you only need to query a certain zip code range, you can specify a WHERE clause stating the lower and upper bounds. The query optimizer will generate an IN clause with just the values in the range, skipping the other partitions.
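As a sketch, assuming a hypothetical orders table partitioned on its zip column, a bounded predicate lets the optimizer skip partitions that cannot hold matching rows:

-- Only partitions covering the '90001' to '90999' range need to be scanned
SELECT order_id, order_total
FROM orders
WHERE zip BETWEEN '90001' AND '90999';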

Actian Data Platform Partitioning

Partitioning is an essential strategy in deploying the Actian Data Platform, as it is architected for parallelism. The Actian Data Platform is designed to operate efficiently, making the best use of cluster resources, when at least one table in any nontrivial (joining) query is partitioned. Lack of appropriate partitioning can hurt performance, so consider partitioning the larger tables involved in joins.

Partitioning in the Actian Data Platform distributes rows of a table among sub-tables (partitions). A partitioning scheme determines which rows are sent to which partitions.

Partitioning is managed automatically. To define a table with an explicit partitioning scheme, use the PARTITION= option in the WITH clause of the CREATE TABLE statement. Because automatic partitioning is the default, the WITH PARTITION clause does not need to be specified explicitly.

The Actian Data Platform supports two partitioning schemes, which define a rule (distribution scheme) for assigning rows to partitions. Conceptually, a dimension defines a set of logical partitions. The following distribution types are available:

  • HASH – Distributes rows evenly among partitions using a hash value (instead of randomly). Given a value for partitioning columns, a query can predict which partition contains rows that have the matching value. Thus, a query can restrict its search to a subset of partitions. HASH is data-dependent and requires the ON clause.
  • AUTOMATIC – (Default) Randomly distributes rows among partitions.

The distribution scheme can be a default value set at the system level. The optional logical partition names must be unique within a table, but the same partition name can occur in other partitioned tables. If a partition name is omitted, the system generates one (of the form iipartNN).


Partitioning Syntax

A table partition definition has the following format:

PARTITION = (dimension)

The syntax for each partition dimension is:

dimension = rule partitionspec{, partitionspec}

rule

Defines the type of distribution scheme for assigning rows to partitions. Valid values are:

HASH ON column{, column}

Distributes rows evenly among partitions according to a hash value.

ON column{, column}

Specifies the columns on which to partition the table.

AUTOMATIC

(Default) Distributes rows randomly among partitions.

partitionspec

Defines the number of partitions and optionally their names:

partitionspec = DEFAULT PARTITIONS | [nn] PARTITION[S] [ ( name{, name} ) ]

where:

DEFAULT PARTITIONS

Uses the default number of partitions configured for optimum performance based on your warehouse size.

The statement returns an error if the default partition value is not set.

nn

Is the number of partitions, which defaults to 1 if omitted.

name

Identifies the partition. When the number of partitions is two or more, a comma-separated list of names can override the default names.

Default: iipartNN
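Putting the syntax elements above together, here is a sketch of a dimension with explicit partition names (the table, column, and partition names are illustrative):

CREATE TABLE region_sales (
region_id INTEGER NOT NULL,
amount FLOAT8)
WITH PARTITION = (HASH ON region_id
4 PARTITIONS (p_north, p_south, p_east, p_west));

Omitting the name list would leave the system to generate names of the form iipartNN.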

Guidelines for Partitioned Tables

You should choose the partition key from columns whose values are uniformly distributed, for example, primary/foreign keys. When you expect many queries that join tables A and B on the condition A.fk_col = B.col, good partitioning keys for A and B are fk_col and col, respectively.
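A minimal sketch of that guideline, using hypothetical tables A and B joined on A.fk_col = B.col (the column types and partition counts are illustrative):

CREATE TABLE B (
col INTEGER NOT NULL,
descr VARCHAR(40))
WITH PARTITION = (HASH ON col 16 PARTITIONS);

CREATE TABLE A (
id INTEGER NOT NULL,
fk_col INTEGER NOT NULL)
WITH PARTITION = (HASH ON fk_col 16 PARTITIONS);

-- Matching join keys hash to corresponding partitions, so the join
-- can be resolved partition by partition without reshuffling rows
SELECT A.id, B.descr
FROM A JOIN B ON A.fk_col = B.col;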

One partition per core, per node is recommended:

num_partitions = num_nodes * K

where K is a divisor of the number of physical cores per node.
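For example, a four-node cluster with 16 physical cores per node and K = 16 gives num_partitions = 4 * 16 = 64, that is, one partition per core.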

Creating an index on columns that define a foreign relationship is not allowed when tables joined by the foreign relationship do not have the same number of partitions or are not partitioned on columns (or a matching subset) used for the foreign key relationship. For example:

The following is allowed:

CREATE TABLE X (a i4 NOT NULL,
b i4 NOT NULL,
c i4 NOT NULL)
WITH PARTITION=(HASH ON a,c 2 PARTITIONS);
ALTER TABLE X ADD CONSTRAINT pk_x PRIMARY KEY (a,c);

CREATE TABLE Y (c i4 NOT NULL,
d i4 NOT NULL,
e i4)
WITH PARTITION=(HASH ON d,e 2 PARTITIONS);

ALTER TABLE Y ADD CONSTRAINT fk_y FOREIGN KEY(d,e) REFERENCES X(a,c);

CREATE INDEX idx_y ON Y(d,e);

 

Partitioning X on just c and Y on just e (a matching subset of the foreign key columns) is also valid.
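Conversely, here is a sketch of a configuration the rule above disallows, reusing table X from the example (Z is hypothetical and has a different partition count than X):

CREATE TABLE Z (c i4 NOT NULL,
d i4 NOT NULL,
e i4)
WITH PARTITION=(HASH ON d,e 4 PARTITIONS);

ALTER TABLE Z ADD CONSTRAINT fk_z FOREIGN KEY(d,e) REFERENCES X(a,c);

-- Expected to be rejected: Z has 4 partitions while X has 2
CREATE INDEX idx_z ON Z(d,e);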

The default AUTOMATIC partitioning scheme distributes rows randomly but evenly among partitions. Unlike with hash-distributed tables, rows with equal key values are not guaranteed to be assigned to the same partition. As a result, the system typically must reorganize data before resolving a query. For example, joining two automatically partitioned tables usually requires reshuffling rows. This extra step can degrade query performance.

AUTOMATIC partitioning should be used in the following cases:

  • When insight is lacking for creating a good hash key. That is, when:
    • There is no obvious joining key
    • There is no good candidate for hash distributing the table (arbitrary data)
    • The table does not share a common join key with other tables
    • The table is a temporary staging table
  • When defining an automatically partitioned table as a preparatory step before choosing a good hash key: you can use CREATE STATISTICS and then SELECT the minimum and maximum column values and COUNTs to better decide which columns to use as the HASH distribution key (see the sketch after this list)
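A minimal sketch of that preparatory step, assuming a staging table named stage_orders and a candidate key column cust_id (both names are illustrative, and the exact CREATE STATISTICS options may vary):

-- Gather optimizer statistics for the staging table
CREATE STATISTICS FOR stage_orders;

-- Profile a candidate hash key: a wide value range and a high distinct
-- count suggest it will spread rows evenly across partitions
SELECT MIN(cust_id) AS min_val,
       MAX(cust_id) AS max_val,
       COUNT(*) AS row_count,
       COUNT(DISTINCT cust_id) AS distinct_values
FROM stage_orders;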

Default Distribution Scheme

You can assign a default distribution scheme when creating partitioned tables; the default distribution setting can be overridden at the session level using the SET PARTITION_SCHEME statement. You can also assign a default number of partitions. Specify WITH PARTITION = (rule ON column DEFAULT PARTITIONS) when creating or modifying tables, and the table will be partitioned into the configured number of partitions. The default partition count setting can be overridden at the session level using the SET PARTITION_PARTS statement.
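As a sketch, and assuming the SET statements named above accept a simple value (their exact argument syntax is an assumption here, and 32 is an arbitrary illustration), a session could override the default count before creating a table that uses DEFAULT PARTITIONS:

-- Assumed syntax: override the session's default partition count
SET PARTITION_PARTS = 32;

CREATE TABLE staging_orders (
order_id INTEGER NOT NULL,
cust_id INTEGER NOT NULL)
WITH PARTITION = (HASH ON cust_id DEFAULT PARTITIONS);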

CREATE TABLE Examples

For Google Cloud, create a table with default AUTOMATIC partitioning:

CREATE TABLE customer (
custid INT NOT NULL DEFAULT 0,
zip CHAR(5) NOT NULL);

Create a table without partitioning:

CREATE TABLE customer (
custid INT NOT NULL DEFAULT 0,
zip CHAR(5) NOT NULL)
WITH NOPARTITION;

Create a HASH-partitioned table with 16 partitions distributed based on emp_no column:

CREATE TABLE employee (
emp_no INTEGER NOT NULL NOT DEFAULT,
emp_name CHAR(32) NOT NULL NOT DEFAULT,
dept_no INTEGER,
emp_rating INTEGER)
WITH JOURNALING,
PARTITION = (HASH ON emp_no
16 PARTITIONS);

Create a HASH-partitioned table using the default partition count:

CREATE TABLE employee (
emp_no INTEGER NOT NULL NOT DEFAULT,
emp_name CHAR(32) NOT NULL NOT DEFAULT,
dept_no INTEGER,
emp_rating INTEGER)
WITH JOURNALING,
PARTITION = (HASH ON emp_no DEFAULT PARTITIONS);

Create a table in which the Social Security number is encrypted using AES 128-bit encryption. Do not SALT the value (add 16 bytes of random bits to the field to further obfuscate the encrypted value):

CREATE TABLE socsectab (
  fname CHAR(10),
  lname CHAR(20),
  socsec CHAR(11) ENCRYPT NOSALT )
  WITH ENCRYPTION=AES128, PASSPHRASE='this is a secret', NOPARTITION;

Create a table in which data for column c2, which contains salary data, is encrypted using AES 256-bit encryption. Salt is added to the field by default:

CREATE TABLE t1 (
  c1 CHAR(20) NOT NULL,
  c2 MONEY ENCRYPT)
  WITH ENCRYPTION=AES256, PASSPHRASE='decoder ring', NOPARTITION;

Create a table with a sampled min-max index on two columns:

CREATE TABLE sales_fact (
sales_date ANSIDATE,
value INTEGER2,
quantity FLOAT8)
WITH MINMAX=(sales_date, quantity), MINMAX_SAMPLES;

Create a table with the address and salary columns masked:

CREATE TABLE employee(
name VARCHAR(20),
address VARCHAR(20) MASKED,
salary FLOAT MASKED AS 0);

Create a partitioned table with rows distributed automatically (that is, randomly):

CREATE TABLE employee (
emp_no INTEGER NOT NULL NOT DEFAULT,
emp_name CHAR(32) NOT NULL NOT DEFAULT,
dept_no INTEGER,
emp_rating INTEGER)
WITH
PARTITION = (AUTOMATIC 8 PARTITIONS);

Create the “movies” table without partitioning:

CREATE TABLE movies AS SELECT * FROM cinema WITH NOPARTITION;

Create the “books” table with default AUTOMATIC Partitioning:

CREATE TABLE books AS SELECT * FROM titles;

If you would like to evaluate the capabilities of the Actian Data Platform, you can sign up for a free trial here.