DATABASE
CREATE DATABASE
CREATE DATABASE creates a new database. The database name must be distinct from the name of any existing database in the system.
To create a database, you must have the CREATE DATABASE privilege for the current system.
Syntax
| Parameter | Type | Description |
|---|---|---|
database_name | string | A unique identifier for the database. The system generates an error if a duplicate name is provided. system is a reserved database name and can neither be created nor dropped. |
This example creates a database named ocient.
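Based on the description above, the statement takes this minimal form (a sketch; the full syntax may include additional options):

```sql
CREATE DATABASE ocient;
```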
DROP DATABASE
DROP DATABASE removes an existing database. This SQL statement also disconnects all users currently connected to the database.
To remove a database, you must have the DROP DATABASE privilege for the current database.
You cannot drop a database while it has any PIPELINE in a running status.
Syntax
| Parameter | Type | Description |
|---|---|---|
database_name | string | An identifier for the database to be dropped. You can drop multiple databases by specifying additional database names and separating each with commas. system is a reserved database name and can neither be created nor dropped. |
This example drops the database named ocient.
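A minimal sketch of the statement described above:

```sql
DROP DATABASE ocient;
```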
ALTER DATABASE
ALTER DATABASE RENAME
ALTER DATABASE RENAME renames an existing database.
To rename a database, you must have the ALTER DATABASE privilege for the database.
Syntax
| Parameter | Type | Description |
|---|---|---|
old_database_name | string | The old identifier of the database for rename. |
new_database_name | string | The new identifier of the database for rename. |
This example renames the database oracle to ocient.
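A sketch of the rename, assuming the common RENAME TO clause form:

```sql
ALTER DATABASE oracle RENAME TO ocient;
```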
ALTER DATABASE SET SSO INTEGRATION
ALTER DATABASE SET SSO INTEGRATION configures the database to authenticate using an external SSO provider. This SSO integration is the default for connections unless you use a connectivity pool or specify a different provider.
To set a connection, you must be a system-level user or a database administrator and have an open connection to the database.
This SQL statement is an alias for ALTER DATABASE ALTER SSO INTEGRATION.
See Configuring the Ocient Database for details about configuring SSO protocols.
If your Ocient® System is version 25.0 or later, you can create multiple SSO integrations for each database. An SSO integration assigned to the database by the
ALTER DATABASE SQL statement is the primary SSO connection, unless you connect to the database with a connectivity pool that has a different SSO integration assigned to it.
Syntax
| Parameter | Type | Description |
|---|---|---|
database | string | The identifier of the database for configuration. |
sso_name | string | The identifier of the SSO integration to use. |
This example sets the SSO integration sso_test as the default for the database.
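A sketch of the statement, where example_db is a hypothetical database name:

```sql
ALTER DATABASE example_db SET SSO INTEGRATION sso_test;
```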
ALTER DATABASE ALTER SSO INTEGRATION
ALTER DATABASE ALTER SSO INTEGRATION configures the database to authenticate using an external SSO provider. This SSO integration is the default for connections unless you use a connectivity pool or specify a different provider.
To alter a connection, you must be a system-level user or a database administrator and have an open connection to the database.
This SQL statement is an alias for ALTER DATABASE SET SSO INTEGRATION.
See Configuring the Ocient Database for details about configuring SSO protocols.
If your Ocient System is version 25.0 or later, you can create multiple SSO integrations for each database. An SSO integration assigned to the database by the
ALTER DATABASE SQL statement is the primary SSO connection, unless you connect to the database with a connectivity pool that has a different SSO integration assigned to it.
Syntax
| Parameter | Type | Description |
|---|---|---|
database | string | The identifier of the database for configuration. |
sso_name | string | The identifier of the SSO integration to use. |
This example configures the database to use the SSO integration sso_test.
ALTER DATABASE REMOVE SSO INTEGRATION
ALTER DATABASE REMOVE SSO INTEGRATION removes an existing SSO integration as the default connection protocol for the database. This action effectively undoes the ALTER DATABASE SET SSO INTEGRATION SQL statement.
To remove a connection, you must be a system-level user or a database administrator.
Syntax
| Parameter | Type | Description |
|---|---|---|
database | string | The identifier of the database for deletion. |
This example removes the SSO integration from the database example_database.
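A sketch of the statement described above:

```sql
ALTER DATABASE example_database REMOVE SSO INTEGRATION;
```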
ALTER DATABASE ALTER SECURITY
Set security settings at the database level using the ALTER DATABASE ALTER SECURITY SQL statement. Replace <security_setting> with the security setting and <value> with its value.
Syntax
| Parameter | Data Type | Description |
|---|---|---|
database | string | The identifier of the database for setting security settings. |
security_setting | string | The security setting. Supported values: password_minimum_length, password_complexity_level, password_no_repeat_count, password_lifetime_days, password_invalid_attempt_limit. For details about these values, see Database Password Security Settings. |
value | numeric | An integer that represents the value of the security setting. For details about this value, see Database Password Security Settings. |
This example sets a security setting on the database example_db.
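A sketch using one of the settings listed above; the chosen setting and value are illustrative:

```sql
ALTER DATABASE example_db ALTER SECURITY password_minimum_length 12;
```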
TABLE
CREATE TABLE
Creates a new table in the current database. The table name must be distinct from the name of any existing tables in the database unless the REPLACE keyword is specified. To use REPLACE in the CREATE TABLE statement, you must have DELETE privileges.
By default, columns are nullable unless otherwise specified.
For faster query results, you can define one TimeKey® for the table, which must be a timestamp, date, or time column, with a specified bucket resolution. Tables with a specified TimeKey can perform query operations faster, especially if they involve time filtering.
You can specify a Clustering Key composed of one or more fixed-length columns. Designating columns that are frequently referenced in queries as Clustering Key columns can greatly improve performance.
For details about defining TimeKeys and clustering indexes, see TimeKeys and Clustering Keys.
See the Data Types section for table-supported data types.
To create a table, you must have the CREATE TABLE privilege for the current database.
For examples, see CREATE TABLE SQL Statement Examples.
| Parameter | Type | Description |
|---|---|---|
table_name | string | A unique identifier for the table. table_name must be distinct from the name of any existing tables in the database unless the REPLACE keyword is specified. |
query | string | A SELECT query used to load data into the newly created table. For details, see CREATE TABLE AS SELECT. |
Column Definition (<column_definition>)
The parameters listed here are required for defining each column in a table.
| Parameter | Type | Description |
|---|---|---|
column_name | string | An identifier for a column to be included in the newly created table. |
data_type | string | The data type of a specified column. For a list of supported data, see Data Types. |
Clustering Key and Index Definition (<clustering_definition>)
The parameters listed here are required for defining a Clustering Key or clustering indexes on a table.
For details about how to apply clustering columns, see Clustering Key.
| Parameter | Type | Description |
|---|---|---|
key_name | string | An identifier for the Clustering Key. |
ck_col1, ck_col2 [, ...] | string | A series of specific columns comprising the Clustering Key. Clustering Key columns must not be nullable. Specify NOT NULL in the column definition for the respective columns. For details, see column definition. No limit exists on the number of columns for the Clustering Key. |
index_name | string | Optional. An identifier for a clustering index. You must include the definition of columns that comprise the index using the idx_col1, idx_col2 [, ...] parameter. |
idx_col1, idx_col2 [, ...] | string | Optional. A series of specific columns comprising a clustering index. You can apply clustering indexes only to columns included in the Clustering Key. Specify any number of columns in any specified order. You must include the identifier for the index using the index_name parameter. |
TimeKey Definition (<timekey_definition>)
This syntax example is for a single TimeKey column, which can be included in the column definition of a CREATE TABLE statement. For the full syntax, see CREATE TABLE.
For details about using TimeKeys, see TimeKeys.
| Parameter | Type | Description |
|---|---|---|
column_name | string | An identifier for the column. |
data_type | string | Optional. The data type of the column. A TimeKey column should be DATE or TIMESTAMP type. The TimeKey column can also support INT or BIGINT type, but these data types do not use a bucket_value argument. |
bucket_granularity | numeric | The granularity of the TimeKey column, based on the specified TIME type in bucket_value. |
bucket_value | string | The TIME type to parse the TimeKey column. Supported values include: [ DAY, HOUR, MINUTE, SECOND ]. For example, BUCKET(1, DAY) sets the time-bucket granularity to a fixed width of one day. |
Column Constraint (<column_constraint>)
The parameters listed here include constraints and other configurations for individual columns. For best performance, one column with date or time data in each table should be defined as the TIME KEY with a specified bucket_value.
For details about TimeKey columns, see TimeKeys.
| Parameter | Type | Description |
|---|---|---|
bucket_granularity | numeric | The granularity of the TimeKey column based on the specified time type in bucket_value. |
bucket_value | string | The time type to parse the TimeKey column. Supported values include: [ DAY, HOUR, MINUTE, SECOND ]. For example, BUCKET(1, DAY) sets the time-bucket granularity to a fixed width of one day. |
literal | depends on the column data type | If specified as a constraint, sets the default value for the column. The value must be a literal that is enclosed in quotes. Do not include an expression with this argument to cast this value to the data type of the column. The system automatically attempts type coercion to convert the literal to the data type of the column. For example, this CREATE TABLE statement includes a default value for its Universally Unique IDentifier (UUID) column: CREATE TABLE example_table (col1 UUID NOT NULL DEFAULT '00000000-0000-0000-0000-000000000000');. The supported geospatial data types POINT, LINESTRING, and POLYGON require WKT formatting. For formatting examples, see the Well-known text representation of geometry. |
comment | string | An optional comment for the column. |
compression_value | numeric | An integer value of 1, 2, or 4 that defines the storage of COMPRESSION GDC. The compression of the values is defined as follows: A compression_value of 1 can hold up to 255 unique values. A compression_value of 2 can hold up to 65535 unique values. A compression_value of 4 can hold millions of unique values. For tuple columns, compression is specified for each tuple value rather than with other constraints. This example defines a tuple column with GDC compression only for the first inner VARCHAR value: my_tuple TUPLE<<INT, VARCHAR(255) COMPRESSION GDC(1), VARCHAR(255)>>. Using compression on a column that will contain more than one million unique values is not recommended. |
schema_name | string | Fully qualified column name, or a column name if you specify a column in the same table. If EXISTING is specified as a compression constraint, the compression GDC reuses the system lookup table for the schema_name column, rather than creating a new one. The existing column must be in the same database as the new column. schema_name should include the schema, table, and column names, each separated by periods and enclosed in double-quotation marks, for example: "schema_name"."table_name"."column_name". If an existing column is specified, any new column that uses the existing column's system lookup table must be deleted before the existing column can be deleted. |
compression_scheme | string | The type of compression used for the column. Supported values include: [ COMPRESSION NONE | COMPRESSION ZSTD | COMPRESSION DYNAMIC ]. If no compression setting is specified, the compression defaults to COMPRESSION DYNAMIC for fixed-length columns and COMPRESSION NONE for variable-length columns. COMPRESSION ZSTD applies to VARCHAR columns as well as other data types. For the COMPRESSION DYNAMIC setting, the Ocient System applies LZ4 compression only if the column data is dynamically determined to be compressible. For fixed-length columns, COMPRESSION DYNAMIC applies delta-delta compression. For details about Ocient-supported compression schemes, see Table Compression Options. |
Create Option (<create_option>)
The parameters listed here include various options to configure table storage space, segments, redundancy, streamloading, and indexes.
| Parameter | Type | Description |
|---|---|---|
storage_space_name | string | An identifier for the STORAGESPACE. |
segment_value | numeric | A value to define the size of the segment. |
segment_part | string | Specifies the segment part redundancy of the table. Supported values include: { DATA | MANIFEST | INDEX | SUMMARY_STATS | STATS }. These settings are defined as follows: DATA: The actual data for the table. MANIFEST: Header information stored about the data, which describes how to locate any specified cluster of rows within the data. INDEX: The index of the data used for quicker lookups and better query performance. SUMMARY_STATS: A collection of statistics on the data that includes compression, row count, and average column size. STATS: Used in the optimizer, the probability density function and combinable distinct estimators used to make better optimizations to query plans. |
redundancy_scheme | string | The redundancy scheme. Supported values include: { COPY | PARITY }. These settings are defined as follows: COPY: A copy of the bytes is stored throughout the storage cluster to ensure redundancy. This option uses more storage but is faster during rebuilds and node outages. PARITY: Using the parity encoding specified on the storage cluster, this option uses parity bits to ensure redundancy for the data. This option uses less storage but is slower during rebuilds and node outages. |
streamloader_json | string | A JSON string that defines the streamloader parameters. For details, see ALTER TABLE STREAMLOADER PROPERTIES. |
This example creates a table named trades.
The table uses the TIMESTAMP column created_at as the TimeKey, with the granularity set at 1 hour. The columns ticker_symbol and t_type are defined as the table clustering keys. The example also includes a streamloader property pageQueryExclusionDuration to delay how soon data pages that were recently added can be included in query results.
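The table described above can be sketched as follows. The keyword spellings follow the syntax tables in this section, and the non-key columns (price) and exact clause placement are assumptions:

```sql
CREATE TABLE trades (
    created_at TIMESTAMP NOT NULL TIME KEY BUCKET(1, HOUR),  -- TimeKey with 1-hour granularity
    ticker_symbol VARCHAR(16) NOT NULL,                      -- Clustering Key columns must be NOT NULL
    t_type CHAR(1) NOT NULL,
    price DECIMAL(12, 2),                                    -- hypothetical data column
    CLUSTERING KEY ck (ticker_symbol, t_type)
)
STREAMLOADER_PROPERTIES '{"pageQueryExclusionDuration": "30s"}';
```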
CREATE TABLE AS SELECT (CTAS)
CTAS provides the ability to create and load a new table from the result of a query on one or more existing tables. The first column of the query result maps to the first column of the new table definition, the second column maps to the second column of the new table, and so on. The new table is available for querying after it has been created and the entire result set from the query has been loaded into the table. When you receive a response to the CTAS SQL statement, the load is complete and the table is ready. When you create a table from a SELECT SQL statement, the schema for the table can be automatically determined based on the query results. You can override this behavior with an alternative schema, provided the query results can automatically be cast to the target column types. CTAS also supports all syntax options for the new table. CTAS does not support default values and explicit nullable definitions on the columns of the table.
CTAS statements support secondary and prefix indexes.
To create a table, you must have both the CREATE TABLE privilege for the current database and the SELECT privilege on all referenced tables and views.
For syntax and parameter information, see CREATE TABLE.
Default Table Definitions
By default, a new table created with a CTAS statement retains column names, data types, and nullable definitions from the queried table. You can override this configuration with alternate table definitions in the CTAS statement.
Examples
These CTAS examples select columns from the original_table table that this CREATE TABLE statement defines. This table contains these columns:
- col_int: Non-nullable integer with the default value 123456789
- col_bigint: Non-nullable 8-byte signed integer
- col_id: Non-nullable integer
- col_point: Non-nullable point with the default value POINT(0 0)
- col_timestamp: Non-nullable TimeKey with granularity set at 1 day
- col_varchar: Variable-length character string with a maximum length of 255 characters and Zstandard compression
The table defines a Clustering Key named ck using the col_bigint and col_id columns. It also has two secondary indexes: a hash index idx_01 on the col_varchar column and a spatial index named idx_02.
This CREATE TABLE statement defines the original_table table.
The basic_ctas table includes all the columns and data types from the original table definition. However, it does not include the segment keys, indexes, or the compression on the col_varchar column. The EXPORT TABLE SQL statement shows the differences in the basic_ctas table.
REDUNDANCY, STORAGESPACE, and SEGMENTSIZE are all default table settings.
CTAS Using a Full Table Definition
This example CTAS statement includes a more detailed table definition. The definition includes new columns for the TimeKey, Clustering Key, and secondary indexes. The example also makes various changes from the original_table schema:
- Different column default value
- Different compression scheme (dynamic compression)
- New TimeKey granularity of 1 hour
- Three columns in the Clustering Key
- Different secondary index types (NGRAM and SPATIAL)
This example CTAS statement selects a subset of columns, col_int, col_bigint, and col_id, from the original table to insert into the new subset table. The example also specifies alternate table options for REDUNDANCY and SEGMENTSIZE.
Due to limitations of the JDBC API, the reported modified row count might not be accurate for tables larger than two billion rows.
- The col_int_multiply column is the multiplication of the col_int values by 10.
- The col_month_add column is the result of adding three months to each col_timestamp column value.
- The col_year column is the extraction of the year value from each col_timestamp column value.
- The col_substring column contains the first three characters from each col_varchar column value.
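The derived columns above can be sketched in a CTAS statement like this; the expression spellings (interval arithmetic in particular) are assumptions and may differ in Ocient SQL:

```sql
CREATE TABLE derived_ctas AS
SELECT
    col_int * 10                       AS col_int_multiply,  -- col_int values multiplied by 10
    col_timestamp + INTERVAL '3' MONTH AS col_month_add,     -- three months added to the timestamp
    EXTRACT(YEAR FROM col_timestamp)   AS col_year,          -- year value extracted
    SUBSTRING(col_varchar, 1, 3)       AS col_substring      -- first three characters
FROM original_table;
```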
CTAS USING LOADERS
Specify one or more Loader Nodes for executing the CTAS SQL statement. If you do not use this option, the Ocient System uses all Loader Nodes that are live to execute the SQL statement. This statement is useful for managing loading operations, particularly when balancing multiple loads of different sizes and resource requirements. Alternatively, this statement can also help simplify small batch loads by sourcing the data from a single Loader Node.
Syntax
| Parameter | Type | Description |
|---|---|---|
streamloader | string | A unique name for the Loader Node. Identify the names of Loader Nodes from the sys.nodes table by using this query: SELECT name FROM sys.nodes;If the name of the Loader Node contains special characters, you must enclose it in quotes, such as "stream-loader1". |
query | string | A SELECT query that defines values or a table and any of its columns to use for data in the specified table_name table. |
For the query to execute successfully, the specified names of the Loader Nodes must:
- Identify nodes that are live.
- Identify nodes that have the Loader role.
This example creates the table my_schema.my_ctas_table_2 with a clustering index named idx on the int_col column, populating it with the values in the int_col column of the table named my_schema.my_table. It uses the Loader Node named stream-loader1 to execute this SQL statement.
This example executes the same statement using the Loader Nodes stream-loader2 and stream-loader3.
DROP TABLE
DROP TABLE removes one or more existing tables in the current database, along with all associated views.
This action cannot be undone.
To drop a table, you must have DELETE TABLE privileges for the table.
Syntax
| Parameter | Type | Description |
|---|---|---|
table_name | string | A unique identifier for the table. You can drop multiple tables by specifying additional table names and separating each with commas. |
This example drops the table employees.
This example drops the tables employees and departments.
If a table does not exist, include the IF EXISTS clause to convert the error to a warning. If you execute the DROP TABLE statement and only some of the tables exist while other tables are missing, the database drops the existing tables and returns warnings for each missing table.
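A sketch of the guarded form described above:

```sql
DROP TABLE IF EXISTS employees, departments;
```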
ALTER TABLE
ALTER TABLE RENAME
ALTER TABLE RENAME renames an existing table.
To rename a table, you must have the ALTER TABLE privilege for the table.
Syntax
| Parameter | Type | Description |
|---|---|---|
table_name | string | A unique identifier for the table. |
old_table_name | string | The name of the table to alter. |
new_table_name | string | The new name to replace old_table_name. |
This example renames the table us.employees to mid_west_employees.
This example renames the table us.employees to north_america.employees.
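Sketches of both renames, assuming the common RENAME TO clause form:

```sql
ALTER TABLE us.employees RENAME TO mid_west_employees;
ALTER TABLE us.employees RENAME TO north_america.employees;
```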
ALTER TABLE RENAME COLUMN
ALTER TABLE RENAME COLUMN renames an existing column.
To rename a column, you must have the ALTER TABLE privilege for the table.
Syntax
| Parameter | Type | Description |
|---|---|---|
table_name | string | A unique identifier for the table. |
old_column_name | string | The name of the table column to alter. |
new_column_name | string | The new column name to replace old_column_name. |
This example renames the column name in the table employees in the current database and schema to first_name.
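A sketch of the statement described above:

```sql
ALTER TABLE employees RENAME COLUMN name TO first_name;
```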
ALTER TABLE ADD COLUMN
ALTER TABLE ADD COLUMN adds a new column to the table.
To add a column, you must have the ALTER TABLE privilege for the table.
New columns must either be nullable or specify a default value.
For a defined list of column parameters, see Column Definition. For constraints, see Column Constraints.
Syntax
| Parameter | Type | Description |
|---|---|---|
table_name | string | A unique identifier for the table. |
This example adds a column to the employees table in the current database and schema with the default value of 0.
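A sketch of the statement, where vacation_days is a hypothetical column name:

```sql
ALTER TABLE employees ADD COLUMN vacation_days INT NOT NULL DEFAULT 0;
```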
ALTER TABLE ALTER COLUMN COMPRESSION
ALTER TABLE ALTER COLUMN COMPRESSION alters an existing column in the table to change its compression scheme.
Supported compression schemes are COMPRESSION NONE, COMPRESSION DYNAMIC, and COMPRESSION ZSTD.
Altering the compression setting of a column only affects compression for data loaded after you execute the SQL statement.
| Parameter | Type | Description |
|---|---|---|
table_name | string | The name of the table containing the column to alter. |
column_name | string | The name of the column to alter. |
compression_scheme | string | Supported values for the compression schemes are: COMPRESSION NONE specifies that no compression is applied. COMPRESSION DYNAMIC applies compression only if the column data is dynamically determined to be compressible. COMPRESSION ZSTD applies to VARCHAR columns as well as other data types. For this option only, you can specify these additional parameters: compression_level: This value signifies how much compression the data receives. The default value is 0. The full range of values is from -7 through 15. The database uses less memory when this value is lower and more memory when this value is larger. Larger values provide better compression. dictionary_size: A positive integer that signifies the size of the shared compression dictionary in bytes. The default value is 32768 (32K). The full range of values is from 4096 (4K) through 1048576 (1MB). This value denotes the amount of memory consumed during segment generation. In general, larger values provide better compression but use more memory. |
This example alters the compression scheme of the column employee_name in the table employees in the current database and schema.
This second example alters the compression scheme of the same column employee_name in the table employees.
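Sketches of two variants, assuming the clause order shown in the parameter table:

```sql
-- Switch the column to no compression
ALTER TABLE employees ALTER COLUMN employee_name COMPRESSION NONE;

-- Switch the column to Zstandard compression
ALTER TABLE employees ALTER COLUMN employee_name COMPRESSION ZSTD;
```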
ALTER TABLE ALTER REDUNDANCY
ALTER TABLE ALTER REDUNDANCY alters the segment part redundancy for future segments of an existing table.
Note that altering a segment part redundancy setting only affects data loaded after applying the SQL statement.
| Parameter | Type | Description |
|---|---|---|
table_name | string | The name of the table that you want to alter. |
segment_part | string | Specifies the segment part redundancy of the table. Supported values include: { DATA | MANIFEST | INDEX | PDF | CDE | STATS }. These settings are defined as follows: DATA: The actual data for the table. MANIFEST: Header information stored about the data, which describes how to locate any given cluster of rows within the data. INDEX: The index of the data used for quicker lookups and better query performance. STATS: Used in the optimizer, the probability density function and combinable distinct estimators used to make better optimizations to query plans. |
redundancy_scheme | string | The redundancy scheme. Supported values include: { COPY | PARITY }. These settings are defined as follows: COPY: The system stores a copy of the bytes throughout the storage cluster to ensure redundancy. This option uses more storage but is faster during rebuilds and node outages. PARITY: The system uses the parity encoding specified on the storage cluster and uses parity bits to ensure redundancy for the data. This option uses less storage but is slower during rebuilds and node outages. |
This example alters the redundancy of the STATS segment part to COPY redundancy.
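A sketch of the statement, where the table name employees is illustrative:

```sql
ALTER TABLE employees ALTER REDUNDANCY STATS COPY;
```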
ALTER TABLE DROP COLUMN
ALTER TABLE DROP COLUMN drops an existing column from the table.
You cannot remove the TimeKey column or the Clustering Key columns from the table.
When you remove a column, the database does not remove or free any actual data.
| Parameter | Type | Description |
|---|---|---|
table_name | string | The name of the table to alter. |
column_name | string | The name of the column to drop. |
This example drops the column address from the table employees.
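A sketch of the statement described above:

```sql
ALTER TABLE employees DROP COLUMN address;
```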
ALTER TABLE STREAMLOADER_PROPERTIES
ALTER TABLE STREAMLOADER_PROPERTIES resets the table streamloader properties to the provided string. The properties string must be in valid JSON format.
The database registers streamloader changes dynamically. Therefore, you do not need to restart nodes or take other actions for the changes to take effect.
Any properties not specified in the string default to the system-wide setting.
| Parameter | Type | Description |
|---|---|---|
table_name | string | The name of the table to alter. |
streamloader_json | string | The streamloader properties to alter. See the Per-Table Streamloader Properties table for a list of all supported properties. |
Configuring Streamloader Properties
STREAMLOADER_PROPERTIES is a field on the table metadata that must be written as a JSON string in order to be read properly. The database can dynamically render any changes to STREAMLOADER_PROPERTIES with the ALTER TABLE SQL statement. You can set Loader Node properties for a new table as a parameter in the CREATE TABLE SQL statement. You do not need to restart the database node for the changes to take effect.
Per-Table Streamloader Properties
| Parameter | Data type | Description |
|---|---|---|
pageQueryExclusionDuration | Integer in nanoseconds (ns), or a string with the suffix ns, us, ms, or s appended. For example: "10s" = 10 seconds, "1000us" = 1,000 microseconds | Per-table configuration of the time interval for which recently added pages are excluded from queries. The database excludes pages that were added within the specified duration before the query. A value of 0 means the database does not exclude any pages. By default, this value is set to 0 if not specified. |
This example sets the streamloader properties of the table employees to {"pageQueryExclusionDuration" : "30s"}. This means that any pages added less than 30 seconds ago are not included in query results.
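A sketch of the statement described above:

```sql
ALTER TABLE employees STREAMLOADER_PROPERTIES '{"pageQueryExclusionDuration": "30s"}';
```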
ALTER TABLE DISABLE INDEX
The ALTER TABLE DISABLE INDEX statement instructs future queries not to use the specified indexes. Existing segments and new segments continue to have the index available in case you enable the index again.
All secondary indexes except for secondary clustering key indexes can be disabled. Trying to disable other types of indexes generates an error.
You can specify the index by name or UUID.
Syntax
| Parameter | Type | Description |
|---|---|---|
table_name | string | The name of the table to alter. |
index_name_or_uuid | string | The name or UUID of the index to disable. If you specify the index by name, the name must match an existing index. If you specify a UUID instead, it does not have to match an existing index. Therefore, you can disable a dropped index using its UUID, which ensures it is not used within old segments that were loaded with the index. You can get a list of index names and UUIDs in your database by referencing the sys.indexes table in the system catalog. |
This example disables the index current_idx on the table employees.
This example disables the index with the UUID 5c15d8de-36fa-4055-9bdc-3f1750aaeea0.
This example disables the indexes current_idx and other_idx on the table employees.
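Sketches of the three variants described above; the UUID quoting and the comma-separated list form are assumptions:

```sql
ALTER TABLE employees DISABLE INDEX current_idx;
ALTER TABLE employees DISABLE INDEX '5c15d8de-36fa-4055-9bdc-3f1750aaeea0';
ALTER TABLE employees DISABLE INDEX current_idx, other_idx;
```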
ALTER TABLE ENABLE INDEX
The ALTER TABLE ENABLE INDEX statement reverts the operation performed by the ALTER TABLE DISABLE INDEX statement.
Syntax
| Parameter | Type | Description |
|---|---|---|
table_name | string | The name of the table to alter. |
index_name_or_uuid | string | The name or UUID of the index to enable. If you specify the index by name, the name must match an existing index. If you specify a UUID instead, it does not have to match an existing index. You can get a list of index names and UUIDs in your database by referencing the sys.indexes table in the system catalog. |
This example enables the index current_idx on the table employees.
This example enables the index with the UUID 5c15d8de-36fa-4055-9bdc-3f1750aaeea0.
This example enables the indexes current_idx and other_idx on the table employees.
DELETE FROM TABLE
Removes rows from the specified table. You can use the WHERE clause to specify the rows to remove. If a DELETE SQL statement lacks the WHERE clause, then the database deletes all rows in the table.
To use this statement, you must have the DELETE privilege for the table.
For details and examples, see Remove Records from an Ocient System.
DELETE actions cannot be undone. If a DELETE operation fails during execution, the database rolls back the changes and returns to its original state. Due to limitations of the JDBC API, the reported modified row count might not be accurate for DELETE operations that are larger than two billion rows.
| Parameter | Type | Description |
|---|---|---|
table_name | string | The name of the table from which to delete rows. |
cte | string | A common table expression that defines temporary data for the DELETE statement. For details about using common table expressions, see WITH. |
<filter_clause> | None | A logical combination of predicates that filter the rows to delete based on one or more columns. For details, see the WHERE clause. The DELETE SQL statement removes all rows from a table if you do not include the WHERE clause. |
This DELETE SQL statement removes all rows in the movies table that have a budget of less than 10000.
This example uses the WITH keyword to find rows representing all transactions that occurred before 2022 and are less than $100. The DELETE SQL statement receives the results from the common table expression. Then, the database executes this statement to delete the corresponding rows.
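Sketches of both DELETE examples; the transaction table and column names in the CTE variant are hypothetical:

```sql
-- Delete all movies with a budget under 10000
DELETE FROM movies WHERE budget < 10000;

-- Delete via a common table expression (hypothetical transactions schema)
WITH small_old_txns AS (
    SELECT txn_id
    FROM transactions
    WHERE txn_date < DATE '2022-01-01' AND amount < 100
)
DELETE FROM transactions
WHERE txn_id IN (SELECT txn_id FROM small_old_txns);
```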
EXPORT TABLE
EXPORT TABLE shows the CREATE TABLE statement for an existing table in the current database.
To export a table, you must have the SELECT TABLE privilege for the table.
| Parameter | Type | Description |
|---|---|---|
table_name | string | The name of the table that you want to export. |
This example exports the table trades.
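A sketch of the statement described above:

```sql
EXPORT TABLE trades;
```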
INSERT INTO TABLE
INSERT INTO inserts rows into a table in the current database using the results of a SQL query. The column list of the query must match the columns in the table and the query results. The first column of the query result maps to the first column of the existing table definition, the second column maps to the second column of the existing table, and so on.
The column list defaults to all columns in the table if you do not specify any column names. The database does not require every column in the table to be populated from the query. If you do not choose a column to be populated, the database inserts default values for the column. The database inserts NULL values if the column does not have a default value and the column can contain NULL. If neither a default value nor NULL can be inserted, the operation fails.
Due to limitations of the JDBC API, the reported modified row count might not be accurate for insert operations that are larger than two billion rows.
| Parameter | Type | Description |
|---|---|---|
table_name | string | The name of the table for insertion. |
col1, col2 [, ... ] | string | The specific columns for insertion of new values. |
cte | string | A common table expression that defines temporary data for the INSERT statement. For details about using common table expressions, see WITH. |
query | string | A SELECT query that defines values or a table and any of its columns that should be inserted into the specified table_name. |
row_col1, row_col2 [, ...] | string | The specific values to insert into columns in the table. |
This example inserts all rows from system.table_b into system.table_a.
This example inserts the values of system.table_b.id_col_b into system.table_a.id_col_a and the values of system.table_b.int_col_b into system.table_a.int_col_a.
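Sketches of both insert statements, following the positional column-mapping rules described above:

```sql
-- Insert all rows from system.table_b into system.table_a;
-- query columns map to table columns by position.
INSERT INTO system.table_a
SELECT * FROM system.table_b;

-- Insert only two columns; each query column maps to the
-- corresponding column in the target column list.
INSERT INTO system.table_a (id_col_a, int_col_a)
SELECT id_col_b, int_col_b FROM system.table_b;
```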
A CREATE TABLE statement is included to demonstrate the table schema.
This example uses a common table expression to aggregate data from the sales table before inserting rows into the monthly_sales_summary table.
The example uses the monthly_sales_summary table created by this CREATE TABLE statement with these non-nullable columns:
- product_id — Product identifier
- month — Month part of the date
- total_quantity — Total quantity of the product
The WITH keyword extracts the month from the sale date (sale_date) and calculates the sum of the quantity sold (total_quantity) for each product from the sales table. Then, the INSERT SQL statement inserts this data into the monthly_sales_summary table.
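A sketch of the aggregation and insert; the sales column names (sale_date, quantity) and the simplified CREATE TABLE are assumptions for illustration:

```sql
-- Simplified target schema with the three non-nullable columns.
CREATE TABLE monthly_sales_summary (
    product_id INT NOT NULL,
    month INT NOT NULL,
    total_quantity INT NOT NULL
);

-- Aggregate the sales data in a CTE, then insert the results.
WITH monthly_totals AS (
    SELECT product_id,
           EXTRACT(MONTH FROM sale_date) AS month,
           SUM(quantity) AS total_quantity
    FROM sales
    GROUP BY product_id, EXTRACT(MONTH FROM sale_date)
)
INSERT INTO monthly_sales_summary (product_id, month, total_quantity)
SELECT product_id, month, total_quantity
FROM monthly_totals;
```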
INSERT INTO TABLE USING LOADERS
Specify one or more Loader Nodes for executing the INSERT INTO SQL statement. If you do not use this option, the Ocient System uses all live Loader Nodes to execute the SQL statement.
This statement is useful for managing loading operations, particularly when balancing multiple loads of different sizes and resource requirements. Alternatively, this statement can also help simplify small batch loads by sourcing the data from a single Loader Node.
Syntax
| Parameter | Type | Description |
|---|---|---|
streamloader | string | A unique name for the Loader Node. Identify the names of Loader Nodes from the sys.nodes table by using this query: SELECT name FROM sys.nodes; If the name of the Loader Node contains special characters, you must enclose it in quotes, such as "stream-loader1". |
For the query to execute successfully, the specified names must:
- Identify nodes that are live.
- Identify nodes that have the Loader role.
This example inserts the values of system.table_b.id_col_b into system.table_a.id_col_a and the values of system.table_b.int_col_b into system.table_a.int_col_a. It uses the Loader Node named stream-loader1 to execute the SQL statement.
This example executes the same insertion using the Loader Nodes named stream-loader2 and stream-loader3.
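Sketches of both statements; the exact placement of the USING LOADERS clause follows the syntax diagram above, and this sketch assumes it appears after the insert target:

```sql
-- Execute the insert on a single named Loader Node.
INSERT INTO system.table_a (id_col_a, int_col_a)
USING LOADERS "stream-loader1"
SELECT id_col_b, int_col_b FROM system.table_b;

-- Execute the same insert on two Loader Nodes.
INSERT INTO system.table_a (id_col_a, int_col_a)
USING LOADERS "stream-loader2", "stream-loader3"
SELECT id_col_b, int_col_b FROM system.table_b;
```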
TRUNCATE TABLE
TRUNCATE TABLE removes some or all records from an existing table in the current database. The system deletes the truncated data, but the table and its schema remain intact in the system even if all data is deleted. If the entire table is truncated, Global Dictionary Compression tables remain in place.
To truncate a table, you must have the DELETE privilege for the table.
To remove a subset of rows from a table, you can use the DELETE SQL statement.
For details and examples of using TRUNCATE, see Remove Records from an Ocient System.
This action cannot be undone.
| Parameter | Type | Description |
|---|---|---|
table_name | string | The name of the table to truncate. |
This example truncates the students table.
This example truncates the us.students table, qualified with the database name us.
This example truncates the students table using an alternative form of the statement.
This example truncates the us.students table using an alternative form of the statement.
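Sketches of the basic statements (the table names come from the examples above):

```sql
-- Remove all records from students; the table and its schema remain.
TRUNCATE TABLE students;

-- Truncate a table qualified with the database name.
TRUNCATE TABLE us.students;
```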
VIEW
CREATE VIEW
CREATE VIEW creates a new view in the current database or replaces an existing view. The name of the view must be distinct from the name of any existing view in the database.
To create a view, you must have both the CREATE VIEW privilege for the current database and the SELECT privilege on all directly referenced tables and views in the query.
To replace an existing view, you must also have the DROP privilege for the view and the CREATE VIEW privilege for the database.
Syntax
| Parameter | Type | Description |
|---|---|---|
view_name | string | A distinct identifier used to name the view. |
query | string | A SELECT query that defines the data from a table used to create the view. |
This example creates a view named option_trades.
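A sketch of the statement; the underlying trades table and its instrument_type column are assumptions for illustration:

```sql
-- Create a view that exposes only option trades.
CREATE VIEW option_trades AS
SELECT * FROM trades
WHERE instrument_type = 'option';  -- hypothetical filter column
```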
DROP VIEW
DROP VIEW removes one or more existing views in the current database.
To remove a view, you must have the DROP VIEW privilege for the view.
Syntax
| Parameter | Type | Description |
|---|---|---|
view_name | string | The identifier of the view to drop. You can drop multiple views by specifying additional view names and separating each with commas. |
This example drops the view star_employees.
This example drops the views star_employees and bad_employees in a single statement.
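Sketches of both statements:

```sql
-- Drop a single view.
DROP VIEW star_employees;

-- Drop multiple views by separating names with commas.
DROP VIEW star_employees, bad_employees;
```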
ALTER VIEW RENAME
ALTER VIEW RENAME renames an existing view.
To rename a view, you must have the ALTER VIEW privilege for the view.
Syntax
| Parameter | Type | Description |
|---|---|---|
old_view_name | string | The identifier of the view to rename. |
new_view_name | string | The new name for the specified view. |
This example renames the view star_employees to star_mid_west_employees.
This example renames the view us.star_employees to us.star_mid_west_employees, qualified with the database name us.
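Sketches of both statements, assuming a RENAME TO clause as in common SQL dialects:

```sql
-- Rename a view in the current database.
ALTER VIEW star_employees RENAME TO star_mid_west_employees;

-- Rename a view qualified with the database name.
ALTER VIEW us.star_employees RENAME TO us.star_mid_west_employees;
```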
ALTER VIEW AS
ALTER VIEW AS modifies an inner query of the existing view.
To modify the query for an existing view, you must be a system-level user or have the ALTER VIEW privilege for the view. You must also have the SELECT privilege on all referenced tables and views in the new query.
Syntax
| Parameter | Type | Description |
|---|---|---|
view_name | string | The identifier for the view to alter. |
query | string | A SELECT query that defines the new data from a table used to alter the view. |
This example modifies the inner query of the view star_employees.
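A sketch of the statement; the employees table and its region column are assumptions for illustration:

```sql
-- Replace the inner query of the view with a new SELECT.
ALTER VIEW star_employees AS
SELECT * FROM employees
WHERE region = 'midwest';  -- hypothetical filter column
```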
EXPORT VIEW
EXPORT VIEW shows the CREATE VIEW statement for an existing view in the current database.
To export a view, the logged-in user must be a system-level user or have the READ VIEW right for the view.
Syntax
| Parameter | Type | Description |
|---|---|---|
view_name | string | The identifier for the view to export. |
This example shows the CREATE VIEW statement for the view students.
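A sketch of the statement (only the view name comes from the example):

```sql
-- Show the CREATE VIEW statement for students.
EXPORT VIEW students;
```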
INDEX
CREATE INDEX
CREATE INDEX creates a new secondary index.
Indexes help optimize database queries when created on columns that are frequently referenced. For more information on how Ocient indexes operate, see Secondary Indexes.
Creating an index does not trigger re-indexing of existing segments. Only segments generated after the CREATE INDEX statement is issued contain the new index.
You can create indexes on columns of various data types as long as the requirements are met. If you do not specify an index type, the system assigns a default type based on the data type of the column. The index name must be distinct from the name of any existing index on the table.
You can apply indexes to columns regardless of whether they use Global Dictionary Compression (GDC).
| Parameter | Type | Description |
|---|---|---|
index_name | string | An identifier for the index to create. The name must be distinct from the name of any existing index on the table. |
table | string | The name of the table for the index. |
column_name | string | The name of the column for the index. Identical indexes on the same column are not allowed; a column can have multiple indexes only if they differ in type or parameters. |
n_value | integer | Optional. When used with an NGRAM index, this numeric value specifies the character length of the substrings to be indexed. If unspecified, this value defaults to 3. |
Index Types (<index_type>)
Ocient supports four index types alongside the clustering index: INVERTED, HASH, NGRAM, and SPATIAL.
An index notionally stores a mapping of a column value to the rows that contain that value, and the index type differentiates the format in which the column values are stored and accessed.
Unless an index type is explicitly specified with a USING clause, the data type of a column determines a default index type that the system creates. For information on index type defaults, see Index Type Requirements and Defaults.
For container data types (e.g., arrays and tuples), the index stores the internal elements of the container, and is used on predicates that target the internal values. However, a mapping of NULL column values is generally stored for both scalar and container data types, so the index can always be used for column IS NULL predicates.
| Index Type | Primary Data Types | Primary Usage Description |
|---|---|---|
INVERTED | Fixed-length numeric columns. | Stores whole column value internally, meaning its storage size is approximately the same as the width of the data type. Supports lookups using strict equality or range comparisons. For details, see the INVERTED index section. |
HASH | Variable-length character columns | Stores a hash of the indexed column value rather than the full value. Primarily used for exact comparisons. For details, see the HASH index section. |
NGRAM | Variable-length character columns | Stores substrings equal in size to its n_value. Storage requirements can vary greatly depending on column data size, width, and cardinality. Supports exact string comparison and filters including LIKE, NOT LIKE, SIMILAR TO, and NOT SIMILAR TO. For details, see the NGRAM index section. |
SPATIAL | Geospatial columns (POINT, LINESTRING, POLYGON) | Groups geographic objects for bounding-box filtering. For details, see the SPATIAL index section. |
This example creates an index named new_idx on the address column of the table. Because address is a VARCHAR column, this index defaults to the HASH index type.
This example creates an NGRAM index on the address column. Because the NGRAM index has no specified n_value, it defaults to indexing substrings three characters long.
This example creates an index on the tuple_col column. Because this column is of data type INT, the index defaults to the INVERTED type.
This example creates an index on the point_col column. Because this column is of data type POINT, the index defaults to the SPATIAL type.
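Sketches of the four statements; the table name my_table and the index names other than new_idx are assumptions for illustration, and the USING clause placement follows the index-type discussion above:

```sql
-- address is VARCHAR, so the index defaults to the HASH type.
CREATE INDEX new_idx ON my_table (address);

-- Explicit NGRAM index; n_value defaults to 3 when unspecified.
CREATE INDEX ngram_idx ON my_table (address) USING NGRAM;

-- tuple_col is INT, so the index defaults to the INVERTED type.
CREATE INDEX int_idx ON my_table (tuple_col);

-- point_col is POINT, so the index defaults to the SPATIAL type.
CREATE INDEX spatial_idx ON my_table (point_col);
```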
DROP INDEX
DROP INDEX drops a secondary index on a table.
After an index is dropped, newly generated segments do not contain the dropped index, but existing segments are not altered. Until a segment is rebuilt, the system can still use the removed index internally, and the storage space that the removed index occupied is not reclaimed.
Syntax
| Parameter | Type | Description |
|---|---|---|
index_name | string | An identifier for the index to drop. |
table_name | string | The name of the table with the index to drop. |
This example drops the index new_idx on the employees table.
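A sketch of the statement, assuming the index and table names are joined with an ON clause as listed in the parameter table:

```sql
-- Drop the secondary index new_idx from employees.
DROP INDEX new_idx ON employees;
```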

