Data Dictionary – user_constraints/user_cons_columns is slow in Oracle 12cR1

Recently, after an upgrade to Oracle 12c R1, one of our clients complained about sluggish performance in a module that retrieves table information.
The impacted SQL queries the data dictionary views to fetch table metadata.

SELECT
UTC.TABLE_NAME,
UTC.COLUMN_NAME,
UTC.COLUMN_ID AS COLUMN_ORDER,
UTC.DATA_TYPE,
UTC.DATA_TYPE || ' ' ||
DECODE(DECODE(UTC.DATA_TYPE, 'VARCHAR2', 'C', 'NVARCHAR2', 'C', 'CHAR', 'C', ''), 'C', '(' || UTC.CHAR_LENGTH || ') ', ' ') ||
DECODE(UTC.NULLABLE, 'Y', 'NULL ', 'N', 'NOT NULL ') AS TYPE_EXTENDED,
DECODE(UCC.POSITION, NULL, 'FALSE', 'TRUE') AS IS_PK,
UCC.POSITION AS PK_ORDER,
DECODE(UTC.NULLABLE, 'Y', 'TRUE', 'N', 'FALSE', 'TRUE') AS IS_NULLABLE,
DECODE(US.SEQUENCE_NAME, NULL, 'FALSE', 'TRUE') AS IS_IDENTITY,
DECODE(DECODE(UTC.DATA_TYPE, 'VARCHAR2', 'C', 'NVARCHAR2', 'C', 'CHAR', 'C', ''), 'C', UTC.CHAR_LENGTH, UTC.DATA_PRECISION) AS DATA_PRECISION,
UTC.DATA_SCALE,
UTC.DATA_DEFAULT AS DEFAULT_VALUE
FROM USER_TAB_COLUMNS UTC
LEFT OUTER JOIN USER_CONSTRAINTS UC
ON UC.CONSTRAINT_TYPE = 'P'
AND UC.TABLE_NAME = UTC.TABLE_NAME
LEFT OUTER JOIN USER_CONS_COLUMNS UCC
ON UCC.CONSTRAINT_NAME = UC.CONSTRAINT_NAME
AND UCC.TABLE_NAME = UTC.TABLE_NAME
AND UCC.COLUMN_NAME = UTC.COLUMN_NAME
LEFT OUTER JOIN USER_SEQUENCES US
ON US.SEQUENCE_NAME = UTC.TABLE_NAME || '_' || UTC.COLUMN_NAME
WHERE
UTC.TABLE_NAME = $$TableName
AND UTC.TABLE_NAME NOT IN ('PLAN_TABLE')
ORDER BY UTC.TABLE_NAME, UTC.COLUMN_ID ;

Based on initial investigation, we identified that known bugs in the Oracle 12c R1 release impact data dictionary queries against “USER_CONSTRAINTS” and “USER_CONS_COLUMNS”:
Data Dictionary Select Taking A Very Long Time in 12c (Doc ID 2251730.1)

Instead of altering the session, as the note suggests, we recommended that the client add the hint below.

SELECT /*+ OPT_PARAM('_OPTIMIZER_SQU_BOTTOMUP','FALSE')*/
UTC.TABLE_NAME,
UTC.COLUMN_NAME,
UTC.COLUMN_ID AS COLUMN_ORDER,
UTC.DATA_TYPE,
UTC.DATA_TYPE || ' ' ||
DECODE(DECODE(UTC.DATA_TYPE, 'VARCHAR2', 'C', 'NVARCHAR2', 'C', 'CHAR', 'C', ''), 'C', '(' || UTC.CHAR_LENGTH || ') ', ' ') ||
DECODE(UTC.NULLABLE, 'Y', 'NULL ', 'N', 'NOT NULL ') AS TYPE_EXTENDED,
DECODE(UCC.POSITION, NULL, 'FALSE', 'TRUE') AS IS_PK,
UCC.POSITION AS PK_ORDER,
DECODE(UTC.NULLABLE, 'Y', 'TRUE', 'N', 'FALSE', 'TRUE') AS IS_NULLABLE,
DECODE(US.SEQUENCE_NAME, NULL, 'FALSE', 'TRUE') AS IS_IDENTITY,
DECODE(DECODE(UTC.DATA_TYPE, 'VARCHAR2', 'C', 'NVARCHAR2', 'C', 'CHAR', 'C', ''), 'C', UTC.CHAR_LENGTH, UTC.DATA_PRECISION) AS DATA_PRECISION,
UTC.DATA_SCALE,
UTC.DATA_DEFAULT AS DEFAULT_VALUE
FROM USER_TAB_COLUMNS UTC
LEFT OUTER JOIN USER_CONSTRAINTS UC
ON UC.CONSTRAINT_TYPE = 'P'
AND UC.TABLE_NAME = UTC.TABLE_NAME
LEFT OUTER JOIN USER_CONS_COLUMNS UCC
ON UCC.CONSTRAINT_NAME = UC.CONSTRAINT_NAME
AND UCC.TABLE_NAME = UTC.TABLE_NAME
AND UCC.COLUMN_NAME = UTC.COLUMN_NAME
LEFT OUTER JOIN USER_SEQUENCES US
ON US.SEQUENCE_NAME = UTC.TABLE_NAME || '_' || UTC.COLUMN_NAME
WHERE
UTC.TABLE_NAME = $$TableName
AND UTC.TABLE_NAME NOT IN ('PLAN_TABLE')
ORDER BY UTC.TABLE_NAME, UTC.COLUMN_ID ;

After the change, the data dictionary SQL improved from 7 seconds to 1 second.
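
Where editing every statement is not practical, the session-level workaround referenced in the MOS note sets the same underlying parameter. A minimal sketch, assuming you are permitted to set this underscore parameter at session level (check with Oracle Support first):

ALTER SESSION SET "_optimizer_squ_bottomup" = FALSE;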


Exploration of SQLite

Recently I got a chance to work with SQLite for a requirement to purge data based on date conditions. Consolidating the learnings here is one of the perks of working on something new.

1. Date Function to Add Days in SQLite.

The DATE function accepts modifiers to add or subtract a specified number of days.

SELECT DATE('now'), DATE('now', '+1 days'), DATE('now', '-1 days');
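
The same modifier syntax also handles months, years, and period boundaries; for example:

SELECT DATE('now', '+1 months'),
       DATE('now', '-1 years'),
       DATE('now', 'start of month'),
       DATE('now', 'start of year');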

2. Rebuild Index in SQLite

SQLite provides the “REINDEX” command to rebuild a particular index or all indexes on a table.
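
A quick sketch, assuming an index named IDX_TAB1_COL1 on a table TAB1 (both names are placeholders):

REINDEX IDX_TAB1_COL1;   -- rebuild a single index
REINDEX TAB1;            -- rebuild every index on the table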

3. LISTAGG in SQLite

As an aggregate function, SQLite provides “GROUP_CONCAT” to concatenate grouped values (its equivalent of Oracle's LISTAGG).

WITH ALIAS1(COL1) AS
(SELECT 'HELLO' UNION
SELECT 'WORLD')
SELECT group_concat(COL1, ' ') FROM ALIAS1;

4. Database Metadata Information.

SELECT *
FROM SQLITE_MASTER
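
sqlite_master can also be filtered by object type; for instance, to list the indexes on a hypothetical table TAB1 along with their DDL:

SELECT name, sql
FROM SQLITE_MASTER
WHERE type = 'index'
AND tbl_name = 'TAB1';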

Oracle 19c – Hint Reports with DBMS_XPLAN

With Oracle 19c we get a long-awaited feature: the DBMS_XPLAN package can now report whether a hint is valid, whether it was applied, and whether it was actually used.

Previously we had to take a 10053 trace to understand whether a hint was being applied, and most developers don't have access to enable trace files or retrieve them from the trace dump directories.
The new Hint Report within DBMS_XPLAN gives developers confidence about any hint they apply.

All the demos below were run on https://livesql.oracle.com.

select * from v$version;
BANNER
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Let's create a table for the demo:

DROP TABLE TAB_TEST_HINT;
CREATE TABLE TAB_TEST_HINT
AS
(
SELECT ROWNUM AS COL1 , LPAD('A',ROWNUM, 'B') AS COL2, RPAD('C',ROWNUM,'D') AS COL3 FROM DUAL CONNECT BY ROWNUM < 101
);

Let's start with a hint that cannot be used: an INDEX hint on TAB_TEST_HINT, which has no index created on it.

EXPLAIN PLAN FOR
SELECT /*+ INDEX(ALIAS1) */ *
FROM TAB_TEST_HINT ALIAS1;

SELECT * FROM table(DBMS_XPLAN.DISPLAY(FORMAT=>'BASIC +HINT_REPORT'));

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
1 - SEL$1 / ALIAS1@SEL$1
U - INDEX(ALIAS1)

The report shows the hint that was detected in the SQL statement:

“Total hints for statement: 1”

It then marks the hint as Unused, because there is no index on the table, so the hint cannot be honored for this SQL:
(U – Unused (1))

Now let's use a hint that is valid and will be applied: the FULL hint.

EXPLAIN PLAN FOR
SELECT /*+ FULL(ALIAS1) */ *
FROM TAB_TEST_HINT ALIAS1;

SELECT * FROM table(DBMS_XPLAN.DISPLAY(FORMAT=>'BASIC +HINT_REPORT'));

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
1 - SEL$1 / ALIAS1@SEL$1
- FULL(ALIAS1)

As the FULL hint is valid, it is applied and listed in the report without any status flag, meaning it was used; this is a great way to confirm that a hint was accepted.
Remember that a hint is a directive, not a request the optimizer is free to ignore: if it is valid, it will be applied.

Now let's check the scenario where the hint itself is incorrect: we intentionally pass “FULL2”, which is not a legitimate hint.

EXPLAIN PLAN FOR
SELECT /*+ FULL2(ALIAS1) */ *
FROM TAB_TEST_HINT ALIAS1;

SELECT * FROM table(DBMS_XPLAN.DISPLAY(FORMAT=>'BASIC +HINT_REPORT'));

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
1 - SEL$1
E - FULL2

It is reported as a syntax error, because there is no hint named FULL2.

Now let's combine a hint that cannot be used with one that is invalid:
/*+ INDEX(ALIAS1) FULL2(ALIAS1) */

The INDEX hint can't be used because the table has no indexes, and FULL2 is not a legitimate hint.
EXPLAIN PLAN FOR
SELECT /*+ INDEX(ALIAS1) FULL2(ALIAS1) */ *
FROM TAB_TEST_HINT ALIAS1;

SELECT * FROM table(DBMS_XPLAN.DISPLAY(FORMAT=>'BASIC +HINT_REPORT'));

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (U - Unused (1), E - Syntax error (1))
---------------------------------------------------------------------------
1 - SEL$1
E - FULL2
1 - SEL$1 / ALIAS1@SEL$1
U - INDEX(ALIAS1)

The report identifies both hints and classifies each one:
U – Unused (1) for INDEX(ALIAS1) and E – Syntax error (1) for FULL2(ALIAS1).

Please note: if the first hint in the comment is syntactically incorrect, the remaining hints in that comment are not processed.

EXPLAIN PLAN FOR
SELECT /*+ FULL2(ALIAS1) INDEX(ALIAS1) */ *
FROM TAB_TEST_HINT ALIAS1;

SELECT * FROM table(DBMS_XPLAN.DISPLAY(FORMAT=>'BASIC +HINT_REPORT'));
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
1 - SEL$1
E - FULL2

The Oracle 19c Hint Report is, by itself, a great asset for developers to verify and test the hints applied to a SQL statement.
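
The same report can also be pulled for a statement that has already executed, via DBMS_XPLAN.DISPLAY_CURSOR. A minimal sketch, assuming access to the V$ views and that it is run in the same session right after the hinted statement:

SELECT /*+ FULL(ALIAS1) */ * FROM TAB_TEST_HINT ALIAS1;

SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'BASIC +HINT_REPORT'));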


Generate Scripts for Foreign Keys in SQL Server.

While conducting a proof of concept around “In-Memory”, we had to disable foreign keys in order to use the migration wizard provided by SSMS.
Before disabling them, we needed to script out all foreign keys for multiple tables in the database and store the scripts as a backup, so they could be recreated later.

As usual, I started searching the web for SQL to generate the foreign key scripts, but could only find procedural T-SQL solutions.
Being partial to pure SQL, I compiled create/alter/drop scripts for foreign keys in pure SQL form.

Script to generate the ALTER TABLE ... ADD CONSTRAINT statement for every foreign key in SQL Server:

WITH Generator AS
(
SELECT
parent_object_id,
name,
referenced_object_id,
object_id,
is_disabled,
is_not_for_replication,
is_not_trusted,
delete_referential_action,
update_referential_action,
-- comma-separated, quoted list of the foreign key columns on the parent (child) table
STUFF((SELECT ', ' + QUOTENAME(COL_NAME(fk.parent_object_id, fkc.parent_column_id))
from sys.foreign_key_columns fkc
where fkc.constraint_object_id = fk.object_id
ORDER BY fkc.constraint_column_id
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,2,'') parentCols ,
-- comma-separated, quoted list of the referenced (primary/unique key) columns
STUFF((SELECT ', ' + QUOTENAME(COL_NAME(fk.referenced_object_id, fkc.referenced_column_id))
from sys.foreign_key_columns fkc
where fkc.constraint_object_id = fk.object_id
ORDER BY fkc.constraint_column_id
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,2,'') referCols
FROM sys.foreign_keys as fk
)
SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.' + QUOTENAME(OBJECT_NAME(parent_object_id))
+ CASE WHEN is_not_trusted = 0 THEN ' WITH CHECK ' ELSE ' WITH NOCHECK ' END + ' ADD CONSTRAINT ' +
QUOTENAME(name) + ' FOREIGN KEY ( ' + parentCols + ' ) REFERENCES ' +
QUOTENAME(OBJECT_SCHEMA_NAME(referenced_object_id)) + '.' + QUOTENAME( OBJECT_NAME(referenced_object_id))
+ ' (' + referCols + ') '
+ ' ON UPDATE ' + CASE update_referential_action WHEN 0 THEN 'NO ACTION ' WHEN 1 THEN 'CASCADE ' WHEN 2 THEN 'SET NULL '
ELSE 'SET DEFAULT ' END +
' ON DELETE ' + CASE delete_referential_action WHEN 0 THEN 'NO ACTION ' WHEN 1 THEN 'CASCADE '
WHEN 2 THEN 'SET NULL ' ELSE 'SET DEFAULT ' END
+ CASE is_not_for_replication WHEN 1 THEN ' NOT FOR REPLICATION ' ELSE '' END + ';'
AS create_fk_script
FROM Generator
ORDER BY QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) , QUOTENAME(OBJECT_NAME(parent_object_id)) ;
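
For illustration, for a hypothetical single-column constraint FK_ORDERS_CUSTOMERS on dbo.ORDERS(CUSTOMER_ID) referencing dbo.CUSTOMERS(CUSTOMER_ID), the generated create_fk_script value looks roughly like:

ALTER TABLE [dbo].[ORDERS] WITH CHECK ADD CONSTRAINT [FK_ORDERS_CUSTOMERS] FOREIGN KEY ( [CUSTOMER_ID] ) REFERENCES [dbo].[CUSTOMERS] ([CUSTOMER_ID]) ON UPDATE NO ACTION ON DELETE NO ACTION ;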

Here are the accompanying scripts to drop, disable, and enable foreign keys as well.

-- sys.foreign_keys returns one row per constraint, so composite keys are not listed twice
SELECT 'ALTER TABLE [' + SCHEMA_NAME(T.schema_id) + '].[' + T.name + '] DROP CONSTRAINT [' + FK.name + '];' AS DROP_CONSTRAINT ,
'ALTER TABLE [' + SCHEMA_NAME(T.schema_id) + '].[' + T.name + '] NOCHECK CONSTRAINT [' + FK.name + '];' AS DISABLE_CONSTRAINT ,
'ALTER TABLE [' + SCHEMA_NAME(T.schema_id) + '].[' + T.name + '] CHECK CONSTRAINT [' + FK.name + '];' AS ENABLE_CONSTRAINT
FROM sys.foreign_keys AS FK
INNER JOIN sys.tables AS T ON FK.parent_object_id = T.object_id;

Pure SQL always works like a charm.
Hope these scripts are useful when working with foreign keys in SQL Server.


Adding Default Column to a table in Oracle and SQL Server.

Adding a column with a default value to an existing table raises several questions.
For instance:
What will the value of that column be for existing rows?
What happens if the new column is nullable?
What happens if the new column is NOT NULL?

Let's walk through how adding a new column with a default value behaves in both Oracle and SQL Server.

Let's start with SQL Server.

SELECT TAB.COL1, TAB.COL2 INTO DEFAULTCOLTAB
FROM
(
VALUES (1,2),(3,4) ,(5,6)
) TAB(COL1,COL2);

SELECT * FROM DEFAULTCOLTAB;

COL1 COL2
----------- -----------
1 2
3 4
5 6

Adding a column with a default and NOT NULL:

ALTER TABLE DEFAULTCOLTAB ADD COL3 INT NOT NULL DEFAULT 0;

COL1 COL2 COL3
----------- ----------- -----------
1 2 0
3 4 0
5 6 0

When the column is declared NOT NULL with a default, SQL Server populates the default value for existing rows as well.
Now let's add a column with a default that is nullable.

ALTER TABLE DEFAULTCOLTAB ADD COL4 INT DEFAULT 0;

COL1 COL2 COL3 COL4
----------- ----------- ----------- -----------
1 2 0 NULL
3 4 0 NULL
5 6 0 NULL

When the column is nullable, SQL Server does not update existing rows for the newly added column; it is left as NULL rather than set to the default value.

If we do want existing rows to receive the default value for a nullable column, we can use the “WITH VALUES” clause.

ALTER TABLE DEFAULTCOLTAB ADD COL5 INT DEFAULT 0 WITH VALUES;

COL1 COL2 COL3 COL4 COL5
----------- ----------- ----------- ----------- -----------
1 2 0 NULL 0
3 4 0 NULL 0
5 6 0 NULL 0
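
To confirm which default constraints these ALTER statements created, the defaults can be inspected in the catalog views; a small sketch against the demo table:

SELECT c.name AS column_name, dc.name AS constraint_name, dc.definition
FROM sys.default_constraints AS dc
JOIN sys.columns AS c ON c.object_id = dc.parent_object_id AND c.column_id = dc.parent_column_id
WHERE dc.parent_object_id = OBJECT_ID('DEFAULTCOLTAB');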

Now let's see how this works in Oracle.

CREATE TABLE DEFAULTCOLTAB
AS
SELECT ROWNUM COL1 , ROWNUM +1 COL2 FROM DUAL CONNECT BY ROWNUM <4;

COL1 COL2
---------- ----------
1 2
2 3
3 4

Adding a column with a default and a NOT NULL constraint:
ALTER TABLE DEFAULTCOLTAB ADD COL3 INT DEFAULT(0) NOT NULL ;

COL1 COL2 COL3
---------- ---------- ----------
1 2 0
2 3 0
3 4 0

It behaves the same as SQL Server, populating the default value for all existing rows.

Adding a column with a default that is nullable:

ALTER TABLE DEFAULTCOLTAB ADD COL4 INT DEFAULT(0) NULL ;

COL1 COL2 COL3 COL4
---------- ---------- ---------- ----------
1 2 0 0
2 3 0 0
3 4 0 0

Here Oracle differs from SQL Server: even though the column is nullable, it still shows the default value for all existing rows.
Regardless of the NULL constraint on the column, Oracle applies the default value to existing rows.
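
The stored default and nullability of the new columns can be cross-checked in the Oracle data dictionary; a minimal sketch against the demo table:

SELECT COLUMN_NAME, DATA_DEFAULT, NULLABLE
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = 'DEFAULTCOLTAB'
ORDER BY COLUMN_ID;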


Table Alias in Oracle and SQL Server.

Sometimes we assume things will work and don't test them, but then they surprise us and give us a reason to learn.

We have a product that runs on top of both Oracle and SQL Server, so our SQL must be ANSI-compliant or validated on both.
For one of our changes, we had written SQL like the following, intended to run on both SQL Server and Oracle.

SELECT COL1 AS COLUMN1 FROM TABLE1 AS TABLE_ALIAS

Since it was a simple change, we (primarily I) tested it only on SQL Server, confident that it would work in Oracle.
But it sprang a surprise with the exception below in Oracle, while working fine in SQL Server.

ORA-00933: SQL command not properly ended
00933. 00000 - "SQL command not properly ended"

Let's reproduce it with an example in Oracle:

select 'Hi Error' AS Hello from DUAL AS d;

We are using the AS keyword to define an alias for both the column and the table.
In Oracle, the “AS” keyword works only for column aliases, not table aliases; the SELECT syntax diagrams in the Oracle documentation simply omit AS before the table alias.

Finally, we changed the SQL to the form below.
select 'Hi Error' AS Hello from DUAL d;

HELLO
--------
Hi Error

Lesson learned: test, don't assume!


CHAR Datatype and Concatenation in Oracle and SQL Server.

CHAR is a fixed-length datatype for storing character data.
It allocates a fixed number of bytes according to the declared length and pads the remainder with spaces.

For a variable declared as CHAR(20), the allocated length will always be 20.

DECLARE @Var CHAR(20)='Hello'
SELECT DATALENGTH(@Var) as DataLen
GO

Output :
DataLen
20

Now let's see how it behaves with concatenation in Oracle and SQL Server.

SQL Server:
We declare a CHAR(20) variable and try to concatenate to it.

DECLARE @Var CHAR(20)='Hello-'
SET @Var= @Var+'SQL Server'
SELECT @Var
GO

OUTPUT: "Hello- "

In SQL Server, even after the concatenation, only the previously assigned data is displayed, because of the padded spaces.
When we declare CHAR(20) and assign it “Hello-”, it is actually stored as

“Hello-00000000000000”

where 0 represents a space and the total DATALENGTH is 20.

Any concatenation therefore falls beyond the declared length of @Var (20), so nothing can be appended.
But SQL Server does not throw any exception or error; it silently discards the concatenated text.

Oracle:

SET serveroutput ON;
DECLARE
var_1 CHAR(20000) := 'HELLO-';
BEGIN
var_1 := var_1 || 'ORACLE';
dbms_output.put_line(var_1);
END;

Output:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 4
06502. 00000 - "PL/SQL: numeric or value error%s"

Oracle throws an exception for such a concatenation with the CHAR datatype.
Unless we trim the padded spaces first (for example with RTRIM), any concatenation onto a fully padded CHAR variable raises this error.
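
A minimal sketch of the workaround, trimming the padding before concatenating (the result is re-padded to the declared CHAR length on assignment):

SET serveroutput ON;
DECLARE
var_1 CHAR(20000) := 'HELLO-';
BEGIN
var_1 := RTRIM(var_1) || 'ORACLE';    -- trim trailing spaces, then concatenate
dbms_output.put_line(RTRIM(var_1));   -- prints HELLO-ORACLE
END;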


SQL SERVER# : Get last possible time for a DAY.

For one SQL requirement, I was trying to form year ranges for executing a DML in small, per-year batches.
But because the range was missing the last possible time of the end date, some records were never picked up.

For instance, here is the SQL we initially used to form the year range:

SELECT DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0) AS START_OF_YEAR,
DATEADD(YY, DATEDIFF(YY, 0, GETDATE()) + 1, -1) AS END_OF_YEAR

Output ::
START_OF_YEAR           END_OF_YEAR
----------------------- -----------------------
2017-01-01 00:00:00.000 2017-12-31 00:00:00.000

For END_OF_YEAR we only get the last day of the year at midnight, not the last possible time on that day.

Using DATEADD with milliseconds (ms), we can get the last possible time of the day without hard-coding it.
Let's walk through an example to understand it better.

SELECT DATEADD(MS, -2,GETDATE()) AS MS_2, GETDATE() AS SYS_DATE

MS_2                    SYS_DATE
----------------------- -----------------------
2017-06-02 16:24:15.577 2017-06-02 16:24:15.580

Subtracting 2 milliseconds shows a 3 ms difference instead, because the datetime type is accurate to roughly 3.33 ms and values are rounded to increments of .000, .003, or .007 seconds.
To get the last day of the year with the last possible time, we follow this pseudo-logic:

GET_FIRST_DAY_OF_NEXT_YEAR – (2 MILLISECONDS TO GET LAST TIME POSSIBLE FOR PREVIOUS YEAR)

Let's write the SQL that implements it.

SELECT DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0) AS START_OF_YEAR,
DATEADD(MS,-2,DATEADD(YY, DATEDIFF(YY, 0, GETDATE()) + 1, 0)) AS END_OF_YEAR

START_OF_YEAR           END_OF_YEAR
----------------------- -----------------------
2017-01-01 00:00:00.000 2017-12-31 23:59:59.997
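
If the column being filtered is datetime2, the same idea works at finer precision; a sketch, subtracting 100 nanoseconds (the smallest datetime2(7) increment) from the first day of the next year:

SELECT DATEADD(NANOSECOND, -100,
       CAST(DATEADD(YY, DATEDIFF(YY, 0, GETDATE()) + 1, 0) AS DATETIME2(7))) AS END_OF_YEAR
-- returns 2017-12-31 23:59:59.9999999 for the 2017 example above

Where possible, an exclusive upper bound (column < first day of the next year) avoids depending on the datatype's precision altogether.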

Hope it was informative!


FIXED length character storage and NULL in SQL Server.

Most of us are aware of the CHAR datatype and its fixed-length storage pattern.
If we declare CHAR(32) and store only the 4-character value ‘ABCD’, it pads out the remaining 28 bytes with spaces.
Ultimately, all 32 bytes are consumed to store just 4 characters of data.

Let's understand it better with some field work.

CREATE TABLE TESTCHAR
(
COL1 CHAR(32) NULL
);

INSERT INTO TESTCHAR VALUES ('ABCD');
SELECT COL1 , DATALENGTH(COL1) AS SIZE FROM TESTCHAR

COL1 SIZE
------ -----------
ABCD 32

But what happens with NULL data: will it still consume the allocated 32 bytes, or follow some other storage pattern?

Let's create two tables, each with a single column of CHAR and VARCHAR datatype respectively.

DROP TABLE TESTCHAR
CREATE TABLE TESTCHAR
(
COL1 CHAR(32) NULL
);

CREATE TABLE TESTVARCHAR
(
COL1 VARCHAR(32) NULL
);

Now let's insert 100,000 NULL records into both tables.

;WITH ALIAS1(COL1,COL2)
AS
(SELECT NULL COL1 , 1 COL2
UNION ALL
SELECT NULL , COL2+1 FROM ALIAS1 WHERE COL2 <=100000)
INSERT INTO TESTCHAR
SELECT COL1 FROM ALIAS1 Option (maxrecursion 0)

;WITH ALIAS1(COL1,COL2)
AS
(SELECT NULL COL1 , 1 COL2
UNION ALL
SELECT NULL , COL2+1 FROM ALIAS1 WHERE COL2 <=100000)
INSERT INTO TESTVARCHAR
SELECT COL1 FROM ALIAS1 Option (maxrecursion 0)

Let's check the size of a NULL value in both tables.

SELECT TOP 1 COL1 , DATALENGTH(COL1) AS SIZE FROM TESTCHAR;
COL1 SIZE
------- -----------
NULL NULL

SELECT TOP 1 COL1 , DATALENGTH(COL1) AS SIZE FROM TESTVARCHAR;
COL1 SIZE
-------- -----------
NULL NULL

The DATALENGTH function does not show any bytes for NULL storage in either CHAR or VARCHAR.
Let's check the table size for both datatypes using sp_spaceused.

exec sp_spaceused 'TESTCHAR'
exec sp_spaceused 'TESTVARCHAR'

name          rows     reserved   data      index_size   unused
------------  -------  ---------  --------  -----------  -------
TESTCHAR      100001   4104 KB    4064 KB   8 KB         32 KB

name          rows     reserved   data      index_size   unused
------------  -------  ---------  --------  -----------  -------
TESTVARCHAR   100001   1096 KB    1088 KB   8 KB         0 KB

Clearly, the table holding only NULLs in a CHAR column is almost 4 times larger than the table holding only NULLs in a VARCHAR column.
The DATALENGTH function could not surface this at row level, but the overall table size for CHAR is definitely larger than for VARCHAR.

So a CHAR column whose data is mostly NULL imposes more storage than a VARCHAR column would.
A NULL in CHAR consumes more disk space than a NULL in VARCHAR.
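
The per-row difference also shows up in the physical-stats DMV; a small sketch (needs VIEW DATABASE STATE permission):

SELECT OBJECT_NAME(ps.object_id) AS table_name, ps.avg_record_size_in_bytes, ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('TESTCHAR'), NULL, NULL, 'DETAILED') AS ps
UNION ALL
SELECT OBJECT_NAME(ps.object_id), ps.avg_record_size_in_bytes, ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('TESTVARCHAR'), NULL, NULL, 'DETAILED') AS ps;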


APPROX_COUNT_DISTINCT in Oracle 12c.

During performance analysis, especially when checking for stale statistics, we often need the estimated NDV (number of distinct values) of a column.
If the data set is huge, the traditional COUNT(DISTINCT) approach takes considerable time to produce that answer.
For such analysis, an approximate or estimated distribution of column values is usually enough to understand selectivity and cardinality for the predicates involved.

With Oracle 12c (12.1.0.2), we have the function “APPROX_COUNT_DISTINCT”, which is claimed to be faster than the traditional COUNT(DISTINCT) approach for getting an idea of the NDV.

Let's try it out with some field work!

-- Create a table with NDVs of 1000, 5000, and 10000 for some columns.
CREATE TABLE TEST_APROX_CNT_DIS AS
(SELECT ROWNUM AS COL1 ,
MOD(ROWNUM ,1000) COL2 , --NDV 1000
MOD(ROWNUM ,5000) COL3 , --NDV 5000
MOD(ROWNUM ,10000) COL4 --NDV 10000
FROM DUAL
CONNECT BY LEVEL < 1000000
);

For different distinct-value counts, we will compare the elapsed time against the traditional approach.
APPROX_COUNT_DISTINCT, as the name implies, won't give the exact NDV for a column.
It calculates an approximate value, which is enough to understand column selectivity.

Below is a comparison of the values returned for different distinct-value counts.

COLUMN   COUNT(DISTINCT)   APPROX_COUNT_DISTINCT
COL2     1000              1012
COL3     5000              4835
COL4     10000             10050

From a performance point of view, APPROX_COUNT_DISTINCT was consistently faster than the usual COUNT(DISTINCT) approach for checking column NDV.

SELECT COUNT(DISTINCT COL2) FROM TEST_APROX_CNT_DIS;
Elapsed: 00:00:00.091

SELECT APPROX_COUNT_DISTINCT(COL2) FROM TEST_APROX_CNT_DIS;
Elapsed: 00:00:00.055

SELECT COUNT(DISTINCT COL3) FROM TEST_APROX_CNT_DIS;
Elapsed: 00:00:00.105

SELECT APPROX_COUNT_DISTINCT(COL3) FROM TEST_APROX_CNT_DIS;
Elapsed: 00:00:00.059

SELECT COUNT(DISTINCT COL4) FROM TEST_APROX_CNT_DIS;
Elapsed: 00:00:00.105

SELECT APPROX_COUNT_DISTINCT(COL4) FROM TEST_APROX_CNT_DIS;
Elapsed: 00:00:00.055

Let's compare the underlying execution plans for both approaches.

COUNT - DISTINCT
---------------------------------------------------------------
| Id | Operation              | Name               | Rows |
---------------------------------------------------------------
|  0 | SELECT STATEMENT       |                    |    1 |
|  1 |  SORT AGGREGATE        |                    |    1 |
|  2 |   VIEW                 | VW_DAG_0           | 1000 |
|  3 |    HASH GROUP BY       |                    | 1000 |
|  4 |     TABLE ACCESS FULL  | TEST_APROX_CNT_DIS |  999K|
---------------------------------------------------------------

APPROX_COUNT_DISTINCT
---------------------------------------------------------------
| Id | Operation              | Name               | Rows |
---------------------------------------------------------------
|  0 | SELECT STATEMENT       |                    |    1 |
|  1 |  SORT AGGREGATE APPROX |                    |    1 |
|  2 |   TABLE ACCESS FULL    | TEST_APROX_CNT_DIS |  999K|
---------------------------------------------------------------

With APPROX_COUNT_DISTINCT we get a new aggregation operation in the execution plan, “SORT AGGREGATE APPROX”.
In comparison, the traditional approach has to perform a GROUP BY and then an aggregation to produce the COUNT(DISTINCT) result.

If an analysis only needs an estimate of the NDV, with an acceptable variation from the actual value, APPROX_COUNT_DISTINCT is the function to use.
