TIMESTAMP to DATE Conversion with Online Redefinition

Online redefinition is a great way to make structural changes on big tables with heavy DML activity. Using online redefinition, operations such as partitioning or de-partitioning a table, adding or dropping columns, changing column data types and moving to another tablespace can be done with very little unavailability of the table compared with direct operations.

Here are some online redefinition MOS notes which make life easier:
  • Dbms_Redefinition Online Reorganization Of Tables (Doc ID 149564.1)
  • How To Re-Organize A Table Online (Doc ID 177407.1)
  • How To Shrink A Table Using Online Redefinition (Doc ID 1357878.1)
  • How To Compress A Table While It Is Online (Doc ID 1353967.1)
  • How To Move A Table To A New / Different Tablespace While It Is Online
  • How To Convert Long Column To Clob Using Dbms_Redefinition Package (Doc ID 251417.1)
  • Online Redefinition Of Table Using Rowid Option (Doc ID 210407.1)
  • An Example Of A Complex Online Table Redefinition (Dbms_Redefinition) (Doc ID 1358236.1)

My case was to convert a TIMESTAMP column to DATE.
In order to prepare a test environment:

SQL> create table table1 (col1 number, col2 timestamp);
SQL> insert into table1 values (1,systimestamp);
SQL> commit;
SQL> create table table2 (col1 number, col2 date);
SQL> begin
       dbms_redefinition.can_redef_table
        (uname        => 'eb'
        ,tname        => 'table1'
        ,options_flag => dbms_redefinition.cons_use_rowid);
     end;
     /
PL/SQL procedure successfully completed.
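After can_redef_table succeeds, the rest of the redefinition maps the TIMESTAMP column to DATE with a CAST in the column mapping. Roughly, it would look like this (a sketch based on the test tables above, not the exact commands from my session):

SQL> var num_errors number
SQL> begin
       dbms_redefinition.start_redef_table
        (uname        => 'eb'
        ,orig_table   => 'table1'
        ,int_table    => 'table2'
        ,col_mapping  => 'col1 col1, cast(col2 as date) col2'
        ,options_flag => dbms_redefinition.cons_use_rowid);
       dbms_redefinition.copy_table_dependents
        (uname      => 'eb'
        ,orig_table => 'table1'
        ,int_table  => 'table2'
        ,num_errors => :num_errors);
       dbms_redefinition.finish_redef_table
        (uname      => 'eb'
        ,orig_table => 'table1'
        ,int_table  => 'table2');
     end;
     /

When finish_redef_table completes, table1 has the DATE column and table2 (which now holds the old definition) can be dropped.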

Standby Database SCN - x$kcvfh

The case was to roll forward a physical standby with an RMAN SCN-based incremental backup taken from the primary. The standby database had just been restored and the necessary archived logs were somehow missing (that's another story). It was something I had already done in the past, so we set to work with my previous notes. We took the backup, copied the files to the standby server and recovered the standby database. But the problem was that the RECOVER DATABASE NOREDO statement was doing nothing, so media recovery kept asking for the same archived logs.

We cross-checked the steps with the Data Guard Concepts and Administration Guide; there was nothing we were missing.

And then, after a warning from my team-mate, we checked the note on My Oracle Support, "Steps to perform for Rolling forward a standby database using RMAN Incremental Backup. (Doc ID 836986.1)", and voila! My notes and the Administration Guide said "check the SCN on the standby database using V$DATABASE", while the support note said:

"You need to use the 'lowest SCN' from the 3 queries below"
SQL> select current_scn from v$database;
SQL> select min(fhscn) from x$kcvfh;
SQL> select min(f.fhscn) from x$kcvfh f, v$datafile d where f.hxfil = d.file# and d.enabled != 'READ ONLY';

Again, in my case no media recovery had been run on the standby, and the control file was not in sync with the datafile headers. V$DATABASE gives information about the database from the control file, and that was not the SCN I needed for the incremental backup. I needed the minimum SCN from the datafile headers, and that information is in x$kcvfh (Kernel Cache Recovery Component File Headers). Each row returned from this view is populated by reading the header of a file on disk. While searching more on this view, I realized it gives important information especially in problematic recovery cases.
Happy recovering...

Auditing on Oracle Database in a Nutshell (11gR2)

In an Oracle Database we can mention the following auditing types:
  • Mandatory Auditing
  • Standard Auditing
  • Fine-Grained Auditing
  • SYS Auditing
Mandatory Auditing causes database startup/shutdown and SYSDBA/SYSOPER login/logout information to be written to AUDIT_FILE_DEST. This auditing cannot be turned off, and its records are always written to the operating system directory specified by AUDIT_FILE_DEST.

Standard Auditing is controlled with the AUDIT_TRAIL parameter (NONE, OS, DB, DB_EXTENDED, XML, XML_EXTENDED).
When DB or DB_EXTENDED is used, audit records are written into the database (the AUD$ table). (Not the mandatory auditing records; they're always on the OS.)
The XML options write to the operating system in XML format.
When an *_EXTENDED value is used, SQLBIND and SQLTEXT information is included in the audit trail; otherwise it is not.
By default, Standard Auditing audits SQL statements that use certain system privileges (such as CREATE SESSION, CREATE USER and ALTER SYSTEM).
It's possible to audit statements, privileges and objects with Standard Auditing. For example:
  • Privilege auditing: audit select any table; 
  • Statement auditing: audit select table; 
  • Object auditing: audit select on SCOTT.SALARY;
Following views give information about current Standard Auditing configuration in the database:
  • DBA_STMT_AUDIT_OPTS; ==> describes current statements being audited across the system
  • DBA_PRIV_AUDIT_OPTS; ==> describes current system privileges being audited across the system
  • DBA_OBJ_AUDIT_OPTS;   ==> describes auditing options for all objects 
You can use the SQL AUDIT statement to set auditing options regardless of the setting of the AUDIT_TRAIL parameter. However, Oracle Database does not generate audit records until you enable Standard Auditing with the AUDIT_TRAIL parameter.
Auditing to the OS offers higher performance than auditing into the database.
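As a quick sketch of enabling Standard Auditing and auditing an object (the object name is made up; note that AUDIT_TRAIL is a static parameter, so an instance restart is needed, and in 11g the documented spelling of the extended value is DB, EXTENDED):

SQL> alter system set audit_trail=DB, EXTENDED scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> audit select on scott.salary by access;
SQL> select username, action_name, timestamp
     from dba_audit_trail where obj_name = 'SALARY';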

Fine-Grained Auditing is used to audit operations like: 
  • Accessing a table outside of normal working hours 
  • Logging in from a particular IP address 
  • Selecting or updating a particular table column
DBMS_FGA package is used to manage Fine-Grained Auditing. DBA_AUDIT_POLICIES view describes all fine-grained auditing policies in the database.
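As a sketch of the first case above (schema, table, column and policy names are made up for the example), a policy auditing access to a salary column outside working hours could look like this:

SQL> begin
       dbms_fga.add_policy
        (object_schema   => 'SCOTT'
        ,object_name     => 'SALARY'
        ,policy_name     => 'SALARY_OFF_HOURS'
        ,audit_column    => 'AMOUNT'
        ,audit_condition => 'to_number(to_char(sysdate,''HH24'')) not between 8 and 18'
        ,statement_types => 'SELECT,UPDATE');
     end;
     /

Records generated by this policy can be queried from DBA_FGA_AUDIT_TRAIL.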
It's not mandatory to enable Standard Auditing in order to use Fine-Grained Auditing or SYS Auditing.
SYS Auditing: the AUDIT_SYS_OPERATIONS parameter (TRUE/FALSE) enables or disables the auditing of SQL statements that are directly issued by users connected with SYSDBA or SYSOPER privileges (SQL statements run from within PL/SQL procedures or functions are not audited). These audit records are written to the OS.
It's not mandatory to enable Standard Auditing in order to use SYS Auditing.

Active Data Guard 12c New Features - Oracle Day 2014

Exadata MAA Presentation - Oracle Day 2013 Istanbul

Packt Publishing e-book Promotion

I would like to inform you that, to mark Columbus Day, Packt is offering its entire catalog of eBooks and videos at 50% off. This is the last day of the promotion, so if you plan to buy some e-books, don't miss it. http://bit.ly/1bqvB29

Discount code: COL50 

The Book is Out!

After one year of hard work, it's very nice to introduce the Oracle Data Guard 11gR2 Administration book, written by me and Nassyam Basha. As for me, I can honestly say that it has become a really good book and will help its readers a lot to learn and implement Data Guard. We wanted to prepare a book that starts Data Guard from scratch and covers all its important details with clear and easy-to-follow examples, and I think we succeeded. I hope readers will think the same.

There is a lot of information about what this book covers, who this book is for, etc. on the book's web page, so I will not repeat it here. But I wanted to write about the following topics, because I think they are important for potential readers (and writers).

- Why the Beginner's Guide Format...

The "Beginner's Guide" format is one of the writing templates of Packt. "Beginner's Guide" doesn't mean a "light" book in terms of content. This format was designed to help the beginner reader understand the subject easily, with more examples, hands-on exercises and Q&As. However, the book covers all Data Guard topics, from the basics to the advanced ones. You can see this when you look at the Table of Contents.

- Why 11gR2...

We of course discussed whether the book should be for 12c. However, most of the DBAs who will read this book will need to configure Data Guard 11gR2 for their production systems (and the rest will configure lower releases, but not 12c for now). As we all know, a new version needs at least 1-2 years before it is used on production systems, so a new-version book would be only informative, not practical, for now. On the other hand, this is a hands-on book to help DBAs build Data Guard configurations, and DBAs will mostly work on 11gR2 for at least several more years. It's not certain, but we may also update the book for 12c in the meantime.

- Writing a Book...

First of all, I have to say that it was more difficult than I thought to make time to write. The last 12 months have been the busiest time of my life. I had to write chapters at nights and on weekends, which of course affected my social and family life. Before authoring this book, in 2009 I contributed to the "RMAN 11g Backup & Recovery" book from Oracle Press by writing a chapter. So I should have known what writing a book is like, but I think the idea of authoring a book made me forget that :) Preparing the outline, writing the first drafts, editor reviews, technical reviews, technical editor reviews, the final review, the ever-increasing draft versions in the meantime, writing the preface, writing the Title Information Sheet, reviewing the preface, reviewing the Title Information Sheet and so on. In the end, one year full of writing :) When I look at my mailbox, the folder containing the e-mails related to the book has 1200 e-mails inside. These are only the e-mails I'm on, and I'm sure many more were sent within the Packt team. This may show how busy the people on this team were and how much effort went into it. In the end, I'm happy to hold the book in my hands, and I can say it's a strange feeling to browse through it :) Thanks again to everyone in the team, from the four corners of the world, who made this book possible.

- Reviews...

We received some (honest) reviews from very important Oracle professionals saying that the book is really good. Our editors and technical reviewers also made very nice comments during the writing phase. (Thanks again to Syed Jaffar Hussain, Michael Seberg, Joel Perez and the Packt editor team for their valuable comments and edits.) Seeing this nice feedback was the motivation for writing. As I said, I also trust in this book and I'm pretty sure it'll be a good resource to learn Data Guard.

I already gave thanks in the beginning pages of the book, but here I don't want to miss the chance to thank again my family for their support and my friend Nassyam Basha for all his effort and friendship.

Data Guard Queries

After all these years, my 5-year-old post "How To Query Dataguard Status" still gets the most visitors on this blog, so I wanted to write a fresh one including newer queries and commands. Here it is:

  • Standby database process status: You can run the following query on the standby database to see what the MRP and RFS processes are doing, and which blocks of which archivelog sequences are being shipped or applied.

SQL> select process, status, thread#, sequence#, block#, blocks from v$managed_standby ;

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CLOSING               1      69479     932864        261
ARCH      CLOSING               1      69480     928768        670
ARCH      CLOSING               2      75336     933888        654
ARCH      CLOSING               2      78079     930816        842
ARCH      CLOSING               1      69475     943104         79
RFS       IDLE                  0          0          0          0
RFS       RECEIVING             1      69481     688130       1024
MRP0      WAIT_FOR_LOG          2      78080          0          0
RFS       IDLE                  2      78080     873759          3
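Two more queries I find useful on the standby (from an 11gR2 configuration; output obviously depends on your environment):

-- Apply and transport lag:
SQL> select name, value, time_computed from v$dataguard_stats
     where name in ('transport lag', 'apply lag');

-- Last applied archivelog sequence per thread:
SQL> select thread#, max(sequence#) from v$archived_log
     where applied = 'YES' group by thread#;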

TROUG - Oracle EBS Day 2013

About "Expert Oracle Database Architecture" Book

I'm reading the "Expert Oracle Database Architecture" book by Thomas Kyte, which is a must-read for everyone who is serious about being an Oracle DBA.

Jonathan Lewis already said in the foreword of the book: "Frankly, if every DBA and developer in the world were made to work carefully through Tom Kyte's book, I'd probably have to start offering consultancy services to SQL Server users because the number of clients needing Oracle consultancy would drop dramatically." Still, I wanted to share some paragraphs from the "Data Files" part of the book. The following are taken from only three pages. I'm sharing this to show how full of information the book is and to encourage you to immediately buy one and read it.

You will find many CREATE statements that create multisegment objects. The confusion lies in the fact that a single CREATE statement may ultimately create objects that consist of zero, one, or more segments! For example, CREATE TABLE T ( x int primary key, y clob ) will create four segments: one for the TABLE T, one for the index that will be created in support of the primary key, and two for the CLOB (one segment for the CLOB is the LOB index and the other segment is the LOB data itself). On the other hand, CREATE TABLE T ( x int, y date ) cluster MY_CLUSTER will create zero segments (the cluster is the segment in this case).
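The four-segment example is easy to try yourself (object names are mine, not from the book). Note that with deferred segment creation, the default in 11gR2 and mentioned in the next quote, the segments may not show up until a row is inserted:

SQL> create table t ( x int primary key, y clob );
SQL> insert into t values (1, 'test');
SQL> commit;
SQL> select segment_name, segment_type from user_segments;

You should see the table segment, the primary key index segment and the two LOB segments (LOBSEGMENT and LOBINDEX).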

Extents vary in size from one Oracle data block to 2GB. 11g Release 2 has introduced the concept of a "deferred" segment (a segment that will not immediately allocate an extent), so in that release and going forward, a segment might defer allocating its initial extent until data is inserted into it.

Here’s a little-known fact: the default block size for a database does not have to be a power of two. Powers of two are just a commonly used convention. You can, in fact, create a database with a 5KB, 7KB, or nKB block size, where n is between 2KB and 32KB.

Most blocks, regardless of their size, have the same general format, which looks something like:

|Header          | -> type of block (table block, index block, and so on),
|                |    transaction information when relevant regarding active and
|                |    past transactions on the block, and the address (location)
|                |    of the block on the disk.
|----------------|    The next two block components are found on HEAP-
|                |    organized tables.
|Table Directory | -> The table directory, if present, contains information
|                |    about the tables that store rows in this block.
|----------------|
|Row Directory   | -> The row directory contains information describing the
|                |    rows that are to be found on the block.
|----------------|    The remaining pieces of the block are straightforward:
|Free Space      |    there may be free space on a block, and then there will
|----------------|    generally be used space that is currently storing data.
|Data            |
|----------------|
|Tail            |

Exceptions to this format include LOB segment blocks and hybrid columnar compressed blocks in Exadata storage, for example, but the vast majority of blocks in your database will resemble the format shown above.