
  • Shell Script to Accept User Input

Shell Script to Accept User Input

A Linux bash script can accept user input and read what the user types on the screen. This is achieved with the Linux read command inside the script. The script below asks the user for their favourite colour and prompts them with what they typed.

vi user_input_fav_col.sh

#!/bin/bash
echo -n "What is your favourite colour: "
read answer
echo "Oh! You like $answer!"

More ways to use user input in a bash script:

- Ask the user to input their name and output "Welcome " + username
- Ask the user to input their first and last name and output "Hello " + first name + last name
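As a starting point for the two exercises above, here is a minimal sketch (script name and prompt wording are my own):

#!/bin/bash
# Greet the user by name, then by first and last name.
echo -n "Enter your name: "
read username
echo "Welcome $username"

echo -n "Enter your first name: "
read first
echo -n "Enter your last name: "
read last
echo "Hello $first $last"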

  • Oracle Flashback

Oracle Flashback

Oracle Flashback allows you to move a database back in time. You can use flashback technology to move the entire database, or a particular table inside the database.

- Flashback Table Before Drop
- Flashback Table
- Flashback Database (Enable Flashback, Create Sample User, Flashback Database to SCN or Timestamp)

Note: Flashback Database must be enabled only for flashback database activity. No other flashback operation requires it.

Flashback Table Before Drop

You can flashback a dropped table from the recycle bin using the FLASHBACK TABLE command:

SHOW RECYCLEBIN;
FLASHBACK TABLE "BIN$gk3lsj/3akk5hg3j2lkl5j3d==$0" TO BEFORE DROP;

or

FLASHBACK TABLE SCOTT.FLASH_EMP TO BEFORE DROP;

You can even rename the table while flashing it back from the recycle bin:

FLASHBACK TABLE SCOTT.FLASH_EMP TO BEFORE DROP RENAME TO NEW_EMP;

Note: The recycle bin must be enabled to use FLASHBACK TABLE ... TO BEFORE DROP.

Flashback Table

You can flashback a table to a particular SCN or time in the past. Before you can flashback a table, you must enable row movement:

ALTER TABLE hr.employees ENABLE ROW MOVEMENT;

Now you are ready to flashback the table to an SCN or timestamp:

FLASHBACK TABLE EMP TO SCN <scn_no>;
FLASHBACK TABLE HR.EMPLOYEES TO TIMESTAMP TO_TIMESTAMP('2016-05-12 18:30:00', 'YYYY-MM-DD HH24:MI:SS');

Note: For FLASHBACK TABLE, enabling FLASHBACK DATABASE is not required at all.

Flashback Database

We can move an entire database back in time to a particular SCN or timestamp. Flashback Database must already be enabled on the database to use this feature.

Enable Flashback Database

Make sure the DB_RECOVERY_FILE_DEST parameter is set. This is the location where Oracle will store flashback logs:

SQL> alter system set db_recovery_file_dest='/u02/flash_logs' scope=spfile;

Set the DB_RECOVERY_FILE_DEST_SIZE parameter as per your requirement:

SQL> alter system set db_recovery_file_dest_size=50G scope=spfile;

Set the DB_FLASHBACK_RETENTION_TARGET parameter, which specifies the upper limit (in minutes) on how far back in time the database can be flashed back:

SQL> alter system set db_flashback_retention_target=2880;

Enable flashback database, which requires a database bounce:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database flashback on;
SQL> alter database open;
SQL> select flashback_on from v$database;

Create Sample User

Let us capture the database SCN before we create a user:

SQL> SELECT current_scn, SYSTIMESTAMP FROM v$database;

Current SCN: 2703232

Create a user FLASH_USR and try to connect to the database as that user:

SQL> create user flash_usr identified by flash_usr;
SQL> grant connect, resource to flash_usr;
SQL> conn flash_usr/flash_usr

Flashback Database to SCN or Timestamp

Assume that the user was created by mistake and you want to flashback the database to the SCN just before the user creation. Shut down the database and start it in mount mode:

SQL> shut immediate;
SQL> startup mount;

Flashback the database to the SCN before the user creation and open it with resetlogs:

SQL> flashback database to scn 2703232;
SQL> alter database open resetlogs;

You can flashback the database to a particular timestamp too:

FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2016-05-12 18:30:00', 'YYYY-MM-DD HH24:MI:SS');
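Before relying on Flashback Database, it helps to confirm that it is on and how far back you can go. A minimal sketch, assuming ORACLE_SID is set and you can connect as SYSDBA:

#!/bin/bash
# Show flashback status and the oldest point the database can flash back to.
sqlplus -s / as sysdba <<'EOF'
set lines 120
select flashback_on from v$database;
select oldest_flashback_scn, oldest_flashback_time from v$flashback_database_log;
EOF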

  • Shell Script to Find Text Files Containing Word 'user_defined'

Shell Script to Find Text Files Containing Word 'user_defined'

A Linux shell script lets the user find the files in a directory that contain a given word. To achieve this, we use the find and grep commands. The user is asked to enter the text to search for.

vi search_in_files.sh

#!/bin/bash
echo "Type the word to find in text files"
read fn
find . -type f -exec grep -l "$fn" {} \;

More ways to use find and grep: you can select only those lines containing matches that form whole words using the -w option. You can also use this to search for two or more words:

egrep -w -R 'word1|word2' ~/projects/
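A small variation of the script above, restricted to .txt files and printing the matching line numbers as well (prompt text and variable names are illustrative):

#!/bin/bash
# Search only regular *.txt files; -Hn prints the file name and line number.
echo -n "Word to search for: "
read word
find . -type f -name "*.txt" -exec grep -Hn "$word" {} \;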

  • Oracle Table and Index Partitions

Oracle Table and Index Partitions

Partitioning allows tables and indexes to be divided into smaller parts for easier management inside the database. You can create small portions of very large tables, which makes data access simpler and more manageable.

- Create Table Partitions
- Create Partitioned Index
- Partitioning an Existing Non-Partitioned Table
- Add Partition to an Existing Partitioned Table

Create Table Partitions

You can use the query below to create a sample table with a time range partition. Note that each partition is placed in a different tablespace inside the database:

CREATE TABLE time_range_sales (
  prod_id       NUMBER(6) primary key,
  cust_id       NUMBER,
  time_id       DATE,
  quantity_sold NUMBER(3),
  amount_sold   NUMBER(10,2)
)
PARTITION BY RANGE (time_id)
(PARTITION SALES_1998 VALUES LESS THAN (TO_DATE('01-JAN-1999','DD-MON-YYYY')) tablespace users,
 PARTITION SALES_1999 VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY')) tablespace examples,
 PARTITION SALES_2000 VALUES LESS THAN (TO_DATE('01-JAN-2001','DD-MON-YYYY')) tablespace xyz,
 PARTITION SALES_2001 VALUES LESS THAN (MAXVALUE) tablespace abc);

You can query user_tab_partitions to get the table name, partition name, number of rows in each partition and more:

COLUMN high_value FORMAT A20
SELECT table_name, partition_name, high_value, num_rows
FROM user_tab_partitions
ORDER BY table_name, partition_name;

You can query an individual table partition to get records only from that specific partition:

select * from TIME_RANGE_SALES partition (SALES_1998);
select * from TIME_RANGE_SALES partition (SALES_1999);
select * from TIME_RANGE_SALES partition (SALES_2000);
select * from TIME_RANGE_SALES partition (SALES_2001);

You can always query a partitioned table like a normal table too. In this case, the output comes from all partitions of the table:

select * from TIME_RANGE_SALES;

Create Partitioned Index

You can create two types of indexes on a partitioned table: local and global. Use the LOCAL keyword in the CREATE INDEX statement to create a local index:

CREATE INDEX time_range_sales_indx ON TIME_RANGE_SALES(time_id) LOCAL
(PARTITION year_1 TABLESPACE users,
 PARTITION year_2 TABLESPACE users,
 PARTITION year_3 TABLESPACE users,
 PARTITION year_4 TABLESPACE users);

To create a global index, use the GLOBAL keyword with CREATE INDEX:

CREATE INDEX glob_time_range_sales ON TIME_RANGE_SALES(time_id)
GLOBAL PARTITION BY RANGE (time_id)
(PARTITION glob_y1 VALUES LESS THAN (TO_DATE('01-JAN-1999','DD-MON-YYYY')),
 PARTITION glob_y2 VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY')),
 PARTITION glob_y3 VALUES LESS THAN (TO_DATE('01-JAN-2001','DD-MON-YYYY')),
 PARTITION glob_y4 VALUES LESS THAN (MAXVALUE));

Check partitioned indexes:

select INDEX_NAME, PARTITION_NAME from user_ind_partitions;

Partitioning an Existing Non-Partitioned Table

Oracle 12cR2 allows you to perform an online conversion of a non-partitioned table to a partitioned table without any impact on end users' access to the table data. Previous Oracle releases did not have the option of partitioning an existing table directly and required several steps to achieve it. We will look at all the methods, starting with the latest:

- Online Partition of an Existing Table (12cR2 onwards)
- Export & Import Partition Table
- Split and Exchange Partition

Online Partition of an Existing Table (12cR2 onwards)

Oracle 12cR2 requires only a single command to convert a non-partitioned table into a partitioned table. Assume you would like to create year-wise range partitions on a column that contains date and time:

ALTER TABLE hr.tab1 MODIFY
PARTITION BY RANGE (transaction_date)
( PARTITION TRX_2018 VALUES LESS THAN (TIMESTAMP' 2018-12-31 23:59:59.000000000+00:00') tablespace TBS_01,
  PARTITION TRX_2019 VALUES LESS THAN (TIMESTAMP' 2019-12-31 23:59:59.000000000+00:00') tablespace TBS_01,
  PARTITION TRX_2020 VALUES LESS THAN (TIMESTAMP' 2020-12-31 23:59:59.000000000+00:00') tablespace TBS_01,
  PARTITION TRX_2021 VALUES LESS THAN (TIMESTAMP' 2021-12-31 23:59:59.000000000+00:00') tablespace TBS_01
) ONLINE;

Notice the ONLINE clause at the end of the statement. Let's gather statistics:

EXEC DBMS_STATS.gather_table_stats('HR', 'TAB1');

Check the table partitions:

SELECT table_name, partition_name, num_rows FROM dba_tab_partitions WHERE table_name='TAB1' ORDER BY 1,2;

The index on the transaction_date column is automatically converted into a locally partitioned index. Check that the index partitions were created:

SELECT index_name, table_name, partitioned, status FROM dba_indexes WHERE table_name='TAB1' ORDER BY 1;
SELECT index_name, partition_name, status, num_rows FROM dba_ind_partitions WHERE index_name='&index_name' ORDER BY 1,2;

Export & Import Partition Table

First check whether the table is partitioned using the query below:

SQL> select TABLE_NAME, CLUSTER_NAME, PARTITIONED from dba_tables where table_name='SALES_NEW';
SQL> select * from sales_new;

   PROD_ID    CUST_ID TIME_ID   QUANTITY_SOLD AMOUNT_SOLD
---------- ---------- --------- ------------- -----------
       116      11393 05-JUN-99             1         100
        40     100530 30-NOV-98             1         200
       118        133 06-JUN-01             2         300
        36       4523 27-JAN-99             1         500
       125       9417 04-FEB-98             1         600
        30        170 23-FEB-01             1         700
        24      11899 26-JAN-99             1         800
        45       9491 28-AUG-98             1         100
       133       9450 01-DEC-00             1         400
        35       2606 17-FEB-00             1         900

Export the table using the Oracle exp utility:

exp file=sales_new.exp tables=sales_new

[oracle@ahmed ~]$ ls -lrt
total 20
drwxr-xr-x 3 oracle oinstall  4096 Mar 10 13:21 oradiag_oracle
-rw-r--r-- 1 oracle oinstall 16384 Mar 11 03:20 sales_new.exp

Rename the original table:

SQL> rename sales_new to sales_new_bkp;

Create the new partitioned table under the original name so the import lands in it:

CREATE TABLE sales_new (
  prod_id       NUMBER(6) primary key,
  cust_id       NUMBER,
  time_id       DATE,
  quantity_sold NUMBER(3),
  amount_sold   NUMBER(10,2)
)
PARTITION BY RANGE (time_id)
(PARTITION SALES_1998 VALUES LESS THAN (TO_DATE('01-JAN-1999','DD-MON-YYYY')),
 PARTITION SALES_1999 VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY')),
 PARTITION SALES_2000 VALUES LESS THAN (TO_DATE('01-JAN-2001','DD-MON-YYYY')),
 PARTITION SALES_2001 VALUES LESS THAN (MAXVALUE));

Import the data into the new partitioned table:

imp file=sales_new.exp tables=sales_new ignore=y

Drop the old table:

DROP TABLE SALES_NEW_BKP;

Query partitions from the new partitioned table:

select * from SALES_NEW partition (SALES_1998);
select * from SALES_NEW partition (SALES_1999);
select * from SALES_NEW partition (SALES_2000);
select * from SALES_NEW partition (SALES_2001);
SELECT * FROM SALES_NEW;

Split & Exchange Partition

Create a test table for this activity and make sure it is not a partitioned table:

CREATE TABLE my_table (
  id          NUMBER,
  description VARCHAR2(50)
);

INSERT INTO my_table (id, description) VALUES (1, 'One');
INSERT INTO my_table (id, description) VALUES (2, 'Two');
INSERT INTO my_table (id, description) VALUES (3, 'Three');
INSERT INTO my_table (id, description) VALUES (4, 'Four');
COMMIT;

Now create another partitioned table with a single partition that will hold the whole table:

CREATE TABLE my_table_2 (
  id          NUMBER,
  description VARCHAR2(50)
)
PARTITION BY RANGE (id)
(PARTITION my_table_part VALUES LESS THAN (MAXVALUE));

Switch the original table segment with the partitioned table segment:

ALTER TABLE my_table_2 EXCHANGE PARTITION my_table_part WITH TABLE my_table WITHOUT VALIDATION;

Drop the original table and rename the partitioned table:

DROP TABLE my_table;
RENAME my_table_2 TO my_table;

Finally, split the new partitioned table into multiple partitions:

ALTER TABLE my_table SPLIT PARTITION my_table_part AT (3)
INTO (PARTITION my_table_part_1, PARTITION my_table_part_2);

Check the partition details via the query below:

SELECT table_name, partition_name, high_value, num_rows FROM dba_tab_partitions ORDER BY table_name, partition_name;

Add Partition to an Existing Partitioned Table

To add a new partition to an existing partitioned table, use the command below (see the caveat and sketch after this section):

ALTER TABLE SALES_NEW ADD PARTITION sales_2002 values less than (TO_DATE('01-JAN-2003','DD-MON-YYYY')) tablespace TBS_01;

Run the command below to check the high value of each partition:

SELECT partition_name, high_value FROM ALL_TAB_PARTITIONS;

Further Read: Partition Table with DBMS_REDEFINITION
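One caveat worth noting: if the table's highest partition is bounded by MAXVALUE (as SALES_2001 is in the examples above), ADD PARTITION fails with ORA-14074, and the usual workaround is to split the MAXVALUE partition instead. A minimal sketch run through sqlplus (partition names are illustrative):

#!/bin/bash
# Split the MAXVALUE partition instead of using ADD PARTITION when one exists:
# rows with time_id < 01-JAN-2002 land in SALES_2001_DATA, the rest stay in SALES_MAX.
sqlplus -s / as sysdba <<'EOF'
ALTER TABLE sales_new SPLIT PARTITION sales_2001
  AT (TO_DATE('01-JAN-2002','DD-MON-YYYY'))
  INTO (PARTITION sales_2001_data, PARTITION sales_max);
EOF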

  • MS SQL Server | DBA Genesis Support

    MS SQL Server

    MS SQL Server to Oracle Replication Using Golden Gate — In this project, we will perform single-table DML replication from MS SQL Server (on Windows 10) to an Oracle database (on Linux) using...

  • Golden Gate Memory Usage

Golden Gate Memory Usage

Each Golden Gate process takes approximately 25-50 MB of memory, sometimes more depending on the transactions. Here is a script that will help you find the total memory usage on the source and target servers. This script is written for Linux.

- Script to Find Golden Gate Memory Usage
- Source server output
- Target server output

Script to Find Golden Gate Memory Usage

#!/bin/bash

###############################
# determine the OS type
###############################
OSNAME=`uname`
case "$OSNAME" in
  "SunOS") echo "OSNAME = $OSNAME" ;;
  "Linux") echo "OSNAME = $OSNAME" ;;
  *) echo "This script has not been verified on $OSNAME"
     exit 1 ;;
esac

###############################
# set the temp file
###############################
TMPFILE=/tmp/pmem.tmp
if [ -f $TMPFILE ]
then
  rm -f $TMPFILE
fi

################################
# loop over the gg process types
################################
PROCESSES="extract replicat"
for PROCESS in $PROCESSES
do
  FLAG=`ps -ef | grep $PROCESS | grep -v grep`
  if [ -z "$FLAG" ]
  then
    echo "No $PROCESS processes found"
  else
    echo
    echo "#####################################"
    echo "# Individual $PROCESS Process Usage #"
    echo "#####################################"
    case "$OSNAME" in
      "Linux")
        ps -C $PROCESS -O rss > $TMPFILE
        cat $TMPFILE | grep $PROCESS | awk '{print $2/1024, "MB", $12}' | sort -k 2
        ;;
      "SunOS")
        ps -efo vsz,uid,pid,ppid,pcpu,args | grep -v grep | grep $PROCESS > $TMPFILE
        cat $TMPFILE | grep $PROCESS | awk '{print $1/1024, "MB", $8}' | sort -k 2
        ;;
      *)
        echo "This script has not been verified on $OSNAME"
        exit 1
        ;;
    esac
    rm -f $TMPFILE

    echo
    echo "#####################################"
    echo "#   Total $PROCESS Process Usage    #"
    echo "#####################################"
    case "$OSNAME" in
      "Linux")
        ps -C $PROCESS -O rss > $TMPFILE
        cat $TMPFILE | grep $PROCESS | awk '{count++; sum=sum+$2;} END \
          {print "Number of processes =", count; \
           print "AVG Memory usage/process =", sum/1024/count, "MB"; \
           print "Total memory usage =", sum/1024, "MB"}'
        ;;
      "SunOS")
        ps -efo vsz,uid,pid,ppid,pcpu,comm | grep -v grep | grep $PROCESS > $TMPFILE
        cat $TMPFILE | awk '{count++; sum=sum+$1;} END \
          {print "Number of processes =", count; \
           print "AVG Memory usage/process =", sum/1024/count, "MB"; \
           print "Total memory usage =", sum/1024, "MB"}'
        ;;
      *)
        echo "This script has not been verified on $OSNAME"
        exit 1
        ;;
    esac
    rm -f $TMPFILE
  fi
done
exit

Source server output

[oracle@ggprod gg]$ ./gg_memory_usage.sh
OSNAME = Linux

#####################################
# Individual extract Process Usage #
#####################################
34.5703 MB DP1
49.1172 MB EXT1

#####################################
#   Total extract Process Usage     #
#####################################
Number of processes = 2
AVG Memory usage/process = 41.8438 MB
Total memory usage = 83.6875 MB

#####################################
# Individual replicat Process Usage #
#####################################

#####################################
#   Total replicat Process Usage    #
#####################################
Number of processes =
awk: cmd. line:2: fatal: division by zero attempted

Ignore the error at the end; it simply means there are no replicat processes running on the source server.

Target server output

The same script can be run on the target server, where it gives you the replicat details:

[oracle@ggdev gg]$ ./gg_memory_usage.sh
OSNAME = Linux

#####################################
# Individual extract Process Usage #
#####################################

#####################################
#   Total extract Process Usage     #
#####################################
Number of processes =
awk: cmd. line:2: fatal: division by zero attempted

#####################################
# Individual replicat Process Usage #
#####################################
34.3008 MB REP1

#####################################
#   Total replicat Process Usage    #
#####################################
Number of processes = 1
AVG Memory usage/process = 34.3008 MB
Total memory usage = 34.3008 MB
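For a quick check on Linux without the full script, a one-liner per process type works too; this version guards against the division-by-zero seen above (formatting is my own):

#!/bin/bash
# Sum resident memory (RSS, reported in KB by ps) for each Golden Gate process type.
for p in extract replicat; do
  ps -C "$p" -o rss= | awk -v proc="$p" '
    {n++; s+=$1}
    END {if (n) printf "%s: %d process(es), %.1f MB total\n", proc, n, s/1024;
         else   printf "%s: no processes found\n", proc}'
done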

  • RMAN Duplicate Database PITR

RMAN Duplicate Database PITR

RMAN active database duplication performs database cloning from the live database and does not use RMAN backups. With the RMAN DUPLICATE command, however, you can also use a previously taken backup and perform a point-in-time clone on a target server.

- Prerequisites
- RMAN Duplicate PITR with the Same SID
- RMAN Duplicate PITR with a Different SID

Prerequisites

Before you begin this activity, make sure the items below are addressed:

- The source database is running in archivelog mode
- Check the space requirements on the target server
- The servers are able to ping each other

RMAN Duplicate PITR with the Same SID

In this section we will duplicate a database using RMAN where the source and target SIDs are the same. Our source SID is prod and the clone database SID will also be prod. Also note that the database file locations on the target server will be the same as on the source server. I would like to perform a PITR clone (to time 07:00 am) of the prod database using the backup taken at 06:52 am.

Configure the listener on the clone server to accept incoming connections:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = clone.dbagenesis.com)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/12.2.0/db_home)
      (SID_NAME = prod)
    )
  )

Start the listener on the clone server:

lsnrctl start listener

Configure tns entries on the prod server to connect to the clone server:

prod_clone =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = clone.dbagenesis.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = prod)
    )
  )

Make sure tnsping works from the prod server to the clone server:

tnsping prod_clone

On the prod server, create a pfile from the spfile and copy it to the clone server:

create pfile from spfile;
cd $ORACLE_HOME/dbs
scp initprod.ora oracle@clone.dbagenesis.com:$ORACLE_HOME/dbs

Also copy the password file from the prod server to the clone server:

scp orapwprod oracle@clone.dbagenesis.com:$ORACLE_HOME/dbs

If no password file exists, create one via the command below and then copy it:

orapwd file=$ORACLE_HOME/dbs/orapwprod force=y

On the clone server, open the pfile that you copied from the prod server and create the directory locations it references:

cat $ORACLE_HOME/dbs/initprod.ora

In my case I had to create two main locations:

mkdir -p
mkdir -p

For PITR cloning, we search for the backup taken before the PITR time and then find the locations of the backup pieces. In my case, the backup was taken at 06:52, which is before the target PITR time of 07:00 am. I will use those particular backups to perform the cloning. Let us check each backup tag and copy the backup pieces to the clone server under the same locations as on the source.

We start with the first backup tag. Create the same backup piece location on the clone server as you see under Piece Name:

mkdir -p /u01/FRA/PROD/backupset/2021_01_24

Copy the backup piece from the source to the clone server:

scp /u01/FRA/PROD/backupset/2021_01_24/o1_mf_annnn_TAG20210124T065238_j0t69pdz_.bkp oracle@clone.dbagenesis.com:/u01/FRA/PROD/backupset/2021_01_24/o1_mf_annnn_TAG20210124T065238_j0t69pdz_.bkp

Check the second backup tag. We already created the backup piece location on the clone server in the previous step, so just copy the backup piece to the target server:

scp /u01/FRA/PROD/backupset/2021_01_24/o1_mf_nnndf_TAG20210124T065239_j0t69qrh_.bkp oracle@clone.dbagenesis.com:/u01/FRA/PROD/backupset/2021_01_24/o1_mf_nnndf_TAG20210124T065239_j0t69qrh_.bkp

Check the third backup tag. The location already exists on the clone server, so just copy the backup piece:

scp /u01/FRA/PROD/backupset/2021_01_24/o1_mf_annnn_TAG20210124T065242_j0t69tyh_.bkp oracle@clone.dbagenesis.com:/u01/FRA/PROD/backupset/2021_01_24/o1_mf_annnn_TAG20210124T065242_j0t69tyh_.bkp

Check the fourth backup tag. Create the same backup piece location on the clone server as you see under Piece Name:

mkdir -p /u01/FRA/PROD/autobackup/2021_01_24

Copy the backup piece from the source to the clone server:

scp /u01/FRA/PROD/autobackup/2021_01_24/o1_mf_s_1062658364_j0t69w43_.bkp oracle@clone.dbagenesis.com:/u01/FRA/PROD/autobackup/2021_01_24/o1_mf_s_1062658364_j0t69w43_.bkp

Find all the archivelogs needed for recovery by searching for archivelogs generated after the backup was triggered up until the PITR time, and copy them to the clone server:

set lines 999;
col name for a60;
select first_time, next_time, name from v$archived_log;

Take the clone database to nomount mode:

export ORACLE_SID=prod
sqlplus / as sysdba
startup nomount;
exit;

Start RMAN on the source and perform the PITR cloning (use the same sys password for both target and auxiliary):

rman target sys auxiliary sys@prod_clone

duplicate target database to 'prod' until time "TO_DATE('2021-01-24 07:00:00', 'YYYY-MM-DD HH24:MI:SS')" nofilenamecheck;

RMAN Duplicate PITR with a Different SID

To perform PITR to a different SID with different file locations, there are a few more steps. In this section, we will clone the prod database to the clone SID.

Configure the listener on the clone server to accept incoming connections:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = clone.dbagenesis.com)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/12.2.0/db_home)
      (SID_NAME = clone)
    )
  )

Start the listener:

lsnrctl start listener

Configure tns entries on the prod server:

clone =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = clone.dbagenesis.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = clone)
    )
  )

Make sure tnsping works from the prod server to the clone server:

tnsping clone

On the prod server, create a pfile from the spfile and copy it to the clone server:

SQL> create pfile from spfile;

cd $ORACLE_HOME/dbs
scp initprod.ora oracle@clone.dbagenesis.com:$ORACLE_HOME/dbs/initclone.ora

Also copy the password file from the prod server to the clone server:

scp orapwprod oracle@clone.dbagenesis.com:$ORACLE_HOME/dbs/orapwclone

If no password file exists, create one via the command below and then copy it:

orapwd file=$ORACLE_HOME/dbs/orapwclone force=y

On the clone server, open the pfile that you copied from the prod server. Replace prod with the clone SID:

vi $ORACLE_HOME/dbs/initclone.ora
:%s/prod/clone/g

Add the two parameters below to convert the data file and log file locations while cloning the database:

*.db_file_name_convert='/u01/app/oracle/oradata/PROD','/u01/app/oracle/oradata/clone'
*.log_file_name_convert='/u01/app/oracle/oradata/PROD','/u01/app/oracle/oradata/clone'

Save the pfile and exit. Create the respective directories referenced in the clone pfile. In my case, I had to create the directories below:

mkdir -p
mkdir -p
mkdir -p
mkdir -p

Make the backup files from the source database available to the destination server:

cd /u01/app/oracle/fast_recovery_area/PROD/
scp -r archivelog oracle@clone.dbagenesis.com:/u01/app/oracle/fast_recovery_area/PROD/
scp -r autobackup oracle@clone.dbagenesis.com:/u01/app/oracle/fast_recovery_area/PROD/
scp -r backupset oracle@clone.dbagenesis.com:/u01/app/oracle/fast_recovery_area/PROD/

Take the clone server database to nomount stage:

export ORACLE_SID=clone
sqlplus / as sysdba
startup nomount;
exit;

Start RMAN on the source and connect to the clone database (use the same sys password for both target and auxiliary):

rman target sys auxiliary sys@clone

duplicate target database to 'clone' until time "TO_DATE('2021-01-07 14:27:17', 'YYYY-MM-DD HH24:MI:SS')" nofilenamecheck;

Further Read: Duplicating a Database | Configure RMAN channels for database duplication | Duplicating Database - Advanced Topics
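Once the duplicate finishes, a quick sanity check on the clone confirms it opened with a new incarnation. A minimal sketch, assuming you run it on the clone server:

#!/bin/bash
# Verify the cloned database: name, DBID and resetlogs time should reflect the clone.
export ORACLE_SID=clone
sqlplus -s / as sysdba <<'EOF'
select name, dbid, open_mode, resetlogs_time from v$database;
EOF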

  • Oracle Golden Gate Functions

Oracle Golden Gate Functions

Oracle Golden Gate functions are a set of predefined functions that can be used to manipulate data during replication. They can perform a variety of tasks, such as converting data types, formatting data, performing mathematical calculations, testing conditions and much more. Consider the source and target tables below.

Client requirement:

- If HOURLY_PRICE is above 75 then EMP_LEVEL = Senior; if HOURLY_PRICE is above 50 then EMP_LEVEL = Junior; else EMP_LEVEL = Fresher
- Give a 10% bonus to all employees and set the target BONUS = 10% of SALARY
- If the source DEPT_NAME is missing, default it to IT
- TRNX on the target must contain the exact time at which the transaction was executed
- Combine FIRST_NAME and LAST_NAME on the target as FULL_NAME
- Convert the target FULL_NAME to upper case

Assumption: all other columns with matching names map by default.

Setup Replication with Golden Gate Functions

Create the GG_FUN table on proddb:

On proddb:
==========
conn fox/fox

CREATE TABLE gg_fun (
  EMPID        NUMBER PRIMARY KEY,
  FIRST_NAME   VARCHAR2(10),
  LAST_NAME    VARCHAR2(10),
  SALARY       NUMBER,
  HOURLY_PRICE NUMBER,
  DEPT_NAME    VARCHAR2(4),
  PHONE        NUMBER,
  COUNTRY      VARCHAR2(20)
);

Create GG_FUN_TEST on devdb:

On devdb:
=========
conn tom/tom

CREATE TABLE gg_fun_test (
  EMP_ID    NUMBER PRIMARY KEY,
  FULL_NAME VARCHAR2(20),
  SALARY    NUMBER,
  BONUS     NUMBER,
  EMP_LEVEL VARCHAR2(20),
  DEPT_NAME VARCHAR2(4),
  PHONE     VARCHAR2(10),
  TRNX      DATE,
  COUNTRY   VARCHAR2(20)
);

Connect to the database via Golden Gate and add table-level supplemental logging:

On proddb:
==========
cd $GG_HOME
./ggsci

GGSCI> dblogin userid ogg, password ogg
Successfully logged into database.

GGSCI> add trandata FOX.GG_FUN
Logging of supplemental redo data enabled for table FOX.GG_FUN.
TRANDATA for scheduling columns has been added on table 'FOX.GG_FUN'.

Create Source Definitions File

First, we need to create a parameter file for the DEFGEN utility. You can create it via the GG prompt or manually via the vi editor:

On proddb:
==========
GGSCI> edit params defgen3

DEFSFILE ./dirdef/FoxGG_FUN.def
USERID ogg PASSWORD ogg
TABLE FOX.GG_FUN;

Exit the GG prompt and run the defgen utility to generate the definitions file:

On proddb:
==========
cd $GG_HOME
./defgen paramfile ./dirprm/defgen3.prm

Copy the definitions file to the target server under the $GG_HOME/dirdef location.

Create the extract on the source, then create the data pump process:

GGSCI> add extract PFOXGND, EXTTRAILSOURCE ./dirdat/tn
GGSCI> add rmttrail /u01/app/oracle/product/gg/dirdat/rn, extract PFOXGND
GGSCI> edit param PFOXGND

EXTRACT PFOXGND
USERID ogg, PASSWORD ogg
RMTHOST ggdev, MGRPORT 7809
RMTTRAIL /u01/app/oracle/product/gg/dirdat/rn
TABLE FOX.GG_FUN;

Now create the replicat process on the target as per the client requirement:

On devdb:
=========
GGSCI> dblogin userid ogg, password ogg
GGSCI> add replicat DTOMGNR, integrated exttrail /u01/app/oracle/product/gg/dirdat/rn
GGSCI> edit param DTOMGNR

REPLICAT DTOMGNR
USERID ogg, PASSWORD ogg
SOURCEDEFS ./dirdef/FoxGG_FUN.def
MAP fox.gg_fun TARGET tom.gg_fun_test, &
COLMAP (EMP_ID=EMPID, &
FULL_NAME=@STRUP(@STRCAT(FIRST_NAME,' ',LAST_NAME)), &
SALARY=SALARY, &
BONUS=@COMPUTE(0.1*SALARY), &
EMP_LEVEL=@EVAL(HOURLY_PRICE > 75, 'SENIOR', HOURLY_PRICE > 50, 'JUNIOR', 'FRESHER'), &
DEPT_NAME=@IF(@COLTEST(DEPT_NAME, NULL, MISSING),'IT',DEPT_NAME), &
PHONE=@NUMSTR(PHONE), &
TRNX=@DATENOW(), &
COUNTRY=COUNTRY);

Start the extract, pump and replicat processes:

On proddb:
==========
GGSCI> start PFOXGNE
GGSCI> start PFOXGND

On devdb:
=========
GGSCI> start DTOMGNR

Let us test our replication:

On proddb:
==========
INSERT INTO gg_fun VALUES (1001,'John','Doe',2000,70,'HR',987654321,'USA');
INSERT INTO gg_fun VALUES (1002,'Daniel','David',5000,110,NULL,4444444444,'UK');
INSERT INTO gg_fun VALUES (1003,'Robin','Lee',1000,10,'FIN',123123212,'CAN');
COMMIT;
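After the test inserts, you can confirm the transformations on the target side. A minimal sketch (connection details as used in the article):

#!/bin/bash
# On devdb: FULL_NAME should be upper case, BONUS = 10% of SALARY,
# EMP_LEVEL derived from HOURLY_PRICE, DEPT_NAME defaulted to IT where missing.
sqlplus -s tom/tom <<'EOF'
set lines 150
select emp_id, full_name, salary, bonus, emp_level, dept_name, trnx, country
from gg_fun_test
order by emp_id;
EOF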

  • Convert Physical Standby into Active Data Guard

Convert Physical Standby into Active Data Guard

Active Data Guard is a feature of Oracle Database that allows the physical standby database to be open for read-only and reporting operations while continuously applying changes from the primary database in real time.

- Enable Active Data Guard
- Verify MRP Is Running
- Test Active Data Guard
- Revert Back to Physical Standby

In very simple terms, when you open a physical standby in read-only mode, it is known as Active Data Guard. But Active Data Guard requires a license, and you must check with Oracle before implementing it.

Enable Active Data Guard

Let us open the physical standby and test Active Data Guard:

On standby:
===========
SQL> alter database recover managed standby database cancel;
SQL> alter database open;
SQL> select name, open_mode, database_role from v$database;
SQL> alter database recover managed standby database disconnect;

Verify MRP Is Running

Use the query below to check whether MRP is running in the background:

On standby (active data guard):
===============================
SQL> select process, status, sequence# from v$managed_standby;

Test Active Data Guard

As our Active Data Guard standby is open for read-only queries and background recovery is active, let us create a table on the primary and see whether it is reflected on the standby:

On primary:
===========
SQL> create table test(sno number(2), sname varchar2(10));
SQL> insert into test values (1, 'John');
SQL> commit;
SQL> alter system switch logfile;

On standby:
===========
SQL> select * from test;

Revert Back to Physical Standby

If you want to convert Active Data Guard back to a physical standby, follow the commands below:

On standby:
===========
SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate;
SQL> startup mount;
SQL> select name, open_mode, database_role from v$database;
SQL> alter database recover managed standby database disconnect;
SQL> select process, status, sequence# from v$managed_standby;
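You can also verify that redo is still being applied in real time while the standby is open, for example by checking the lag metrics. A minimal sketch, run on the standby:

#!/bin/bash
# Transport and apply lag should stay near zero on a healthy Active Data Guard standby.
sqlplus -s / as sysdba <<'EOF'
col name for a20
col value for a20
select name, value from v$dataguard_stats where name in ('transport lag','apply lag');
EOF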

  • Golden Gate | DBA Genesis Support

    Golden Gate

    Oracle Golden Gate Interview Questions — Here are some of the frequently asked Oracle Golden Gate interview questions and answers. Enjoy! Q. What is the significance of Oracle...
    Golden Gate Performance Tuning — Golden Gate performance tuning is the process of optimizing the performance of Oracle Golden Gate replication. It can be done by...
    Golden Gate Memory Usage — Each Golden Gate process takes approximately 25-50 MB of memory, or sometimes more depending upon the transactions. Here is one...
    MS SQL Server to Oracle Replication Using Golden Gate — In this project, we will perform single-table DML replication from MS SQL Server (on Windows 10) to an Oracle database (on Linux) using...
    Maintain Source Table Transaction History Using Golden Gate — Sometimes you want to have a history table that can store all the transactions performed on a source table. This kind of history table...
    Oracle Golden Gate Functions — Oracle Golden Gate functions are a set of predefined functions that can be used to manipulate data during replication. These functions...
    Golden Gate Replication When Table Structure Is Different – COLMAP — In this article, we will be using the Golden Gate COLMAP parameter for mapping table columns where the table structure is different. Get...
    Oracle to MySQL Replication Using Golden Gate — In this article, we will perform single-table DML replication from an Oracle database (Linux) to a MySQL database (Linux) using Oracle Golden...
    Oracle Golden Gate DDL Replication — Data Definition Language (DDL) replication is the process of replicating database schema changes, such as creating, dropping, and...
    Configure Golden Gate Initial Load and Change Sync — Oracle Golden Gate initial load is a process of extracting data records from the source database and loading those records onto the...
    Upgrade Oracle Golden Gate 12cR1 to 12cR2 — In this article, we will look at the Oracle Golden Gate upgrade from 12.1 to 12.2 on a Linux server. Make sure you allow golden gate...
    Oracle Golden Gate Bidirectional Replication — Golden Gate bidirectional replication is two-way unidirectional replication. Let us set up bidirectional replication for a single table...

  • Tablespace Point-in-time Recovery

Tablespace Point-in-time Recovery

If you already know about database point-in-time recovery (DBPITR), you know its main issue: the entire database is not accessible to users while the recovery runs.

- Why Tablespace PITR?
- How It Works
- RMAN TSPITR
- Lab Activity (Create Test Tablespace & User, Drop the Tablespace and Start TSPITR, Connect to RMAN and Start Recovery)

Why Tablespace PITR?

Assume you have a problem with only one user, and the transaction that user executed has impacted one table that resides in one tablespace or one data file. Rather than performing point-in-time recovery of the entire database, we can perform point-in-time recovery of the single tablespace to just before the transactions were issued. This way, only the affected tablespace is unavailable to users; the rest of the database remains up and running.

How It Works

TSPITR is different from DBPITR. In DBPITR, the entire database is restored back in time. In TSPITR, we take only the tablespace back in time, to before the wrong transactions were issued.

It is not a straight restore and recover. Once a tablespace is dropped, you cannot restore it via RMAN, because details of the tablespace are removed from the control file. Say somebody drops the entire tablespace and you then issue RESTORE TABLESPACE from RMAN: the command will not work, because once you drop the tablespace, the control file records that it no longer exists. Even if you try to restore it from RMAN, the tablespace will not be restored.

TSPITR Recovery Methods

There are three methods you can use to recover a dropped tablespace:

- Create a database clone with PITR before the tablespace drop, export the tablespace from the clone DB, and import it into the original database
- Perform point-in-time recovery of the entire database (this leads to downtime plus loss of data)
- Use the automated RMAN TSPITR method

RMAN TSPITR

This is a completely automated method: you run a single RECOVER TABLESPACE command and everything is taken care of. To perform RMAN TSPITR, you need the following:

- The timestamp or SCN when the tablespace was dropped (you can get this from the alert log)
- A location with as much space as your database size: on the same server, you need another mount point where a database of the same size can be cloned

When you run TSPITR, RMAN creates a clone of the database in the location you specify, exports the tablespace from the clone, imports it into the original database, and then cleans up the cloned instance. All steps are performed automatically by RMAN.

Note: Make sure the location you choose for the auxiliary (cloned) instance has the same or more space than the prod DB requires.

Lab Activity

In this activity, we will perform a tablespace point-in-time recovery. Let us start.

Create Test Tablespace & User for the Activity

create tablespace TBPITR datafile '/u01/app/oracle/oradata/proddb/tbpitr.dbf' size 10M;
create user tbtest identified by tbtest;
alter user tbtest default tablespace TBPITR;
grant connect, resource to tbtest;
grant select on scott.emp to tbtest;

Now connect via sqlplus as the tbtest user and create a new emp table from scott.emp:

SQL> conn tbtest/tbtest
SQL> create table emp as select * from scott.emp;

Take an RMAN backup for the activity. I am taking the database backup to a specific location:

backup database plus archivelog format '/u02/rman_bkp/proddb/proddb_%U';

Check the database size and make sure you have a destination with the same or more free space. Query to check the database size:

col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024 / 1024) || ' GB' "Database Size",
       round(sum(used.bytes) / 1024 / 1024 / 1024) - round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space",
       round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from (select bytes from v$datafile
      union all
      select bytes from v$tempfile
      union all
      select bytes from v$log) used,
     (select sum(bytes) as p from dba_free_space) free
group by free.p
/

Drop the Tablespace and Start the TSPITR Process

Now we drop the tablespace and then see whether we can recover it via RMAN:

drop tablespace TBPITR including contents and datafiles;

If you connect as the tbtest user and query the emp table, you will get the error below:

SQL> conn tbtest/tbtest
SQL> select * from emp;
select * from emp
*
ERROR at line 1:
ORA-00942: table or view does not exist

Let us try to restore the tablespace using RMAN. Connect to RMAN and issue the command below:

RMAN> restore tablespace TBPITR;

ERROR: RMAN-20202: Tablespace not found in recovery catalog

The reason for the above error is that the control file has already been updated to reflect that this tablespace no longer exists. Even though you have the backup, you will not be able to restore it because it has been cleared from your control file.

Open the alert log and check the time when the tablespace was dropped. In my activity, the timestamp is Mon Jan 16 11:01:54 2017. We will use Mon Jan 16 11:01:53 2017, one second before the tablespace was dropped, and restore the tablespace to that time.

Connect to RMAN and Start Recovery

Before you fire the command below, make sure the auxiliary destination has the same or more space than your original database. The auxiliary destination is the location where RMAN will automatically create the clone instance, export the tablespace from it, put it back into the original location, and then clean the auxiliary destination:

run {
  recover tablespace TBPITR until time "to_date('16-jan-17 11:01:53','dd-mon-rr hh24:mi:ss')" auxiliary destination '/u02/auxdest';
}

When you fire the above command, RMAN automatically picks a dummy instance name and starts that instance. It then restores the data files and creates a database as of the given time, just before the tablespace was dropped, so the tablespace exists inside the clone. Once the clone is done, RMAN exports the tablespace, imports it into the production database, and then deletes the cloned files and instance.

Check in the original database whether the tablespace restore is done:

select status, tablespace_name from dba_tablespaces;

Once the tablespace restore is done, it will be in offline mode. Bring it online:

alter tablespace TBPITR online;
select status, tablespace_name from dba_tablespaces;

Now check whether you can query the emp table as the tbtest user. You will be able to.
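Pulling the drop timestamp out of the alert log can also be scripted. A minimal sketch; the alert log path depends on your ORACLE_BASE, database and instance names:

#!/bin/bash
# The timestamp line precedes the DROP TABLESPACE entry in the alert log,
# so print a couple of lines of context before the match.
ALERT_LOG=/u01/app/oracle/diag/rdbms/proddb/proddb/trace/alert_proddb.log
grep -i -B 2 "drop tablespace" "$ALERT_LOG" | tail -5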

  • Difference between 12cR1 and 12cR2 Multitenant database

Difference between 12cR1 and 12cR2 Multitenant Database

Oracle introduced a lot of new features in 12c Release 2 compared to 12c Release 1. Here are some of the differences that you must know:

- Local Undo
- Cloning of PDBs
- Refreshable PDB
- PDB Relocate
- Enhancements to Unplug/Plug
- Flashback PDB
- 4k PDBs per CDB on Oracle Cloud and Oracle Exadata
- Memory Management Enhancement at PDB Level
- Per-PDB Character Sets
- Automatic Workload Repository (AWR) Data at PDB Level
- System Access
- File Access
- Lockdown Profiles
- Data Guard Enhancements
- Software as a Service (SaaS)

Local Undo

In 12cR1, there is a global or shared undo for the entire CDB. With shared undo, before performing operations such as clone or unplug, it is necessary to check for any uncommitted transactions in the source PDB. This avoids problems with transactional consistency in the PDB after the clone or plug-in operation. (If any uncommitted transactions exist, the following error is issued: ORA-65126 - Pluggable database was not closed cleanly and there are active transactions that need to be recovered. In such cases, it is necessary to open the PDB read-write and wait until the SMON process can clean them up.) In 12cR2, Oracle introduced per-PDB, or local, undo. Local undo avoids these difficulties and thereby significantly improves important operations. Additionally, it enables many of the major new capabilities of Multitenant in 12cR2, including:

- Hot Clone
- Refreshable PDB
- PDB Relocate
- Flashback PDB

Note: shared undo mode is still supported in 12cR2.

Cloning of PDBs

In 12cR1, cloning a PDB was "cold cloning": no transactions could run on the source while the clone was in progress. From 12cR2, Oracle introduced "hot cloning" of PDBs, meaning the source PDB remains available for read-write operations while you clone it. There is no need to take down the application running on the PDB involved in the cloning activity. One condition applies: archive logging must be enabled to perform hot clones. The cloning statement itself is unchanged:

create pluggable database target from source;

Refreshable PDB

Say we have a development environment that uses a copy of a production PDB. The production data keeps changing and the database size keeps growing, so refreshing the development data by dropping and recreating the PDB from production every time is a lengthy process. Refreshable PDBs give a big advantage to DBAs working with the Multitenant architecture. We can take a full clone of the source (production) PDB and treat it as a "golden master copy". From that golden master, future clones can be created in seconds or minutes, even for very large databases. The conditions are that the refreshable PDB must be open read-only, and it must be closed during the refresh operation. We then refresh the refreshable PDB from the source regularly to keep it aligned with the production version.
Refresh options:

Manual (refresh whenever you want):

ALTER SESSION SET CONTAINER=refreshable_pdb;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE REFRESH;
ALTER PLUGGABLE DATABASE OPEN READ ONLY;

Automatic (interval in minutes):

create pluggable database refreshable_pdb from source_PDB@DBLink refresh mode every 60;

This can be altered later:

ALTER PLUGGABLE DATABASE refreshable_pdb REFRESH MODE EVERY 120 MINUTES;

Note: once a PDB is made non-refreshable, it cannot be made refreshable again.

PDB Relocate

Moving PDBs between CDBs using the plug/unplug method requires downtime during the plug/unplug operations, and for large databases it takes considerable time. PDB Relocate is a new method of moving a PDB between CDBs, introduced in 12cR2. The goal is to eliminate downtime completely. Technically, PDB Relocate is built on top of Refreshable PDB, which is built on top of hot clone. PDB Relocate is achieved in two steps:

1. Relocate the PDB to the new server. This involves cloning the PDB from its original location to its desired new location. After this step completes, there are two transactionally consistent copies of the PDB, one on the source server and one on the target server. For the duration of the operation, all transactions continue on the database in its original location, and users of applications connected to the database are unaware that a relocation is going on. Technically, this is the same as creating a Refreshable PDB, as described above. All existing application connections, and new connections made during this step, continue to go to the PDB in its original location.

2. Open the PDB in its new location. This step completes the relocation and is transparent to the application user, although at the end of the operation connections to the PDB will have been switched from the original location to the new one. An important component of this step is ensuring that all transactions committed in the PDB at its source location are preserved in the target location.
Enhancements to Unplug/Plug

In 12.1, export and import of encryption keys were performed separately from the PDB unplug/plug operation. In 12.2, the migration of encryption keys is simplified through integration with the unplug/plug operation itself. Technically, the encryption keys are transported via the XML manifest file, where they are encrypted by a shared secret. This is achieved with additional clauses in the unplug and plug statements:

Unplug (executed as SYSDBA in the root container of the source CDB):

alter pluggable database PDB unplug into PDB_manifest.xml encrypt using ;

Plug (executed as SYSDBA in the root container of the destination CDB):

create pluggable database PDB using PDB_manifest.xml decrypt using ;

Flashback PDB

In Oracle 12cR1 there is no PDB-level flashback database; you have to flashback the entire CDB with all its associated PDBs, which means loss of data for all PDBs during the Flashback Database operation. Flashback PDB is fully supported in Oracle 12cR2, enabled by the local undo feature introduced in that release.

4k PDBs per CDB on Oracle Cloud and Oracle Exadata

In Oracle 12cR1 we can have 252 PDBs under one CDB. In Oracle 12cR2 we can have 4096 PDBs under one CDB (on Oracle Cloud and Oracle Exadata only). On other platforms, the limit remains 252 PDBs per CDB.

Memory Management Enhancement at PDB Level

With Oracle 12cR2 it is possible to set the following parameters at PDB level (previously modifiable only at CDB level in 12cR1):

- SGA_TARGET
- SGA_MIN_SIZE (new in 12cR2)
- DB_CACHE_SIZE
- SHARED_POOL_SIZE
- PGA_AGGREGATE_LIMIT
- PGA_AGGREGATE_TARGET

Per-PDB Character Sets

From 12cR2 it is possible to set a character set for each PDB under one CDB. It is no longer mandatory to have the same character set for the CDB and all associated PDBs, as it was in Oracle 12cR1.

Automatic Workload Repository (AWR) Data at PDB Level

In Oracle 12cR1, AWR data was stored in the CDB root container, which means that if you unplugged a PDB from one CDB into another, you lost the AWR data for that PDB. From Oracle 12cR2, AWR data is available at PDB level.

System Access

When using HOST commands from SQL*Plus to interact with the operating system from within the Oracle database, all operations are performed under the Oracle OS user's credentials. In 12cR2, Oracle introduced a new PDB-level parameter, PDB_OS_CREDENTIAL. Where appropriate, this can be used to identify an OS user (presumably one far less privileged than the Oracle user) to be used when the OS is accessed.

File Access

In 12cR2, Oracle introduced two new clauses to the CREATE PLUGGABLE DATABASE statement: PATH_PREFIX and CREATE_FILE_DEST. These control the placement of, and access to, the PDB's files by specifying mount points within the file system, so that each PDB's datafiles and directory objects are isolated in a sub-branch of the file system.

Lockdown Profiles

From 12cR2, Oracle introduced a lockdown profile mechanism to restrict certain operations or functionalities in a PDB. This new Multitenant feature is managed by a CDB administrator and can be used to restrict user access in a particular PDB.
A lockdown profile can prevent PDB users from:

- Executing certain SQL statements, such as ALTER SYSTEM and ALTER SESSION
- Running procedures that access the network (e.g. UTL_SMTP, UTL_HTTP)
- Accessing a common user's objects
- Interacting with the OS (in addition to the capabilities covered by PDB_OS_CREDENTIAL)
- Making unrestricted cross-PDB connections in a CDB
- Taking AWR snapshots
- Using Java partially or as a whole
- Using certain database options such as Advanced Queuing and Partitioning

A single lockdown profile can have several rules defined in it; in other words, you don't have to create a lockdown profile for every restriction you want to implement. A PDB can have only one lockdown profile active at a time. The restrictions enforced by a lockdown profile are PDB-wide: they affect every user, including SYS and SYSTEM.

Example commands:

SQL> create lockdown profile pdb_profile;

Lockdown Profile created.

SQL> alter lockdown profile pdb_profile disable statement=('ALTER SYSTEM') clause=('SET') option all;
SQL> alter lockdown profile pdb_profile disable statement=('ALTER SYSTEM') clause=('SET') option = ALL EXCEPT ('plsql_code_type','plsql_debug','plsql_warnings');
SQL> alter lockdown profile pdb_profile disable feature=('NETWORK_ACCESS');

Query DBA_LOCKDOWN_PROFILES to see the details of our lockdown profile:

SQL> select profile_name, rule_type, rule, clause, clause_option, status, users from DBA_LOCKDOWN_PROFILES;

Data Guard Enhancements

In 12cR1, the ENABLED_PDBS_ON_STANDBY initialization parameter only supported two values: all PDBs or none. In 12cR2, the parameter (which applies only to physical standby databases) can accept a list of PDB names or a glob pattern such as "MYPDB?", "MYPDB*a", or "MYPDB2". Glob pattern rules are similar to regular-expression rules in common UNIX shells; the asterisk (*) and question mark (?) wildcard characters are supported. The question mark represents a single unknown character; the asterisk represents any number of unknown characters.

- ENABLED_PDBS_ON_STANDBY="*" means that all PDBs will be created on the physical standby, in both Oracle 12cR1 and Oracle 12cR2
- ENABLED_PDBS_ON_STANDBY="MYPDB1*" means that MYPDB1A, MYPDB1B, and MYPDB1C will be created on the physical standby in Oracle 12cR2
- ENABLED_PDBS_ON_STANDBY="MYPDB*A" means that MYPDB1A, MYPDB2A, and MYPDB3A will be created on the physical standby in Oracle 12cR2

Software as a Service (SaaS)

With Oracle Multitenant for Software as a Service, multitenancy is implemented by the database, not the application. In 12cR2, to address these important SaaS requirements, Oracle introduced a new concept: the Application Container. From 12cR2, users can create a pluggable database as an Application Container.
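Since several of the 12cR2 features above (Hot Clone, Refreshable PDB, PDB Relocate, Flashback PDB) depend on local undo, a quick check that it is enabled is useful. A minimal sketch:

#!/bin/bash
# LOCAL_UNDO_ENABLED=TRUE means the CDB runs in per-PDB undo mode (12cR2+).
sqlplus -s / as sysdba <<'EOF'
select property_name, property_value
from database_properties
where property_name = 'LOCAL_UNDO_ENABLED';
EOF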
