

  • Interview Questions | DBA Genesis Support

    Interview Questions

    Oracle DBA Interview Questions & Answers: Explain what data is and how you store it. Data is any value which we store for future reference. There are different types of data...

    Senior Oracle DBA Interview Questions: Put your learning to the test by trying to answer the below Senior Oracle DBA interview questions. I am sure these will definitely challenge...

    Oracle Golden Gate Interview Questions: Here are some of the frequently asked Oracle Golden Gate interview questions and answers. Enjoy! Q. What is the significance of Oracle...

    Oracle SQL Interview Questions: Here are some of the frequently asked SQL interview questions. Enjoy! Q1. What is the difference between VARCHAR & VARCHAR2? Both...

    Oracle 12c Multi-tenant Interview Questions: Interview questions specific to the Oracle 12c Multi-tenant architecture. Enjoy! Q. What are the major changes in architecture for 12c? From...

  • 11gR2 Non-RAC to RAC Migration

    11gR2 Non-RAC to RAC Migration

    In this article, we will look at non-RAC to RAC migration using DBCA. I have two RAC nodes, RACN1 and RACN2, and I will show you how to migrate a single instance database running on machine DT_VM.

    Create Template Using DBCA
    Copy Template Files to RAC Node 1
    Create RAC Database From Template Files
    Verify RAC Database

    Create Template Using DBCA

    DBCA has an option to create a template. A template is basically the entire structure of the database, and you can even include the data along with the structure. We are going to create (rather, export) a DBCA template on the DT_VM machine along with the source data.

    By default, a template contains the structure of all the tables and objects of the database, and you can optionally include the table contents as well. For example, if I have a table with one hundred thousand rows, I can include those rows in the template so that when I create the database on the destination side using DBCA, I get all the objects and also the data. That is why, for small databases, DBCA can be used as a replication or cloning method instead of RMAN or other tools.

    Let us create a template from the single instance database.

    On DT_VM (non-rac database)
    ===========================
    # su - oracle
    $ echo $ORACLE_SID

    Just to make sure that this is not a RAC database, check the CLUSTER parameter:

    SQL> show parameter cluster;

    Now start DBCA and follow the below screenshots:

    On DT_VM (non-rac database)
    ===========================
    $ dbca

    Click Next. Select Manage Templates and click Next. Choose Create a Database Template and select From an existing database (structure as well as data), then click Next. Select the database which you are going to convert into a RAC database and click Next. Give the template a name, make a note of the Template Datafile location, and click Next. Choose Convert the file locations to use OFA structure and click Finish. Click OK, then click No.

    Go to the Template Datafile location:

    On DT_VM (non-rac database)
    ===========================
    $ cd /u01/app/oracle/product/11.2.0/dbhome_1/assistants/dbca/templates
    $ ls -lrt my_rac*

    We can see that three files were created. In the next step, we need to copy these files to RAC node 1.

    Copy Template Files to RAC Node 1

    Scp the DBCA template files to RAC node 1:

    On DT_VM (non-rac database)
    ===========================
    $ scp my_rac_migration.* oracle@192.168.1.50:/u01/app/oracle/product/11.2.0/dbhome_1/assistants/dbca/templates

    Create RAC Database From Template Files

    Start DBCA on RAC node 1:

    On RACN1
    ========
    $ dbca

    Select Oracle Real Application Clusters database and click Next. Choose Create a Database and click Next. Select the template that you imported from the non-RAC database server; in our case, it is my_rac_migration. Click Next. Select Admin-Managed, give the Global Database Name, click Select All, and then click Next. Disable Configure Enterprise Manager, then select Automatic Maintenance Tasks; you can keep the maintenance tasks enabled and just click Next. Give passwords for the SYS and SYSTEM accounts and click Next. Click Yes. Make sure the storage type is ASM and the Database Area is set to +DATA, then click Next. Leave the defaults on the next few screens and keep clicking Next. Make sure Create Database is selected and click Finish. Review the database summary and click OK. Your cluster database creation will now start. Once done, click Exit.

    Verify RAC Database

    Now that the migration is done, let us verify how our RAC database is running:

    On RACN1
    ========
    SQL> select instance_name, instance_number from v$instance;

    INSTANCE_NAME   INSTANCE_NUMBER
    --------------- ---------------
    racdb1                        1

    SQL> select database_name, open_mode from v$database;

    DATABASE_NAME   OPEN_MODE
    --------------- ----------
    RACDB           READ WRITE

    SQL> show parameter cluster_database;

    NAME              TYPE      VALUE
    ----------------- --------- -----------
    cluster_database  boolean   TRUE

    Run the same commands from RAC node 2:

    On RACN2
    ========
    SQL> select instance_name, instance_number from v$instance;

    INSTANCE_NAME   INSTANCE_NUMBER
    --------------- ---------------
    racdb2                        2

    SQL> select database_name, open_mode from v$database;

    DATABASE_NAME   OPEN_MODE
    --------------- ----------
    RACDB           READ WRITE

    SQL> show parameter cluster_database;

    NAME              TYPE      VALUE
    ----------------- --------- -----------
    cluster_database  boolean   TRUE

    This is how you convert a non-RAC database to RAC using DBCA. I would recommend this method only when your database is small, say below 100 GB, because a larger database will take a lot of time to convert. For databases above 100 GB, I would suggest RMAN, export/import, or restore and recover from a cold or hot backup; these other methods can also convert a non-RAC database to RAC.
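    Beyond the SQL checks above, you can also confirm the database resource from the clusterware side. A minimal sketch, assuming the database was registered with the unique name RACDB (adjust the name to your environment):

    On RACN1
    ========
    $ srvctl status database -d RACDB
    $ srvctl config database -d RACDB

    srvctl status should report one running instance per node (racdb1 and racdb2), matching the v$instance output above.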

  • Find scheduler jobs in oracle

    Find scheduler jobs in oracle

    The below command will help you check the scheduler jobs that are configured inside the database:

    SELECT JOB_NAME, STATE FROM DBA_SCHEDULER_JOBS where job_name='RMAN_BACKUP';

    Query to check the currently running scheduler jobs:

    SELECT * FROM ALL_SCHEDULER_RUNNING_JOBS;

    All DBA scheduler jobs create logs. You can run the below queries to check the details of the job logs:

    select log_id, log_date, owner, job_name
    from ALL_SCHEDULER_JOB_LOG
    where job_name like 'RMAN_B%'
    and log_date > sysdate-2;

    select log_id, log_date, owner, job_name, status, ADDITIONAL_INFO
    from ALL_SCHEDULER_JOB_LOG
    where log_id=113708;
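    For per-run history (status, duration, errors), the scheduler also keeps one row per execution in the *_SCHEDULER_JOB_RUN_DETAILS views. A minimal sketch for the same RMAN_BACKUP job over the last week:

    select job_name, status, error#, actual_start_date, run_duration
    from dba_scheduler_job_run_details
    where job_name = 'RMAN_BACKUP'
    and actual_start_date > sysdate - 7
    order by actual_start_date;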

  • Linux Firewall with iptables and firewalld

    Linux Firewall with iptables and firewalld

    This article covers iptables and firewalld, which help with Linux firewall management. We will also look at how to enable specific ports (1521 for Oracle) inside iptables.

    Linux Firewall Status
    Linux Disable Firewall
    Linux Enable Firewall
    Enable Ports in Linux

    Read more about Linux iptables vs Linux firewalld.

    Linux Firewall Status

    The below command will let you check the Linux firewall status. It will show the status as Active in case the firewall is running:

    systemctl status firewalld

    Linux Disable Firewall

    For practicing Oracle on Linux, you might need to stop the Linux firewall so that you can connect applications to the database listener. The below commands will stop and permanently disable the Linux firewall:

    service firewalld stop
    systemctl disable firewalld

    Linux Enable Firewall

    Just in case you would like to enable the Linux firewall after disabling it, use the below commands:

    service firewalld start
    systemctl enable firewalld

    Enable Ports in Linux

    On some servers, port 1521 will not be enabled by default for security reasons. You can enable this specific port inside Linux using the below commands.

    Enable 1521 Port in Linux

    If you are working on Oracle Linux 5 or 6, use the iptables command as the root user to enable a specific port:

    iptables -I INPUT -p tcp --dport 1521 -j ACCEPT

    If you would like to open any other port in Linux, just replace the port number (1521) with the new port number.

    Enable Port Range in Linux

    To open multiple port ranges in Linux, use the below command:

    iptables -A INPUT -p tcp -m multiport --dports 7101:7200,4889:4898,1159,4899:4908,7788:7809,3872,1830:1849 -j ACCEPT

    Enable Port in Oracle Linux 7

    On firewalld-based systems such as Oracle Linux 7, the below command works:

    firewall-cmd --permanent --add-port=1521/tcp
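    Two follow-up points worth noting, sketched below for stock Oracle Linux setups: an iptables rule added from the command line is lost at reboot unless you save it, and a firewalld --permanent rule only takes effect after a reload.

    # Oracle Linux 5/6: persist the current iptables rules across reboots
    service iptables save

    # Oracle Linux 7: apply the permanent rule now and verify it
    firewall-cmd --reload
    firewall-cmd --list-ports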

  • Oracle Archivelog Mode

    Oracle Archivelog Mode

    Oracle Database lets you save filled groups of redo log files to one or more offline destinations, known collectively as the archived redo log, or more simply the archive log. The process of turning redo log files into archived redo log files is called archiving. This process is only possible if the database is running in ARCHIVELOG mode.

    Checking Archivelog Mode
    Set Archivelog Destination
    Enable Archivelog Mode
    Disable Archivelog Mode
    Perform Log Switch
    Enable Archivelog Mode in RAC
    How to Estimate Archive Destination Space

    Checking Archivelog Mode

    Use the below command to check the archivelog mode inside the Oracle database:

    SQL> archive log list;

    You can also use the below command:

    SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;

    Set Archivelog Destination

    You must set a destination for the archivelog files:

    SQL> alter system set log_archive_dest_1='location=/u01/proddb/arch';

    Enable Archivelog Mode

    Please note that in order to enable archivelog mode, you must bounce the database:

    SQL> shut immediate;
    SQL> startup mount;
    SQL> alter database archivelog;
    SQL> alter database open;
    SQL> archive log list;

    Disable Archivelog Mode

    The database must be bounced even when you want to disable archivelog mode:

    SQL> shut immediate;
    SQL> startup mount;
    SQL> alter database noarchivelog;
    SQL> alter database open;
    SQL> archive log list;

    Perform Log Switch

    While your database is running in archivelog mode, you can force a log switch. This will archive the current redo log file and force LGWR to start writing to the next redo log group:

    SQL> alter system switch logfile;

    Enable Archivelog Mode in RAC

    Let us check the db_recovery_file_dest_size parameter and add space to it:

    SQL> show parameter recovery;
    SQL> alter system set db_recovery_file_dest_size='20G' scope=both sid='*';

    Note that if LOG_ARCHIVE_DEST_1 points to the fast recovery area (DB_RECOVERY_FILE_DEST), LOG_ARCHIVE_FORMAT is ignored; if it points to a regular disk group or file system location, LOG_ARCHIVE_FORMAT takes effect.

    On Node 1
    =========
    ALTER SYSTEM SET log_archive_dest_1='location=+FRA/RAC/ARCH/' SCOPE=spfile;
    ALTER SYSTEM SET log_archive_format='arch_%t_%s_%r.arc' SCOPE=spfile;

    $ srvctl stop database -d RAC
    $ sqlplus / as sysdba
    SQL> startup mount;
    SQL> alter database archivelog;
    SQL> alter database open;
    SQL> select log_mode from v$database;

    How to Estimate Archive Destination Space

    The below query reports archive generation in the Oracle database. Use it to find the archive space requirements and estimate the archive destination size:

    SELECT A.*, Round(A.Count#*B.AVG#/1024/1024/1024) Daily_Avg_gb
    FROM (SELECT To_Char(First_Time,'YYYY-MM-DD') DAY,
                 Count(1) Count#,
                 Min(RECID) Min#,
                 Max(RECID) Max#
          FROM v$log_history
          GROUP BY To_Char(First_Time,'YYYY-MM-DD')
          ORDER BY 1) A,
         (SELECT Avg(BYTES) AVG#,
                 Count(1) Count#,
                 Max(BYTES) Max_Bytes,
                 Min(BYTES) Min_Bytes
          FROM v$log) B;

    The above query lists the total number of archives generated per day along with the total size in GB. You can easily get the average archive size in GB per day and then multiply it by 30 to get the archive destination space requirement for the next month.
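    As a cross-check on the estimate above, you can also size the destination from the actual archived log sizes rather than the online log size. A rough sketch using v$archived_log, where blocks * block_size gives each log's real size (note that on systems with multiple archive destinations each log appears once per destination, which would overstate the result):

    SELECT ROUND(AVG(daily_gb) * 30, 1) est_month_gb
    FROM (SELECT TRUNC(first_time) day,
                 SUM(blocks * block_size)/1024/1024/1024 daily_gb
          FROM v$archived_log
          GROUP BY TRUNC(first_time));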

  • ASM Related Background Process

    ASM Related Background Process

    The Oracle ASM instance is built on the same architecture as the Oracle database instance, and most of the ASM background processes are the same as the database background processes.

    ASM Background Process in DB Instance
    ASM Background Process in ASM Instance

    ASM Background Process in DB Instance

    In an Oracle database that uses ASM disks, two additional background processes exist:

    RBAL
    ASMB

    ASMB performs communication between the database and the ASM instance. RBAL performs the opening and closing of the disks in the disk groups on behalf of the Oracle database. This RBAL is the same process as in the ASM instance, but here it performs a different function.

    To find the ASM background processes in an Oracle database instance, connect to the database and issue the below query:

    SQL> select sid, serial#, process, name, description from v$session join v$bgprocess using(paddr);

    Note the ASMB and RBAL processes in the above list. You can even check the process at the OS level using its process id (ps -ef | grep <pid>).

    ASM Background Process in ASM Instance

    Oracle first introduced two new background processes in the 10g version:

    RBAL
    ARBn

    RBAL coordinates rebalancing when a disk is added or removed. ARBn performs the actual extent movement between the disks in the disk group.

    To find the ASM background processes inside the ASM instance, connect to the ASM instance and issue the below query:

    sqlplus / as sysasm
    SQL> select sid, serial#, process, name, description from v$session join v$bgprocess using(paddr);

    Note: The ARBn processes are started only when a rebalance operation is running inside a disk group.
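    Since ARBn appears only while a rebalance is running, you can check for an active rebalance from the ASM instance using the v$asm_operation view; a minimal sketch:

    sqlplus / as sysasm
    SQL> select group_number, operation, state, power, sofar, est_work, est_minutes
         from v$asm_operation;

    No rows returned means no rebalance is in progress, and hence no ARBn processes will be running.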

  • Database Normalization

    Database Normalization

    Database normalization is the process of refining data in accordance with a series of normal forms. This is done to reduce data redundancy and improve data integrity. The process divides large tables into small tables and links them using relationships.

    The concept of normalization was invented by Edgar Codd, who introduced First Normal Form before moving ahead with other normal forms such as Second and Third Normal Form.

    Normalization Forms
    Key Terms
    Step by Step Normalization Example
    1NF - First Normal Form
    2NF - Second Normal Form
    3NF - Third Normal Form
    Assignment

    There are further extensions to the theory of normalization, and it is still being developed. There is even a 6th Normal Form, but in most practical scenarios normalization achieves its best shape in Third Normal Form.

    Key Terms

    Column: Attribute
    Row: Tuple
    Table: Relation
    Entity: Any real-world object that makes sense

    Step by Step Normalization Example

    Let us look at a library table that maintains all the books they rent out in one single table. Now let us push this data through the various normal forms and see how we can refine it.

    1NF - First Normal Form

    The rules of the first normal form are:

    Each table cell should contain a single/atomic value
    Every record in the table must be unique

    Let us first check the Books_Main_Table against 1NF. As per the 1NF rules, our Books_Main_Table looks good. Before we proceed with 2NF and 3NF, we need to understand key columns.

    Key / Non-Key Columns

    Any column (or group of columns) in a table which can uniquely identify each row is known as a key column. Examples:

    Phone number
    Email id
    Student roll number
    Employee id

    These are columns that will always remain unique for every record in the table. Such columns are known as key columns. Any column apart from the key columns is known as a non-key column.

    Primary Key

    A primary key is a single column value which uniquely identifies each record in a table. In an RDBMS, a primary key must satisfy the below:

    A primary key must be unique
    A primary key cannot be null
    Every record must have a primary key value

    Composite Key

    Sometimes it is hard to identify unique records with one single column. In such cases, two or more columns together can uniquely identify each record in a table. Such columns are known as a composite key. For example:

    Name + Address
    First Name + DOB + Father Name

    Now that we know about key and non-key columns, let us move to 2NF.

    2NF - Second Normal Form

    The rules of the second normal form are:

    The table must be in 1NF
    Every non-key attribute must be fully dependent on the key attributes

    Our Books_Main_Table does not have any primary key; in such cases, we have to introduce a new key column like Membership ID. To bring Books_Main_Table into 2NF, we need to see how the columns are related:

    A Membership ID has a salutation, name, and address
    A Membership ID has books issued in its name

    With this logic in mind, we divide Books_Main_Table into two tables. Membership ID now appears in both tables, but in Membership_Details_Table it is a primary key column, while in Books_Issued_Table it is a non-key column.

    Foreign Key

    So far we have seen the primary key and the composite key. A foreign key refers to the primary key column of another table. This helps in connecting two tables (and defines a relation between them).

    A foreign key must satisfy the below:

    The foreign key column name can be different from the primary key column name
    Unlike a primary key, it need not be unique (see Books_Issued_Table above)
    A foreign key column can be null even though a primary key column cannot

    Reason for a Foreign Key

    When a user tries to insert a record into Books_Issued_Table with a Membership ID that does not exist in the parent Membership_Details_Table, the insert will be rejected and the database will throw an error. This way, we maintain data integrity in the RDBMS.

    3NF - Third Normal Form

    The rules of the third normal form are:

    The data must be in 2NF
    No transitive functional dependencies

    What is a transitive dependency? In simple terms, if changing a non-key column causes any other non-key column to change, it is called a transitive dependency. In our example, if we change the Full Name of the customer, it might change the Salutation.

    Final 3NF Tables

    To move Membership_Details_Table into 3NF, we further divide it, splitting the salutation out into a new Salutation_Table (see the SQL sketch at the end of this article).

    Assignment

    If you look at Books_Issued_Table, it still does not have a key column. What do you think should be the key column for Books_Issued_Table? Or do we need to introduce a new column?

    Further Read

    Boyce Codd Normal Form (BCNF)
    Fifth Normal Form (5NF)
    Sixth Normal Form (6NF)
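    To make the final 3NF layout concrete, here is a minimal SQL sketch of the three tables. Only the table names come from the example above; the column names and sizes are assumptions for illustration:

    CREATE TABLE Salutation_Table (
      salutation_id NUMBER PRIMARY KEY,       -- key column
      salutation    VARCHAR2(5) NOT NULL      -- MR, MS, DR ...
    );

    CREATE TABLE Membership_Details_Table (
      membership_id NUMBER PRIMARY KEY,       -- key column introduced for 2NF
      full_name     VARCHAR2(50) NOT NULL,
      address       VARCHAR2(100),
      salutation_id NUMBER REFERENCES Salutation_Table(salutation_id)  -- 3NF: salutation split out
    );

    CREATE TABLE Books_Issued_Table (
      membership_id NUMBER REFERENCES Membership_Details_Table(membership_id),  -- foreign key, need not be unique
      book_title    VARCHAR2(50)
    );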

  • Configure Golden Gate Initial Load and Change Sync

    Configure Golden Gate Initial Load and Change Sync

    Oracle Golden Gate initial load is the process of extracting data records from the source database and loading them into the target database. Initial load is a data migration process that is performed only once.

    Create Sample Table
    Configure Change Sync
    Configure Initial Load
    Start Initial Load & Change Sync
    Real-Time Initial Load Process

    Create Sample Table

    Let us create an EMP table from SCOTT.EMP for the FOX user:

    On Proddb:
    ==========
    sqlplus / as sysdba
    Create table fox.emp as select * from scott.emp;
    SQL> alter table fox.emp add primary key ("EMPNO");

    On the target, just create the EMP table without any data in it. Generate the FOX.EMP table DDL:

    On Proddb:
    ==========
    set heading off;
    set echo off;
    set pages 999;
    set long 90000;
    select dbms_metadata.get_ddl('TABLE','EMP','FOX') from dual;

    In the above output, change FOX to TOM and execute it on the target ggdev.

    Configure Change Sync

    First, we have to configure change sync for the FOX.EMP table. Connect to the database via Golden Gate:

    On proddb:
    ==========
    cd $GG_HOME
    ./ggsci
    GGSCI> dblogin userid ogg, password ogg
    Successfully logged into database.

    Add table level supplemental logging via Golden Gate:

    On GGPROD:
    ==========
    GGSCI> add trandata FOX.EMP
    Logging of supplemental redo data enabled for table FOX.EMP.
    TRANDATA for scheduling columns has been added on table 'FOX.EMP'.

    Create the GG extract process:

    On GGPROD:
    ==========
    GGSCI> ADD EXTRACT PFOXE1, INTEGRATED TRANLOG, BEGIN NOW
    EXTRACT (Integrated) added.
    GGSCI> register extract PFOXE1 database

    Create the local trail file for the extract process:

    GGSCI> add exttrail /u01/app/oracle/product/gg/dirdat/pf, extract PFOXE1

    Create the parameter file for the extract process:

    GGSCI> edit param PFOXE1

    EXTRACT PFOXE1
    USERID ogg, PASSWORD ogg
    EXTTRAIL /u01/app/oracle/product/gg/dirdat/pf
    TABLE FOX.EMP;

    Create the GG data pump process:

    GGSCI> Add extract PFOXD1, EXTTRAILSOURCE /u01/app/oracle/product/gg/dirdat/pf

    Create the remote trail file for the data pump process:

    GGSCI> Add rmttrail /u01/app/oracle/product/gg/dirdat/rf, extract PFOXD1

    Create the parameter file for the data pump process:

    GGSCI> edit param PFOXD1

    EXTRACT PFOXD1
    USERID ogg, PASSWORD ogg
    RMTHOST ggdev, MGRPORT 7809
    RMTTRAIL /u01/app/oracle/product/gg/dirdat/rf
    TABLE FOX.EMP;

    Create the GG replicat on the target:

    On GGDEV:
    =========
    GGSCI> dblogin userid ogg, password ogg
    GGSCI> add replicat DFOXR1, integrated exttrail /u01/app/oracle/product/gg/dirdat/rf

    Create the parameter file for the replicat on the target:

    GGSCI> edit param DFOXR1

    REPLICAT DFOXR1
    USERID ogg, PASSWORD ogg
    ASSUMETARGETDEFS
    MAP FOX.EMP, TARGET TOM.EMP;

    Start the manager on source and target:

    On GGPROD:
    ==========
    GGSCI> start mgr

    On GGDEV:
    =========
    GGSCI> start mgr

    Configure Golden Gate Initial Load

    Now we need to configure the Golden Gate initial load extract and replicat.

    Add the initial load extract on the source:

    On Proddb:
    ==========
    GGSCI> ADD EXTRACT INITLE, SOURCEISTABLE

    Edit the parameter file for the initial load extract:

    GGSCI> EDIT PARAM INITLE

    EXTRACT INITLE
    userid ogg, password ogg
    RMTHOST ggdev, mgrport 7809
    RMTTASK REPLICAT, GROUP INITLR
    TABLE FOX.EMP;

    Add the initial load replicat on the target:

    On Devdb:
    =========
    GGSCI> ADD REPLICAT INITLR, SPECIALRUN

    Edit the parameter file for the initial load replicat:

    GGSCI> EDIT PARAM INITLR

    REPLICAT INITLR
    userid ogg, password ogg
    ASSUMETARGETDEFS
    MAP FOX.EMP, TARGET TOM.EMP;

    Start Initial Load & Change Sync

    First start the change sync extract and data pump on the source. This will start capturing changes while we perform the initial load. Do not start the replicat at this point:

    On proddb:
    ==========
    GGSCI> start PFOXE1
    GGSCI> start PFOXD1

    Now start the initial load extract. Remember, this will automatically start the initial load replicat on the target:

    On proddb:
    ==========
    GGSCI> start INITLE
    GGSCI> INFO INITLE

    EXTRACT INITLE Last Started 2016-01-11 15:59 Status STOPPED
    Checkpoint Lag Not Available
    Log Read Checkpoint Table FOX.EMP
    2016-01-11 15:59:57 Record 14
    Task SOURCEISTABLE

    Verify on the target whether all 14 records have been loaded into the target table:

    On Devdb:
    =========
    sqlplus / as sysdba
    select * from tom.emp;

    Now start the change sync replicat:

    On Devdb:
    =========
    GGSCI> start DFOXR1

    Note: At this stage you can delete the initial load extract and replicat processes as they are no longer needed.

    If you get the below error while starting the initial load extract:

    2017-09-18 12:23:40 ERROR OGG-01201 Error reported by MGR : Access denied.
    2017-09-18 12:23:40 ERROR OGG-01668 PROCESS ABENDING.

    Add the below line to the ggdev manager parameter file and refresh the manager:

    ACCESSRULE, PROG *, IPADDR *, ALLOW

    GGSCI> refresh mgr

    Real-Time Initial Load Process

    The real-time initial load is a little different from our previous initial load activity. Let us create a DEPT table from SCOTT.DEPT for the FOX user:

    On Proddb:
    ==========
    sqlplus / as sysdba
    Create table fox.dept as select * from scott.dept;
    SQL> alter table fox.dept add primary key ("DEPTNO");

    On the target, just create the DEPT table without any data in it. Generate the FOX.DEPT table DDL:

    On Proddb:
    ==========
    set heading off;
    set echo off;
    set pages 999;
    set long 90000;
    select dbms_metadata.get_ddl('TABLE','DEPT','FOX') from dual;

    In the above output, change FOX to TOM and execute it on the target ggdev.

    Configure Change Sync

    Connect to the database via Golden Gate:

    On proddb:
    ==========
    cd $GG_HOME
    ./ggsci
    GGSCI> dblogin userid ogg, password ogg
    Successfully logged into database.

    Add table level supplemental logging via Golden Gate:

    On GGPROD:
    ==========
    GGSCI> add trandata FOX.DEPT
    Logging of supplemental redo data enabled for table FOX.DEPT.
    TRANDATA for scheduling columns has been added on table 'FOX.DEPT'.

    Create the GG extract process:

    On GGPROD:
    ==========
    GGSCI> ADD EXTRACT PFOXE2, INTEGRATED TRANLOG, BEGIN NOW
    EXTRACT (Integrated) added.
    GGSCI> register extract PFOXE2 database

    Create the local trail file for the extract process:

    GGSCI> add exttrail /u01/app/oracle/product/gg/dirdat/p2, extract PFOXE2

    Create the parameter file for the extract process:

    GGSCI> edit param PFOXE2

    EXTRACT PFOXE2
    USERID ogg, PASSWORD ogg
    EXTTRAIL /u01/app/oracle/product/gg/dirdat/p2
    TABLE FOX.DEPT;

    Create the GG data pump process:

    GGSCI> Add extract PFOXD2, EXTTRAILSOURCE /u01/app/oracle/product/gg/dirdat/p2

    Create the remote trail file for the data pump process:

    GGSCI> Add rmttrail /u01/app/oracle/product/gg/dirdat/r2, extract PFOXD2

    Create the parameter file for the data pump process:

    GGSCI> edit param PFOXD2

    EXTRACT PFOXD2
    USERID ogg, PASSWORD ogg
    RMTHOST ggdev, MGRPORT 7809
    RMTTRAIL /u01/app/oracle/product/gg/dirdat/r2
    TABLE FOX.DEPT;

    Create the GG replicat on the target:

    On GGDEV:
    =========
    GGSCI> dblogin userid ogg, password ogg
    GGSCI> add replicat DFOXR2, integrated exttrail /u01/app/oracle/product/gg/dirdat/r2

    Create the parameter file for the replicat on the target:

    GGSCI> edit param DFOXR2

    REPLICAT DFOXR2
    USERID ogg, PASSWORD ogg
    ASSUMETARGETDEFS
    MAP FOX.DEPT, TARGET TOM.DEPT;

    Start the manager on source and target:

    On GGPROD:
    ==========
    GGSCI> start mgr

    On GGDEV:
    =========
    GGSCI> start mgr

    Configure Initial Load

    Add the initial load extract on the source:

    On Proddb:
    ==========
    GGSCI> ADD EXTRACT INITLE2, SOURCEISTABLE

    Edit the parameter file for the initial load extract:

    GGSCI> EDIT PARAM INITLE2

    EXTRACT INITLE2
    userid ogg, password ogg
    RMTHOST ggdev, mgrport 7809
    RMTTASK REPLICAT, GROUP INITLR2
    TABLE FOX.DEPT;

    Add the initial load replicat on the target:

    On Devdb:
    =========
    GGSCI> ADD REPLICAT INITLR2, SPECIALRUN

    Edit the parameter file for the initial load replicat:

    GGSCI> EDIT PARAM INITLR2

    REPLICAT INITLR2
    userid ogg, password ogg
    ASSUMETARGETDEFS
    MAP FOX.DEPT, TARGET TOM.DEPT;

    Start Initial Load & Change Sync

    First start the change sync extract and data pump on the source. This will start capturing changes while we perform the initial load. Do not start the replicat at this point:

    On proddb:
    ==========
    GGSCI> start PFOXE2
    GGSCI> start PFOXD2

    At this stage, capture the database SCN. We will start the replicat on the target from this SCN onwards:

    On proddb:
    ==========
    SQL> select current_scn from v$database;

    Let us make an update to the DEPT table. This update will be captured by both the initial load and the change capture; later we will analyze how GG handles the conflict:

    On proddb:
    ==========
    sqlplus fox/fox
    update dept set loc='INDIA' where deptno=30;
    commit;

    Now start the initial load extract. Remember, this will automatically start the initial load replicat on the target:

    On proddb:
    ==========
    GGSCI> start INITLE2
    GGSCI> INFO INITLE2

    EXTRACT INITLE2 Last Started 2016-01-11 15:59 Status STOPPED
    Checkpoint Lag Not Available
    Log Read Checkpoint Table FOX.DEPT
    2016-01-11 15:59:57 Record 4
    Task SOURCEISTABLE

    Verify on the target whether all 4 records have been loaded into the target table:

    On Devdb:
    =========
    sqlplus / as sysdba
    select * from tom.dept;

    Let us make another change to the DEPT table; if everything goes well, we must see this change after starting the replicat:

    On proddb:
    ==========
    sqlplus fox/fox
    update dept set loc='US' where deptno=40;
    commit;

    Now start the change sync replicat, supplying the SCN captured earlier in place of <scn>:

    On Devdb:
    =========
    GGSCI> start DFOXR2, aftercsn <scn>

    Note: At this stage you can delete the initial load extract and replicat processes as they are no longer needed.
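    Once both change sync pairs are running, a quick sanity check is available from GGSCI itself. A minimal sketch using standard GGSCI status commands (the group names are the ones created above):

    On GGPROD:
    ==========
    GGSCI> info all
    GGSCI> stats extract PFOXE2

    On GGDEV:
    =========
    GGSCI> info all
    GGSCI> stats replicat DFOXR2

    info all should list the manager and each extract, data pump, and replicat as RUNNING; stats reports the insert/update/delete operations processed by a group, so the source and target counts should match once the lag drains.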

  • Linux Project - Monitor Server Disk Space

    Linux Project - Monitor Server Disk Space

    In this Linux project we will write a shell script that monitors disk space and logs the information to a file every 20 minutes.

    Setting up Oracle Linux 7 on Virtual Box
    How to Find Disk Free Space
    Script to Log Disk Space
    Schedule Script via crontab

    Setting up Oracle Linux 7 on Virtual Box

    Follow these detailed steps for the exact process we consistently use to create a virtual machine (VM) and practice Oracle on Oracle VirtualBox: Step-by-Step Guide: Setting Up Oracle Linux 7 on Oracle VirtualBox.

    How to Find Disk Free Space?

    We use the below command to find the free disk space on Linux:

    df -h

    Script to Log Disk Space

    Creating a script to log disk space simplifies monitoring and helps manage storage effectively, ensuring timely updates on disk utilization.

    Create a scripts folder to store and organise the various scripts:

    mkdir /tmp/scripts

    Create the storage space log file, which will store the storage information:

    touch /tmp/scripts/storage_space.log

    Set permissions on the log file:

    chmod 777 /tmp/scripts/storage_space.log

    Save the below shell script as /tmp/scripts/get_storage.sh. It executes df -h to retrieve free disk space information and appends the result to storage_space.log with a timestamp for each entry:

    #!/bin/bash
    # To find the free disk space and save it in a log file
    echo "********************************************" >> /tmp/scripts/storage_space.log
    date >> /tmp/scripts/storage_space.log
    echo "********************************************" >> /tmp/scripts/storage_space.log
    df -h >> /tmp/scripts/storage_space.log
    # To insert a space between each log entry
    echo >> /tmp/scripts/storage_space.log

    Make the shell script executable:

    chmod 777 /tmp/scripts/get_storage.sh

    Schedule Script via crontab

    Create a crontab entry to automate script execution at 20-minute intervals:

    crontab -e
    */20 * * * * /tmp/scripts/get_storage.sh
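    The script above only records usage; a common next step is alerting when a filesystem crosses a threshold. A minimal sketch (the 90% threshold and reuse of storage_space.log are arbitrary choices, not part of the original project):

    #!/bin/bash
    # Warn when any mounted filesystem reaches the threshold
    THRESHOLD=90
    df -hP | tail -n +2 | while read fs size used avail pcent mount; do
        usage=${pcent%\%}
        if [ "$usage" -ge "$THRESHOLD" ]; then
            echo "$(date): WARNING: ${mount} is at ${pcent}" >> /tmp/scripts/storage_space.log
        fi
    done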

  • Mount EFS on EC2 Instance

    Mount EFS on EC2 Instance

    You might have created an Elastic File System (EFS) and now would like to mount it on an EC2 instance.

    Get EFS Link
    Mount EFS on EC2
    Add to /etc/fstab

    Get EFS Link

    Check the EFS details and you should get a link which looks like the below:

    fs-06b8137f.efs.us-east-2.amazonaws.com:/

    Mount EFS on EC2

    Create a directory on your server where you will mount the EFS:

    mkdir /efs

    Mount the EFS on the /efs directory:

    mount -t nfs4 fs-06b8137f.efs.us-east-2.amazonaws.com:/ /efs

    Check if the mount succeeded:

    df -hT /efs

    Add to /etc/fstab

    Make the mount permanent by adding it to the /etc/fstab file:

    vi /etc/fstab
    fs-06b8137f.efs.us-east-2.amazonaws.com:/ /efs nfs4 _netdev 0 0
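    Before relying on the fstab entry across reboots, you can validate it in place. A quick check, assuming nothing is currently using the mount:

    umount /efs
    mount -a
    df -hT /efs

    mount -a mounts everything listed in /etc/fstab that is not already mounted, so if df shows /efs again the entry is well-formed.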

  • Script to create JUSTLEE schema in Oracle

    Script to create JUSTLEE schema in Oracle

    Here is the script to create the JUSTLEE schema in Oracle. Connect as a sysdba user and execute the below commands to create the JUSTLEE schema:

    create user justlee identified by justlee;
    alter user justlee quota unlimited on users;
    grant connect, resource, create session to justlee;

    conn justlee/justlee;

    CREATE TABLE Customers
    (Customer# NUMBER(4),
     LastName VARCHAR2(10) NOT NULL,
     FirstName VARCHAR2(10) NOT NULL,
     Address VARCHAR2(20),
     City VARCHAR2(12),
     State VARCHAR2(2),
     Zip VARCHAR2(5),
     Referred NUMBER(4),
     Region CHAR(2),
     CONSTRAINT customers_customer#_pk PRIMARY KEY(customer#),
     CONSTRAINT customers_region_ck CHECK (region IN ('N', 'NW', 'NE', 'S', 'SE', 'SW', 'W', 'E')));

    INSERT INTO CUSTOMERS VALUES (1001, 'MORALES', 'BONITA', 'P.O. BOX 651', 'EASTPOINT', 'FL', '32328', NULL, 'SE');
    INSERT INTO CUSTOMERS VALUES (1002, 'THOMPSON', 'RYAN', 'P.O. BOX 9835', 'SANTA MONICA', 'CA', '90404', NULL, 'W');
    INSERT INTO CUSTOMERS VALUES (1003, 'SMITH', 'LEILA', 'P.O. BOX 66', 'TALLAHASSEE', 'FL', '32306', NULL, 'SE');
    INSERT INTO CUSTOMERS VALUES (1004, 'PIERSON', 'THOMAS', '69821 SOUTH AVENUE', 'BOISE', 'ID', '83707', NULL, 'NW');
    INSERT INTO CUSTOMERS VALUES (1005, 'GIRARD', 'CINDY', 'P.O. BOX 851', 'SEATTLE', 'WA', '98115', NULL, 'NW');
    INSERT INTO CUSTOMERS VALUES (1006, 'CRUZ', 'MESHIA', '82 DIRT ROAD', 'ALBANY', 'NY', '12211', NULL, 'NE');
    INSERT INTO CUSTOMERS VALUES (1007, 'GIANA', 'TAMMY', '9153 MAIN STREET', 'AUSTIN', 'TX', '78710', 1003, 'SW');
    INSERT INTO CUSTOMERS VALUES (1008, 'JONES', 'KENNETH', 'P.O. BOX 137', 'CHEYENNE', 'WY', '82003', NULL, 'N');
    INSERT INTO CUSTOMERS VALUES (1009, 'PEREZ', 'JORGE', 'P.O. BOX 8564', 'BURBANK', 'CA', '91510', 1003, 'W');
    INSERT INTO CUSTOMERS VALUES (1010, 'LUCAS', 'JAKE', '114 EAST SAVANNAH', 'ATLANTA', 'GA', '30314', NULL, 'SE');
    INSERT INTO CUSTOMERS VALUES (1011, 'MCGOVERN', 'REESE', 'P.O. BOX 18', 'CHICAGO', 'IL', '60606', NULL, 'N');
    INSERT INTO CUSTOMERS VALUES (1012, 'MCKENZIE', 'WILLIAM', 'P.O. BOX 971', 'BOSTON', 'MA', '02110', NULL, 'NE');
    INSERT INTO CUSTOMERS VALUES (1013, 'NGUYEN', 'NICHOLAS', '357 WHITE EAGLE AVE.', 'CLERMONT', 'FL', '34711', 1006, 'SE');
    INSERT INTO CUSTOMERS VALUES (1014, 'LEE', 'JASMINE', 'P.O. BOX 2947', 'CODY', 'WY', '82414', NULL, 'N');
    INSERT INTO CUSTOMERS VALUES (1015, 'SCHELL', 'STEVE', 'P.O. BOX 677', 'MIAMI', 'FL', '33111', NULL, 'SE');
    INSERT INTO CUSTOMERS VALUES (1016, 'DAUM', 'MICHELL', '9851231 LONG ROAD', 'BURBANK', 'CA', '91508', 1010, 'W');
    INSERT INTO CUSTOMERS VALUES (1017, 'NELSON', 'BECCA', 'P.O. BOX 563', 'KALMAZOO', 'MI', '49006', NULL, 'N');
    INSERT INTO CUSTOMERS VALUES (1018, 'MONTIASA', 'GREG', '1008 GRAND AVENUE', 'MACON', 'GA', '31206', NULL, 'SE');
    INSERT INTO CUSTOMERS VALUES (1019, 'SMITH', 'JENNIFER', 'P.O. BOX 1151', 'MORRISTOWN', 'NJ', '07962', 1003, 'NE');
    INSERT INTO CUSTOMERS VALUES (1020, 'FALAH', 'KENNETH', 'P.O. BOX 335', 'TRENTON', 'NJ', '08607', NULL, 'NE');

    CREATE TABLE Orders
    (Order# NUMBER(4),
     Customer# NUMBER(4),
     OrderDate DATE NOT NULL,
     ShipDate DATE,
     ShipStreet VARCHAR2(18),
     ShipCity VARCHAR2(15),
     ShipState VARCHAR2(2),
     ShipZip VARCHAR2(5),
     ShipCost NUMBER(4,2),
     CONSTRAINT orders_order#_pk PRIMARY KEY(order#),
     CONSTRAINT orders_customer#_fk FOREIGN KEY (customer#) REFERENCES customers(customer#));

    INSERT INTO ORDERS VALUES (1000,1005,TO_DATE('31-MAR-09','DD-MON-YY'),TO_DATE('02-APR-09','DD-MON-YY'),'1201 ORANGE AVE','SEATTLE','WA','98114',2.00);
    INSERT INTO ORDERS VALUES (1001,1010,TO_DATE('31-MAR-09','DD-MON-YY'),TO_DATE('01-APR-09','DD-MON-YY'),'114 EAST SAVANNAH','ATLANTA','GA','30314',3.00);
    INSERT INTO ORDERS VALUES (1002,1011,TO_DATE('31-MAR-09','DD-MON-YY'),TO_DATE('01-APR-09','DD-MON-YY'),'58 TILA CIRCLE','CHICAGO','IL','60605',3.00);
    INSERT INTO ORDERS VALUES (1003,1001,TO_DATE('01-APR-09','DD-MON-YY'),TO_DATE('01-APR-09','DD-MON-YY'),'958 MAGNOLIA LANE','EASTPOINT','FL','32328',4.00);
    INSERT INTO ORDERS VALUES (1004,1020,TO_DATE('01-APR-09','DD-MON-YY'),TO_DATE('05-APR-09','DD-MON-YY'),'561 ROUNDABOUT WAY','TRENTON','NJ','08601',NULL);
    INSERT INTO ORDERS VALUES (1005,1018,TO_DATE('01-APR-09','DD-MON-YY'),TO_DATE('02-APR-09','DD-MON-YY'),'1008 GRAND AVENUE','MACON','GA','31206',2.00);
    INSERT INTO ORDERS VALUES (1006,1003,TO_DATE('01-APR-09','DD-MON-YY'),TO_DATE('02-APR-09','DD-MON-YY'),'558A CAPITOL HWY.','TALLAHASSEE','FL','32307',2.00);
    INSERT INTO ORDERS VALUES (1007,1007,TO_DATE('02-APR-09','DD-MON-YY'),TO_DATE('04-APR-09','DD-MON-YY'),'9153 MAIN STREET','AUSTIN','TX','78710',7.00);
    INSERT INTO ORDERS VALUES (1008,1004,TO_DATE('02-APR-09','DD-MON-YY'),TO_DATE('03-APR-09','DD-MON-YY'),'69821 SOUTH AVENUE','BOISE','ID','83707',3.00);
    INSERT INTO ORDERS VALUES (1009,1005,TO_DATE('03-APR-09','DD-MON-YY'),TO_DATE('05-APR-09','DD-MON-YY'),'9 LIGHTENING RD.','SEATTLE','WA','98110',NULL);
    INSERT INTO ORDERS VALUES (1010,1019,TO_DATE('03-APR-09','DD-MON-YY'),TO_DATE('04-APR-09','DD-MON-YY'),'384 WRONG WAY HOME','MORRISTOWN','NJ','07960',2.00);
    INSERT INTO ORDERS VALUES (1011,1010,TO_DATE('03-APR-09','DD-MON-YY'),TO_DATE('05-APR-09','DD-MON-YY'),'102 WEST LAFAYETTE','ATLANTA','GA','30311',2.00);
    INSERT INTO ORDERS VALUES (1012,1017,TO_DATE('03-APR-09','DD-MON-YY'),NULL,'1295 WINDY AVENUE','KALMAZOO','MI','49002',6.00);
    INSERT INTO ORDERS VALUES (1013,1014,TO_DATE('03-APR-09','DD-MON-YY'),TO_DATE('04-APR-09','DD-MON-YY'),'7618 MOUNTAIN RD.','CODY','WY','82414',2.00);
    INSERT INTO ORDERS VALUES (1014,1007,TO_DATE('04-APR-09','DD-MON-YY'),TO_DATE('05-APR-09','DD-MON-YY'),'9153 MAIN STREET','AUSTIN','TX','78710',3.00);
    INSERT INTO ORDERS VALUES (1015,1020,TO_DATE('04-APR-09','DD-MON-YY'),NULL,'557 GLITTER ST.','TRENTON','NJ','08606',2.00);
    INSERT INTO ORDERS VALUES (1016,1003,TO_DATE('04-APR-09','DD-MON-YY'),NULL,'9901 SEMINOLE WAY','TALLAHASSEE','FL','32307',2.00);
    INSERT INTO ORDERS VALUES (1017,1015,TO_DATE('04-APR-09','DD-MON-YY'),TO_DATE('05-APR-09','DD-MON-YY'),'887 HOT ASPHALT ST','MIAMI','FL','33112',3.00);
    INSERT INTO ORDERS VALUES (1018,1001,TO_DATE('05-APR-09','DD-MON-YY'),NULL,'95812 HIGHWAY 98','EASTPOINT','FL','32328',NULL);
    INSERT INTO ORDERS VALUES (1019,1018,TO_DATE('05-APR-09','DD-MON-YY'),NULL,'1008 GRAND AVENUE','MACON','GA','31206',2.00);
    INSERT INTO ORDERS VALUES (1020,1008,TO_DATE('05-APR-09','DD-MON-YY'),NULL,'195 JAMISON LANE','CHEYENNE','WY','82003',2.00);

    CREATE TABLE Publisher
    (PubID NUMBER(2),
     Name VARCHAR2(23),
     Contact VARCHAR2(15),
     Phone VARCHAR2(12),
     CONSTRAINT publisher_pubid_pk PRIMARY KEY(pubid));

    INSERT INTO PUBLISHER VALUES(1,'PRINTING IS US','TOMMIE SEYMOUR','000-714-8321');
    INSERT INTO PUBLISHER VALUES(2,'PUBLISH OUR WAY','JANE TOMLIN','010-410-0010');
    INSERT INTO PUBLISHER VALUES(3,'AMERICAN PUBLISHING','DAVID DAVIDSON','800-555-1211');
    INSERT INTO PUBLISHER VALUES(4,'READING MATERIALS INC.','RENEE SMITH','800-555-9743');
    INSERT INTO PUBLISHER VALUES(5,'REED-N-RITE','SEBASTIAN JONES','800-555-8284');

    CREATE TABLE Author
    (AuthorID VARCHAR2(4),
     Lname VARCHAR2(10),
     Fname VARCHAR2(10),
     CONSTRAINT author_authorid_pk PRIMARY KEY(authorid));

    INSERT INTO AUTHOR VALUES ('S100','SMITH','SAM');
    INSERT INTO AUTHOR VALUES ('J100','JONES','JANICE');
    INSERT INTO AUTHOR VALUES ('A100','AUSTIN','JAMES');
    INSERT INTO AUTHOR VALUES ('M100','MARTINEZ','SHEILA');
    INSERT INTO AUTHOR VALUES ('K100','KZOCHSKY','TAMARA');
    INSERT INTO AUTHOR VALUES ('P100','PORTER','LISA');
    INSERT INTO AUTHOR VALUES ('A105','ADAMS','JUAN');
    INSERT INTO AUTHOR VALUES ('B100','BAKER','JACK');
    INSERT INTO AUTHOR VALUES ('P105','PETERSON','TINA');
    INSERT INTO AUTHOR VALUES ('W100','WHITE','WILLIAM');
    INSERT INTO AUTHOR VALUES ('W105','WHITE','LISA');
    INSERT INTO AUTHOR VALUES ('R100','ROBINSON','ROBERT');
    INSERT INTO AUTHOR VALUES ('F100','FIELDS','OSCAR');
    INSERT INTO AUTHOR VALUES ('W110','WILKINSON','ANTHONY');

    CREATE TABLE Books
    (ISBN VARCHAR2(10),
     Title VARCHAR2(30),
     PubDate DATE,
     PubID NUMBER(2),
     Cost NUMBER(5,2),
     Retail NUMBER(5,2),
     Discount NUMBER(4,2),
     Category VARCHAR2(12),
     CONSTRAINT books_isbn_pk PRIMARY KEY(isbn),
     CONSTRAINT books_pubid_fk FOREIGN KEY (pubid) REFERENCES publisher (pubid));

    INSERT INTO BOOKS VALUES ('1059831198','BODYBUILD IN 10 MINUTES A DAY',TO_DATE('21-JAN-05','DD-MON-YY'),4,18.75,30.95,NULL,'FITNESS');
    INSERT INTO BOOKS VALUES ('0401140733','REVENGE OF MICKEY',TO_DATE('14-DEC-05','DD-MON-YY'),1,14.20,22.00,NULL,'FAMILY LIFE');
    INSERT INTO BOOKS VALUES ('4981341710','BUILDING A CAR WITH TOOTHPICKS',TO_DATE('18-MAR-06','DD-MON-YY'),2,37.80,59.95,3.00,'CHILDREN');
    INSERT INTO BOOKS VALUES ('8843172113','DATABASE IMPLEMENTATION',TO_DATE('04-JUN-03','DD-MON-YY'),3,31.40,55.95,NULL,'COMPUTER');
    INSERT INTO BOOKS VALUES ('3437212490','COOKING WITH MUSHROOMS',TO_DATE('28-FEB-04','DD-MON-YY'),4,12.50,19.95,NULL,'COOKING');
    INSERT INTO BOOKS VALUES ('3957136468','HOLY GRAIL OF ORACLE',TO_DATE('31-DEC-05','DD-MON-YY'),3,47.25,75.95,3.80,'COMPUTER');
    INSERT INTO BOOKS VALUES ('1915762492','HANDCRANKED COMPUTERS',TO_DATE('21-JAN-05','DD-MON-YY'),3,21.80,25.00,NULL,'COMPUTER');
    INSERT INTO BOOKS VALUES ('9959789321','E-BUSINESS THE EASY WAY',TO_DATE('01-MAR-06','DD-MON-YY'),2,37.90,54.50,NULL,'COMPUTER');
    INSERT INTO BOOKS VALUES ('2491748320','PAINLESS CHILD-REARING',TO_DATE('17-JUL-04','DD-MON-YY'),5,48.00,89.95,4.50,'FAMILY LIFE');
    INSERT INTO BOOKS VALUES ('0299282519','THE WOK WAY TO COOK',TO_DATE('11-SEP-04','DD-MON-YY'),4,19.00,28.75,NULL,'COOKING');
    INSERT INTO BOOKS VALUES ('8117949391','BIG BEAR AND LITTLE DOVE',TO_DATE('08-NOV-05','DD-MON-YY'),5,5.32,8.95,NULL,'CHILDREN');
    INSERT INTO BOOKS VALUES ('0132149871','HOW TO GET FASTER PIZZA',TO_DATE('11-NOV-06','DD-MON-YY'),4,17.85,29.95,1.50,'SELF HELP');
    INSERT INTO BOOKS VALUES ('9247381001','HOW TO MANAGE THE MANAGER',TO_DATE('09-MAY-03','DD-MON-YY'),1,15.40,31.95,NULL,'BUSINESS');
    INSERT INTO BOOKS VALUES ('2147428890','SHORTEST POEMS',TO_DATE('01-MAY-05','DD-MON-YY'),5,21.85,39.95,NULL,'LITERATURE');

    CREATE TABLE ORDERITEMS
    (Order# NUMBER(4),
     Item# NUMBER(2),
     ISBN VARCHAR2(10),
     Quantity NUMBER(3) NOT NULL,
     PaidEach NUMBER(5,2) NOT NULL,
     CONSTRAINT orderitems_pk PRIMARY KEY (order#, item#),
     CONSTRAINT orderitems_order#_fk FOREIGN KEY (order#) REFERENCES orders (order#),
     CONSTRAINT orderitems_isbn_fk FOREIGN KEY (isbn) REFERENCES books (isbn),
     CONSTRAINT oderitems_quantity_ck CHECK (quantity > 0));

    INSERT INTO ORDERITEMS VALUES (1000,1,'3437212490',1,19.95);
    INSERT INTO ORDERITEMS VALUES (1001,1,'9247381001',1,31.95);
    INSERT INTO ORDERITEMS VALUES (1001,2,'2491748320',1,85.45);
    INSERT INTO ORDERITEMS VALUES (1002,1,'8843172113',2,55.95);
    INSERT INTO ORDERITEMS VALUES (1003,1,'8843172113',1,55.95);
    INSERT INTO ORDERITEMS VALUES (1003,2,'1059831198',1,30.95);
    INSERT INTO ORDERITEMS VALUES (1003,3,'3437212490',1,19.95);
    INSERT INTO ORDERITEMS VALUES (1004,1,'2491748320',2,85.45);
    INSERT INTO ORDERITEMS VALUES (1005,1,'2147428890',1,39.95);
    INSERT INTO ORDERITEMS VALUES (1006,1,'9959789321',1,54.50);
    INSERT INTO ORDERITEMS VALUES (1007,1,'3957136468',3,72.15);
    INSERT INTO ORDERITEMS VALUES (1007,2,'9959789321',1,54.50);
    INSERT INTO ORDERITEMS VALUES (1007,3,'8117949391',1,8.95);
    INSERT INTO ORDERITEMS VALUES (1007,4,'8843172113',1,55.95);
    INSERT INTO ORDERITEMS VALUES (1008,1,'3437212490',2,19.95);
    INSERT INTO ORDERITEMS VALUES (1009,1,'3437212490',1,19.95);
    INSERT INTO ORDERITEMS VALUES (1009,2,'0401140733',1,22.00);
    INSERT INTO ORDERITEMS VALUES (1010,1,'8843172113',1,55.95);
    INSERT INTO ORDERITEMS VALUES (1011,1,'2491748320',1,85.45);
    INSERT INTO ORDERITEMS VALUES (1012,1,'8117949391',1,8.95);
    INSERT INTO ORDERITEMS VALUES (1012,2,'1915762492',2,25.00);
    INSERT INTO ORDERITEMS VALUES (1012,3,'2491748320',1,85.45);
    INSERT INTO ORDERITEMS VALUES (1012,4,'0401140733',1,22.00);
    INSERT INTO ORDERITEMS VALUES (1013,1,'8843172113',1,55.95);
    INSERT INTO ORDERITEMS VALUES (1014,1,'0401140733',2,22.00);
    INSERT INTO ORDERITEMS VALUES (1015,1,'3437212490',1,19.95);
    INSERT INTO ORDERITEMS VALUES (1016,1,'2491748320',1,85.45);
    INSERT INTO ORDERITEMS VALUES (1017,1,'8117949391',2,8.95);
    INSERT INTO ORDERITEMS VALUES (1018,1,'3437212490',1,19.95);
    INSERT INTO ORDERITEMS VALUES (1018,2,'8843172113',1,55.95);
    INSERT INTO ORDERITEMS VALUES (1019,1,'0401140733',1,22.00);
    INSERT INTO ORDERITEMS VALUES (1020,1,'3437212490',1,19.95);

    CREATE TABLE BOOKAUTHOR
    (ISBN VARCHAR2(10),
     AuthorID VARCHAR2(4),
     CONSTRAINT bookauthor_pk PRIMARY KEY (isbn, authorid),
     CONSTRAINT bookauthor_isbn_fk FOREIGN KEY (isbn) REFERENCES books (isbn),
     CONSTRAINT bookauthor_authorid_fk FOREIGN KEY (authorid) REFERENCES author (authorid));

    INSERT INTO BOOKAUTHOR VALUES ('1059831198','S100');
    INSERT INTO BOOKAUTHOR VALUES ('1059831198','P100');
    INSERT INTO BOOKAUTHOR VALUES ('0401140733','J100');
    INSERT INTO BOOKAUTHOR VALUES ('4981341710','K100');
    INSERT INTO BOOKAUTHOR VALUES ('8843172113','P105');
    INSERT INTO BOOKAUTHOR VALUES ('8843172113','A100');
    INSERT INTO BOOKAUTHOR VALUES ('8843172113','A105');
    INSERT INTO BOOKAUTHOR VALUES ('3437212490','B100');
    INSERT INTO BOOKAUTHOR VALUES ('3957136468','A100');
    INSERT INTO BOOKAUTHOR VALUES ('1915762492','W100');
    INSERT INTO BOOKAUTHOR VALUES ('1915762492','W105');
    INSERT INTO BOOKAUTHOR VALUES ('9959789321','J100');
    INSERT INTO BOOKAUTHOR VALUES ('2491748320','R100');
    INSERT INTO BOOKAUTHOR VALUES ('2491748320','F100');
    INSERT INTO BOOKAUTHOR VALUES ('2491748320','B100');
    INSERT INTO BOOKAUTHOR VALUES ('0299282519','S100');
    INSERT INTO BOOKAUTHOR VALUES ('8117949391','R100');
    INSERT INTO BOOKAUTHOR VALUES ('0132149871','S100');
    INSERT INTO BOOKAUTHOR VALUES ('9247381001','W100');
    INSERT INTO BOOKAUTHOR VALUES ('2147428890','W105');

    CREATE TABLE promotion
    (Gift varchar2(15),
     Minretail number(5,2),
     Maxretail number(5,2));

    INSERT into promotion VALUES ('BOOKMARKER', 0, 12);
    INSERT into promotion VALUES ('BOOK LABELS', 12.01, 25);
    INSERT into promotion VALUES ('BOOK COVER', 25.01, 56);
    INSERT into promotion VALUES ('FREE SHIPPING', 56.01, 999.99);

    COMMIT;

    CREATE TABLE acctmanager
    (amid VARCHAR2(4) PRIMARY KEY,
     amfirst VARCHAR2(12) NOT NULL,
     amlast VARCHAR2(12) NOT NULL,
     amedate DATE DEFAULT SYSDATE,
     region CHAR(2) NOT NULL);

    CREATE TABLE acctmanager2
    (amid CHAR(4),
     amfirst VARCHAR2(12) NOT NULL,
     amlast VARCHAR2(12) NOT NULL,
     amedate DATE DEFAULT SYSDATE,
     region CHAR(2),
     CONSTRAINT acctmanager2_amid_pk PRIMARY KEY (amid),
     CONSTRAINT acctmanager2_region_ck CHECK (region IN ('N', 'NW', 'NE', 'S', 'SE', 'SW', 'W', 'E')));

    commit;
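    After running the script, you can quickly confirm that the schema was built correctly. A small verification sketch; the expected row counts follow from the INSERT statements above:

    conn justlee/justlee
    SELECT object_type, COUNT(*) FROM user_objects GROUP BY object_type;
    SELECT COUNT(*) FROM customers;   -- expect 20
    SELECT COUNT(*) FROM orders;      -- expect 21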

  • Miscellaneous | DBA Genesis Support

    Miscellaneous

    DBCA Does Not Display ASM Disk Groups In 12cR2: I was installing Oracle 12c R2 with ASM but somehow the DBCA did not list ASM diskgroups. After struggling for hours, I could find a...

    Oracle Database Cold Backup & Recovery: Oracle cold database backup is rarely used these days. DBAs hardly take cold database backups, but sometimes it is important to take one...

    Physical Oracle Database Limits: When you try to add data files or resize existing data files, you cannot go with any number in your mind. Oracle database has limitations...

    Difference between 12cR1 and 12cR2 Multitenant database: There are a lot of new features introduced by Oracle in 12c Release 2 when compared to 12c Release 1. Here are some of the...

    Oracle 11g to 12c Rolling Upgrade: A rolling upgrade allows you to perform a database upgrade without any noticeable downtime for the end users. There are multiple ways...

    Control File and Redolog File Multiplexing: Control files and redolog files contain crucial database information, and loss of these files will lead to loss of important data about...

    Deinstall Oracle Software: This article describes how to remove Oracle software from a Linux server. There are different methods with which you can remove the...

    Manual Database Creation: It is always a good idea to create an Oracle database using DBCA. The manual method of creating an Oracle database is outdated, but you must also know...

    Oracle Database Health Check: Daily DB health checks. Below are daily checks which are to be performed by a DBA: check that all instances and listeners are up and running. SELECT...

    Oracle Table and Tables Cluster: In this article we will create a table cluster inside Oracle. We will also create a couple of tables inside the cluster table...

    Drop all schema objects: The below script will drop all the objects owned by a schema. This will not delete the user but only deletes the objects. SET SERVEROUTPUT...

