Oracle 101

Oracle Configuration

Information here is useful for all Oracle databases; some examples are geared toward Oracle 8 on Solaris, AIX, and Linux systems.
























Environment

The ORACLE_HOME environment variable specifies the base directory in which Oracle is installed. The ORACLE_SID variable contains the name of the Oracle instance, usually three or four characters. Examples:
ORACLE_HOME=/apps/oracle
ORACLE_SID=DB20
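
A minimal login profile setup (ksh/sh); the PATH entry assumes the standard bin directory under ORACLE_HOME:

# ~/.profile
ORACLE_HOME=/apps/oracle
ORACLE_SID=DB20
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_HOME ORACLE_SID PATH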

Initialization File

The initialization file contains parameters which are read during startup of the Oracle instance.
initialization file - initXX.ora  (XX = instance SID)

DB_NAME = DB40
CONTROL_FILES = (/data/disk50/cfile1.con, /data/disk51/cfile2.con)
DB_BLOCK_SIZE = 4096
DB_BLOCK_BUFFERS = 1000
SHARED_POOL_SIZE = 20000000
DML_LOCKS = 200
LOG_BUFFER = 65536
PROCESSES = 100
ROLLBACK_SEGMENTS = (rbs01)
LICENSE_MAX_SESSIONS = 100
LICENSE_SESSION_WARNING = 90
LICENSE_MAX_USERS = 100
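
Once the instance is open, current parameter values can be checked against the file; a quick sketch using the standard V$PARAMETER view:

SELECT name, value
FROM v$parameter
WHERE name IN ('db_block_size','shared_pool_size') ;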

SQL*Net

SQL*Net (Net8 in Oracle 8) is the networking component which allows client PCs and other servers to access Oracle instances over the network.
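
Client connections are typically resolved through a tnsnames.ora entry. A minimal sketch, assuming a server named dbhost1 and the default listener port:

DB20 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1)(PORT = 1521))
    (CONNECT_DATA = (SID = DB20))
  )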


Server Manager

Server Manager is a command line utility which is used to administer the Oracle instance.
# To start Server Manager (Unix):
svrmgrl

# Connecting to an Oracle instance, within Server Manager
> CONNECT internal/oracle

# Starting up an Oracle instance, without mounting the database
> STARTUP NOMOUNT PFILE=initXX.ora

# Starting an Oracle instance
> STARTUP OPEN PFILE=/data/config/initd50.ora

# Shutting down an Oracle instance, immediately
> SHUTDOWN IMMEDIATE

# Shutting down an Oracle instance, and allowing pending
# transactions to commit
> SHUTDOWN TRANSACTIONAL


Password File Authentication

In previous versions of Oracle, the "internal" or super-user account was limited to one login / password. With Oracle 8, several IDs can be granted the internal account's capabilities via a password file. Password file authentication is necessary because administrative users may need to get into Server Manager when the instance is not yet open.
orapwd
Usage: orapwd file=<fname> password=<password> entries=<users>
      where
        file - name of password file (mandatory),
        password - password for SYS and INTERNAL (mandatory),
        entries - maximum number of distinct DBAs and OPERs (optional)
      There are no spaces around the equal-to (=) character.
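
A sample call; the file location and password below are illustrative:

orapwd file=/apps/oracle/dbs/orapwDB20 password=secret entries=10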

GRANT SYSDBA TO scott ;

In the initXX.ora file:
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

/* view accounts granted password file authentication */
SELECT username, sysdba, sysoper FROM v$pwfile_users ;


Database Creation

Two examples are detailed below.

--
-- Example #1
--

SPOOL output.log

STARTUP NOMOUNT PFILE=initXX.ora

CREATE DATABASE XX
        MAXLOGFILES 3
        MAXLOGMEMBERS 2
        MAXDATAFILES 100
        MAXLOGHISTORY 1
        MAXINSTANCES 1
LOGFILE
        GROUP 1 ('/data/disk1/log1a.rdo','/data/disk2/log1b.rdo') SIZE 10 M,
        GROUP 2 ('/data/disk3/log2a.rdo','/data/disk3/log2b.rdo') SIZE 10 M
DATAFILE
        '/data/disk8/system01.dbf' SIZE 50 M AUTOEXTEND ON,
        '/data/disk9/filexx01.dbf' SIZE 1000 M AUTOEXTEND ON,
        '/data/disk10/filexx02.dbf' SIZE 1000 M AUTOEXTEND ON,
        '/data/disk11/sortxx99.dbf' SIZE 100 M
CHARACTER SET WE8ISO8859P1 ;

SPOOL OFF


--
-- Example #2
--

connect internal/oracle

SPOOL output.log

STARTUP NOMOUNT PFILE=/apps/oracle/admin/DB19/pfile/init.ora

CREATE DATABASE DB19
        MAXLOGFILES 4
        MAXLOGMEMBERS 2
        MAXDATAFILES 100
        MAXLOGHISTORY 1
        MAXINSTANCES 1
LOGFILE
        GROUP 1 ('/data1/oradata/DB19/log1a.rdo','/data1/oradata/DB19/log1b.rdo') SIZE 20 M,
        GROUP 2 ('/data1/oradata/DB19/log2a.rdo','/data1/oradata/DB19/log2b.rdo') SIZE 20 M,
        GROUP 3 ('/data1/oradata/DB19/log3a.rdo','/data1/oradata/DB19/log3b.rdo') SIZE 20 M,
        GROUP 4 ('/data1/oradata/DB19/log4a.rdo','/data1/oradata/DB19/log4b.rdo') SIZE 20 M
DATAFILE
        '/data1/oradata/DB19/system01.dbf' SIZE 100 M AUTOEXTEND ON,
        '/data1/oradata/DB19/system02.dbf' SIZE 100 M AUTOEXTEND ON,
        '/data1/oradata/DB19/system03.dbf' SIZE 100 M AUTOEXTEND ON
CHARACTER SET WE8ISO8859P1 ;


SPOOL OFF


**************************************************
Next Steps:

1) The following SQL scripts should be run, as internal, following 
   the DB creation:  

   catalog.sql    catproc.sql   dbmspool.sql   
   catblock.sql   catparr.sql   pupbld.sql

   See oracle/rdbms/admin directory.


2) Creation of temp (sort) tablespace


CREATE TABLESPACE temp01 
DATAFILE  '/data1/oradata/DB19/temp01.dbf'
SIZE 500 M
TEMPORARY
ONLINE ;


3) Creation of rollback segments

   (See section on RBS for examples)





Tablespace Creation

The examples below create tablespaces for permanent data tables and temporary sort tables.

CREATE TABLESPACE ts01
DATAFILE '/data/disk9/filexx01.dbf' SIZE 1000M
MINIMUM EXTENT 500K
DEFAULT STORAGE (
  INITIAL 500K
  NEXT 500K
  MINEXTENTS 1
  MAXEXTENTS 200
  PCTINCREASE 0 )
PERMANENT
ONLINE ;

CREATE TABLESPACE sort99
DATAFILE '/data/disk11/sortxx99.dbf' SIZE 100M
MINIMUM EXTENT 1M
DEFAULT STORAGE (
INITIAL 2M
NEXT 2M
MAXEXTENTS 40
PCTINCREASE 0)
TEMPORARY
ONLINE ;




/* auto extend */

CREATE TABLESPACE prod_idx_01
DATAFILE '/u3/oradata/prod/prod_idx_01.dbf'
SIZE 300M
AUTOEXTEND ON NEXT 10M
MINIMUM EXTENT 320K
DEFAULT STORAGE (
  INITIAL 320K
  NEXT 320K
  MINEXTENTS 1
  MAXEXTENTS UNLIMITED
  PCTINCREASE 0
  )
PERMANENT
ONLINE ;
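

To confirm the results, query the standard dictionary views:

SELECT tablespace_name, status, contents FROM dba_tablespaces ;

SELECT tablespace_name, file_name, bytes FROM dba_data_files ;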



Increasing the size of a tablespace



1) Add a data file

ALTER TABLESPACE ts03
ADD DATAFILE
'/data1/oradata/ts03-b.dbf'
SIZE 200M ;

ALTER TABLESPACE ts01
ADD DATAFILE
'/data/disk12/dataxx44.dbf' SIZE 200M
AUTOEXTEND ON NEXT 10M
MAXSIZE 500M;


2) or, resize current data file

alter database datafile '/u01/oradata/orcl/users01.dbf' resize 50M; 



3) or, change the storage / extent configuration

ALTER TABLESPACE ts01 
DEFAULT STORAGE (
  INITIAL 200M NEXT 200M MAXEXTENTS 10 ) ;




/* Altering the default storage for a tablespace */

ALTER TABLESPACE TEMP01 DEFAULT STORAGE ( PCTINCREASE 0 ) ;

ALTER TABLESPACE TEMP01 DEFAULT STORAGE ( INITIAL 1M NEXT 1M ) ;
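
Before growing a tablespace, check how much free space remains, using the standard DBA_FREE_SPACE view:

SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
FROM dba_free_space
GROUP BY tablespace_name ;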





Rollback segments

Every database requires rollback segments: beyond the SYSTEM rollback segment, at least one more must be created before user transactions can run. Below are a few handy examples for creating and changing rollback segments. Note that for some Oracle upgrades, you may need to extend the system rollback segment (or increase its next extent size).

/* example 1 */

CREATE ROLLBACK SEGMENT rbs01
TABLESPACE ts22
STORAGE (
INITIAL 100K NEXT 100K OPTIMAL 4M
MINEXTENTS 20 MAXEXTENTS 100) ;

ALTER ROLLBACK SEGMENT rbs01 ONLINE ;


/* example 2 */


CREATE ROLLBACK SEGMENT R05
TABLESPACE rb5
STORAGE ( INITIAL 100K NEXT 100K MAXEXTENTS 300 ) ;

ALTER ROLLBACK SEGMENT R05 ONLINE ;
   

Note:

Add this line to the initXX.ora file, to ensure the
rollback segments are brought online after each server restart:

      rollback_segments=(r01,r02,r03,r04)


Altering a Rollback Segment

ALTER ROLLBACK SEGMENT r03
STORAGE ( MAXEXTENTS 250 );


ALTER ROLLBACK SEGMENT r05 SHRINK TO 10M ;
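

To check which rollback segments exist, and whether they are online, query the standard dictionary view:

SELECT segment_name, tablespace_name, status
FROM dba_rollback_segs ;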



/* example3, includes tablespace creates */


connect internal/oracle

CREATE TABLESPACE rbs01
DATAFILE '/data1/oradata/DB19/rbs01.dbf' SIZE 100M
MINIMUM EXTENT 10K
DEFAULT STORAGE (
  INITIAL 20K
  NEXT 500K
  MINEXTENTS 1
  MAXEXTENTS UNLIMITED
  PCTINCREASE 0  )
PERMANENT
ONLINE ;

CREATE ROLLBACK SEGMENT rbs01
TABLESPACE rbs01
STORAGE (
INITIAL 100K NEXT 100K OPTIMAL 10M
MINEXTENTS 20 MAXEXTENTS 999) ;

ALTER ROLLBACK SEGMENT rbs01 ONLINE ;



CREATE TABLESPACE rbs02
DATAFILE '/data1/oradata/DB19/rbs02.dbf' SIZE 100M
MINIMUM EXTENT 10K
DEFAULT STORAGE (
  INITIAL 20K
  NEXT 500K
  MINEXTENTS 1
  MAXEXTENTS UNLIMITED
  PCTINCREASE 0  )
PERMANENT
ONLINE ;

CREATE ROLLBACK SEGMENT rbs02
TABLESPACE rbs02
STORAGE (
INITIAL 100K NEXT 100K OPTIMAL 10M
MINEXTENTS 20 MAXEXTENTS 999) ;

ALTER ROLLBACK SEGMENT rbs02 ONLINE ;


CREATE TABLESPACE rbs03
DATAFILE '/data1/oradata/DB19/rbs03.dbf' SIZE 100M
MINIMUM EXTENT 10K
DEFAULT STORAGE (
  INITIAL 20K
  NEXT 500K
  MINEXTENTS 1
  MAXEXTENTS UNLIMITED
  PCTINCREASE 0  )
PERMANENT
ONLINE ;

CREATE ROLLBACK SEGMENT rbs03
TABLESPACE rbs03
STORAGE (
INITIAL 100K NEXT 100K OPTIMAL 10M
MINEXTENTS 20 MAXEXTENTS 999) ;

ALTER ROLLBACK SEGMENT rbs03 ONLINE ;






Adding a redo log file set


ALTER DATABASE
  ADD LOGFILE
   ('/data/disk77/log3a.rdo', '/data/disk78/log3b.rdo')
   SIZE 500K;


ALTER DATABASE ADD LOGFILE MEMBER
  '/data/disk77/log3a.rdo' TO GROUP 1, 
  '/data/disk78/log3b.rdo' TO GROUP 2;


ALTER DATABASE
  ADD LOGFILE
   '/data2/oradata/MARSH/redo14.log'
   SIZE 10M;


select group#,thread#,archived,status from v$log ;

select * from v$logfile ;

ALTER SYSTEM SWITCH LOGFILE ;

/* a log group must be inactive (and archived) before it can be dropped */

ALTER DATABASE DROP LOGFILE '/data/OraHome1/oradata/TEST/redo01.log' ;
ALTER DATABASE DROP LOGFILE '/data/OraHome1/oradata/TEST/redo02.log' ;



Table Creation

This section illustrates the creation of a standard table (invoice) and a table which contains a blob column (documents).

CREATE TABLE
   INVOICE (
     INV_ID   NUMBER(4),
     INV_DATE DATE)
  PCTFREE 20
  PCTUSED 50
STORAGE (
   INITIAL 200K NEXT 200K
   PCTINCREASE 0 MAXEXTENTS 50 )
TABLESPACE ts01
LOGGING ;


CREATE TABLE documents
 ( doc_id       INTEGER,
   doc_text     BLOB)
STORAGE (INITIAL 256 NEXT 256)
LOB (doc_text) STORE AS
   (TABLESPACE ts04
    STORAGE (INITIAL 6144 NEXT 6144)
    CHUNK 4
    NOCACHE NOLOGGING
       INDEX (TABLESPACE ts05
       STORAGE (INITIAL 256 NEXT 256)
             )
   );


/* index-organized table, similar to Sybase's clustered index construct */

CREATE TABLE postal_customer
( zip        CHAR(5),
  addr       NUMBER(5),
  dir        CHAR(1),
  street     VARCHAR2(20),
  last_name  VARCHAR2(20),
  first_name VARCHAR2(20),
  item_ct    NUMBER(3),
  CONSTRAINT pk_postal PRIMARY KEY (zip,addr,dir,street)
)
ORGANIZATION INDEX 
TABLESPACE ts05
PCTTHRESHOLD 25
OVERFLOW TABLESPACE ts06 ;


/* cloning a table without data */

create table city2 as select * from city where 1=2 ;

create table brand2 as select * from brand where 1=2 ;





Altering table storage


alter table employee move
 STORAGE (
    INITIAL 200K
    NEXT 200K
    PCTINCREASE 0 
    MAXEXTENTS 50 )
 TABLESPACE PROD_DATA_04 ;



/* alter table: modifying a column */

ALTER TABLE employee MODIFY (last_name varchar2(35));




Backing up data files

A "cold" backup is the most widely used method for backing up Oracle data.

ALTER TABLESPACE ts1 BEGIN BACKUP;

   << backup files, within O/S >>

ALTER TABLESPACE ts1 END BACKUP;

/* control file backup */

ALTER DATABASE BACKUP CONTROLFILE TO 'filename' REUSE;
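
The control file can also be dumped as a readable script, written to a trace file:

ALTER DATABASE BACKUP CONTROLFILE TO TRACE ;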


Taking tablespaces offline

To perform a "hot backup", each tablespace needs to put offline, and backed up using OS commands.


/* prepare for offline tablespace backup */

ALTER SYSTEM ARCHIVE LOG CURRENT;


/* take tablespace offline */

ALTER TABLESPACE users OFFLINE NORMAL;

/* run backup for datafiles */

/* bring tablespace online */

ALTER TABLESPACE users ONLINE;


Creating users


/* create profile */

CREATE PROFILE pr01 LIMIT
SESSIONS_PER_USER 1
IDLE_TIME 30
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LIFE_TIME 60
PASSWORD_VERIFY_FUNCTION SYSUTILS.VERIF_PWD ;

PROFILE = name of the profile
SESSIONS_PER_USER = limits a user to integer concurrent sessions
CPU_PER_SESSION = limits the CPU time for a session, in hundredths of a second
CPU_PER_CALL = limits the CPU time for a call
CONNECT_TIME =  limits the total elapsed time of a session, in minutes
IDLE_TIME = limits periods of continuous inactive time during a session, in minutes
LOGICAL_READS_PER_SESSION = number of data blocks read in a session
LOGICAL_READS_PER_CALL = number of data blocks read for a call to process a SQL stmt
FAILED_LOGIN_ATTEMPTS = number of failed login attempts before locking the account
PASSWORD_LIFE_TIME = limits the number of days the same password can be used
PASSWORD_REUSE_TIME = number of days before which a password cannot be reused
PASSWORD_REUSE_MAX = number of password changes required before a password can be reused
PASSWORD_LOCK_TIME = number of days an account will be locked after failed login attempts
PASSWORD_VERIFY_FUNCTION = a PL/SQL password complexity verification function
DEFAULT = omits a limit for this resource in this profile
COMPOSITE_LIMIT = specifies the total resource cost for a session, in service units
UNLIMITED = a user assigned this profile can use an unlimited amount of this resource


/* create user */

CREATE USER scott
IDENTIFIED BY tiger
DEFAULT TABLESPACE ts01
QUOTA 500M ON ts02
PASSWORD EXPIRE
ACCOUNT UNLOCK
PROFILE pr01 ;
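

Note that a new user cannot log in until granted the CREATE SESSION privilege (or a role which includes it, such as CONNECT):

GRANT CREATE SESSION TO scott ;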


/* user pwd change */

ALTER USER scott
IDENTIFIED BY lion ;

/* add a tablespace quota */
ALTER USER scott
QUOTA 100M ON ts29 ;


/* user drop ; add CASCADE if the schema contains objects */

DROP USER scott ;



Import and Export

The import and export utilities are used to move data from one instance to another, or from one schema to another. Export writes out the DDL and data in order; import basically re-runs it all to recreate and fill the tables.

/* Export sample call */

exp username/password feedback=100 file=filename.dat log=export.log

exp help=y

Keyword  Description (Default)        Keyword      Description (Default)
--------------------------------------------------------------------------
USERID   username/password            FULL         export entire file (N)
BUFFER   size of data buffer          OWNER        list of owner usernames
FILE     output files (EXPDAT.DMP)    TABLES       list of table names
COMPRESS import into one extent (Y)   RECORDLENGTH length of IO record
GRANTS   export grants (Y)            INCTYPE      incremental export type
INDEXES  export indexes (Y)           RECORD       track incr. export (Y)
ROWS     export data rows (Y)         PARFILE      parameter filename
CONSTRAINTS export constraints (Y)    CONSISTENT   cross-table consistency
LOG      log file of screen output    STATISTICS   analyze objects (ESTIMATE)
DIRECT   direct path (N)              TRIGGERS     export triggers (Y)
FEEDBACK display progress every x rows (0)
FILESIZE maximum size of each dump file
QUERY    select clause used to export a subset of a table
VOLSIZE  number of bytes to write to each tape volume

The following keywords only apply to transportable tablespaces
TRANSPORT_TABLESPACE export transportable tablespace metadata (N)
TABLESPACES list of tablespaces to transport
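

Long option lists can be kept in a parameter file, passed via PARFILE. A sketch; the file name and its contents are illustrative:

# export.par
userid=scott/tiger
tables=(company,brand)
file=company.dat
log=export.log

exp parfile=export.par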



/* Import sample calls */

imp username/password feedback=100 file=filename.dat log=import.log

/* import a single table, table exists already */
imp admin23/portal feedback=100 file=company3.dat log=company.log tables='(company)' ignore=Y

imp help=y

USERID   username/password           FULL     import entire file (N)
BUFFER   size of data buffer         FROMUSER     list of owner usernames
FILE     output file (EXPDAT.DMP)    TOUSER       list of usernames
SHOW     just list file contents (N) TABLES       list of table names
IGNORE   ignore create errors (N)    RECORDLENGTH length of IO record
GRANTS   import grants (Y)           INCTYPE      incremental import type
INDEXES  import indexes (Y)          COMMIT    commit array insert (N)
ROWS     import data rows (Y)        PARFILE      parameter filename
LOG      log file of screen output

DESTROY                  overwrite tablespace data file (N)
INDEXFILE                write table/index info to specified file
CHARSET                  character set of export file (NLS_LANG)
POINT_IN_TIME_RECOVER    Tablespace Point-in-time Recovery (N)
SKIP_UNUSABLE_INDEXES    skip maintenance of unusable indexes (N)
ANALYZE                  execute ANALYZE statements in dump file (Y)
FEEDBACK                 display progress every x rows(0)




Bonus Script

#====================================================================
#!/usr/bin/ksh
#
# exp2imp:  export Oracle table for import to secondary schema
#
# Parms:  user1 table user2
#
#
#====================================================================

dbauser=oradba/dba@prod
tname=$2
user1=$1
user2=$3
tparm2="tables='$tname'"
tparm1="tables='$user1.$tname'"


echo "====================="
echo "Exporting $tparm1 ..."

exp $dbauser  \
 feedback=100 \
 file=$tname.dat \
 $tparm1 \
 direct=Y \
 log=/tmp/export.log


echo "====================="
sleep 5
echo "Importing $tparm2 ..."

imp $dbauser \
 fromuser=$user1 \
 touser=$user2 \
 feedback=100  \
 file=$tname.dat \
 log=/tmp/import.log \
 $tparm2 \
 indexes=N \
 ignore=Y


echo "========================================================="
echo `date` Done.
echo "========================================================="








SQL Loader

Use this high-performance utility for table-based imports. Drop the table's indexes before the load, and rebuild them afterwards. The drawback with this utility: you'll need to build a control file for each table, which is not a trivial process.

# command sample: uses "direct path" which bypasses redo

sqlldr userid=scott/tiger data=/apps/temp/customer.dat control=customer.ctl direct=true

# list parameters

sqlldr help=Y

userid          ORACLE username/password
control         Control file name
log             Log file name
bad             Bad file name
data            Data file name
discard         Discard file name
discardmax      Number of discards to allow
skip            Number of logical records to skip
load            Number of logical records to load
errors          Number of errors to allow
rows            Number of rows in conventional path bind array or between direct path data saves
bindsize        Size of conventional path bind array in bytes
silent          Suppress messages during run (header,feedback,errors,discards,partitions)
direct          use direct path
_synchro        internal testing
parfile         parameter file: name of file that contains parameter specifications
parallel        do parallel load
file            File to allocate extents from
skip_unusable_indexes           disallow/allow unusable indexes or index partitions
skip_index_maintenance          do not maintain indexes, mark affected indexes as unusable
commit_discontinued             commit loaded rows when load is discontinued
_display_exitcode               Display exit code for SQL*Loader execution
readsize                        Size of Read buffer




Using SQL Loader

1)  create a control file:

LOAD DATA
INFILE 'places.txt'
INTO TABLE places
INSERT
FIELDS TERMINATED BY X'09' TRAILING NULLCOLS
(
loc_id,
name,
city,
phone
)

2) make sure the destination table is empty

3) run the command, as the oracle user:

sqlldr userid=user99/tiger control=places.ctl direct=true skip_index_maintenance=true readsize=131070


This command runs SQL*Loader using 'direct path' and skips index block updates - the indexes will need to be rebuilt later, followed by an analyze.

Direct path skips constraint checking, along with redo generation - blocks are written directly, with no conventional commits.
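
A post-load cleanup sketch (the index name is illustrative):

ALTER INDEX idx_places REBUILD ;

ANALYZE TABLE places COMPUTE STATISTICS ;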

Sample control file, importing Sybase dates


LOAD DATA
INFILE 'company.dat'
INTO TABLE company
INSERT
FIELDS TERMINATED BY X'09' TRAILING NULLCOLS
(
COMPANY_ID          ,
COUNTRY_ID          ,
INDUSTRY_ID         ,
INSERT_DT     date "mon dd yyyy hh:miam" terminated by X'09',
MOD_DT        date "mon dd yyyy hh:miam" terminated by X'09',
ENDING_EFF_DT date "mon dd yyyy hh:miam" terminated by X'09',
TRADE_COUNTRY_ID    ,
COMPANY_NAME        ,
DATA_ID             )

Exporting to a delimited file

Note: blob / long raw / long field types are not exportable using this technique!

Option 1:  Select as file format

set feedback off
set pages 0    
spool file2.dat
select col_a||','||col_b||','||to_char(col_c) from tablename ;
spool off



Option 2: Use CTL file maker and SQL Loader
Install the three scripts detailed below, then run as follows:

(sample calls)

# export all tables, to ctl files
exp2ctl scott/tiger /export/home/dba/temp

cd /export/home/dba/temp

# import using sql loader, to different schema ( or instance! )
ctl2ora steve/lion


'sqlldr_exp' script from the Oracle site (author: T. Kyte).
This script is called from the following two scripts.

#!/usr/bin/ksh 
# Script: sqlldr_exp
# export single oracle table to a ctl file,
# for use with sqlldr

if [ "$1" = "" ]
then
   cat << EOF
usage:       sqlldr_exp un/pw [tables|views]

example:     sqlldr_exp scott/tiger mytable

description: Select over standard out all rows of table or view with 
             columns delimited by pipes.
EOF
   exit
fi

PW=$1
tname=$2
shift

for X in $*
do
sqlplus -s $PW << EOF > /tmp/flat$$.sql
set wrap off
set feedback off
set pagesize 0
set verify off

prompt prompt LOAD DATA
prompt prompt INFILE * 
prompt prompt INTO TABLE $X 
prompt prompt TRUNCATE
prompt prompt FIELDS TERMINATED BY '|'
prompt prompt (
select  'prompt ' || decode(column_id,1,'',',') || lower(column_name)
from    user_tab_columns
where   table_name = upper('$X') 
/
prompt prompt )
prompt prompt BEGINDATA

prompt  select
select  lower(column_name)||'||chr(124)||'
from    user_tab_columns
where   table_name = upper('$X') and
    column_id != (select max(column_id) from user_tab_columns where
 table_name = upper('$X'))
/
select  lower(column_name)
from    user_tab_columns
where   table_name = upper('$X') and
    column_id = (select max(column_id) from user_tab_columns where
 table_name = upper('$X'))
/
prompt  from    $X
prompt  /
prompt  exit
exit
EOF

sqlplus -s $PW << EOF > $tname.ctl
set wrap off
set feedback off
set pagesize 0
set verify off
set trimspool on
set linesize 5000
start /tmp/flat$$.sql
exit
EOF

rm /tmp/flat$$.sql

done



Bonus Script I: Create control files for each table 
                in a schema.

#!/usr/bin/ksh
#==========================================
# Script: exp2ctl
# Export all tables to control files, 
# for sqlldr import
#
# Parms:  userid/pwd
#         output directory
#
# ** Check location of sqlldr_exp script, modify call as needed (expcmd)
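# ** The helper script user_tables.sql is assumed to contain a query
#    such as:  select table_name from user_tables ;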
#==========================================

orauser=$1
dir1=$2

if ! test -d $dir1 ; then
  echo "Directory not found: " $dir1
  echo " "
  echo " "
  echo " "
  exit
fi

shfile=$dir1/ctlmake.sh
awkfile=$dir1/ctlmake.awk
tfile=$dir1/tnames.dat
expcmd=sqlldr_exp

echo "Using schema ................" $orauser
echo "Using output directory ......" $dir1
sleep 2

echo "Creating table list ..."

sqlplus -s $orauser @user_tables.sql | egrep "[A-Z]" > $tfile
sleep 2


echo "Creating script ..."
echo "cd $dir1" > $shfile
echo "set -x"   >> $shfile
echo "{print \"$expcmd\" \" $orauser \" \$1  }" > $awkfile
cat $dir1/tnames.dat | awk -f $dir1/ctlmake.awk >> $shfile
chmod 700 $shfile
sleep 2
cat $shfile
sleep 3


echo "Running ctl make ..."
$shfile



Bonus Script II:  Run all ctl files in the current directory
                  ( file names must end in .ctl )

#!/bin/ksh
#-----------------------------
#  Script: ctl2ora
#  Control File Processor
#  for SQL Loader
#  Parms:  userid/pwd
#-----------------------------
if test $# -lt 1 ; then
   echo " "
   echo "usage: "
   echo $0  ' userid/pwd'
   echo " "
   exit
fi


for fname in *.ctl ; do
    echo "sqlldr: " $fname 
    if test -f $fname ;  then
      sqlldr userid=$1 control=$fname direct=true readsize=131070
      ## Faster version!
      ## sqlldr userid=$1 control=$fname direct=true skip_index_maintenance=true readsize=131070
    fi
done

Using PL/SQL to Export Data to a File

--
-- This procedure exports 2 columns from a table
-- to a double pipe delimited file.
--
-- Note: long raw / long data fields are not exportable using this technique
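-- Note: the target directory ('/apps/temp' below) must be listed in the
-- utl_file_dir initialization parameter, or utl_file.fopen will fail.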

create or replace procedure proc_exp_cad (rcount  in  integer) is

count1        integer default 0 ;
file1         utl_file.file_type;
company_id1   integer;
industry_id1  integer;
line1         varchar2(1000);

cursor cur1 is
select  company_id, industry_id
from company_annual_data2 ;


begin

file1 := utl_file.fopen('/apps/temp','cad.out','w',32000);

open cur1;

loop
   count1 := count1 + 1 ;

   fetch cur1 into company_id1, industry_id1 ;
   exit when cur1%NOTFOUND ;
   exit when count1 > rcount ;

   line1 := to_char(company_id1) || '||' ||
           to_char(industry_id1) ;
   
   utl_file.put_line(file1,line1);

end loop ;


utl_file.fclose(file1);


end  ;
/


call proc_exp_cad(30)
/


show errors



DB Verify

The DB Verify utility is the equivalent of Sybase's DBCC utility. It scans a given data file and checks its consistency, verifying that the block pointers are all proper. It should be run on a regular basis, to find potential disk problems; if it is not, the end users will likely be the first to notice any corruption which may materialize.
# run example

cd /apps/oracle/bin

# blocksize must match the database's db_block_size
./dbv file=/data2/ts23.dbf blocksize=4096





Restricted Mode

The instance is available only to users with the RESTRICTED SESSION privilege.

/* startup in restricted mode */   

STARTUP RESTRICT


/* restrict an instance that is already running */

ALTER SYSTEM ENABLE RESTRICTED SESSION ;
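

To return the instance to normal operation:

ALTER SYSTEM DISABLE RESTRICTED SESSION ;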






Clearing Log Files

Clearing the redo log files is usually the first remedy used in repairing a damaged instance; it is the equivalent of dropping and re-adding each file.

SELECT * FROM v$log ;


ALTER DATABASE CLEAR LOGFILE 'filename' ;


/* if archive log mode is true, may need to force the clear */

ALTER DATABASE CLEAR UNARCHIVED LOGFILE 'filename' ;

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2 ;







Moving Data Files

Illustrated is the method for moving a data file within an active instance; a complete sequence is shown after the steps.


1) take the tablespace offline

2) move the file, using mv command

3) ALTER TABLESPACE ts01 
   RENAME DATAFILE '/data/ts01.dbf'
   TO              '/data5/ts01.dbf' ;
   
4) bring the tablespace online
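

The complete sequence, assuming tablespace ts01:

ALTER TABLESPACE ts01 OFFLINE ;

-- at the O/S prompt:  mv /data/ts01.dbf /data5/ts01.dbf

ALTER TABLESPACE ts01 RENAME DATAFILE
  '/data/ts01.dbf' TO '/data5/ts01.dbf' ;

ALTER TABLESPACE ts01 ONLINE ;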





Dropping a Tablespace

Note that dropping a tablespace only removes the internal pointers from Oracle; you still need to remove the data files using O/S commands.

ALTER TABLESPACE ts01 OFFLINE IMMEDIATE ;

DROP TABLESPACE ts01 INCLUDING CONTENTS ;

/* then, delete data files using O/S commands */



Index Management

Main index types: b-tree, reverse key, and bitmap indexes.

/* unique index, use a low PCTFREE value for ID columns */

CREATE UNIQUE INDEX schema2.idx_invoice
ON schema2.invoice ( inv_id )
PCTFREE 10
STORAGE ( INITIAL 200K 
          NEXT 200K
          PCTINCREASE 0
          MAXEXTENTS 50 )
TABLESPACE idx01 ;


/* non-unique index ;  
   note the NOLOGGING parm, which is recommended for large indexes
   also note the NOSORT option, which is good for large tables where the
   data has been loaded in sorted order.
*/

CREATE INDEX schema2.idx_customer
ON schema2.customer ( stat_cd, name )
PCTFREE 40
STORAGE ( INITIAL 200K 
          NEXT 200K
          PCTINCREASE 0
          MAXEXTENTS 50 )
NOLOGGING          
NOSORT
TABLESPACE idx01 ;


/* index extent increase */

ALTER INDEX schema2.idx_customer
STORAGE ( NEXT 300K ) ;


/* indexes should be rebuilt, when there are many deleted entries */

ALTER INDEX schema2.idx_customer
REBUILD
TABLESPACE idx02 ;


alter index PK_EMPLOYEE rebuild compute statistics nologging ; 



/* validate the index structure and check for corruption ;
   this also populates the INDEX_STATS view */

ANALYZE INDEX schema2.idx_customer VALIDATE STRUCTURE ;



/* disable a constraint */

ALTER TABLE EMPLOYEE DISABLE CONSTRAINT FK_DEPT ;


/* add a constraint */

alter table  REPORT_LIST
add constraint  PK_REPORT_LIST primary key(REPORT_ID)
using index tablespace prod_idx_01;




Creating a Schema

The steps detailed below depict the creation of a typical schema within an open instance.

create tablespace ts99 datafile '/data4/ts99.dat'
size 200m default storage ( initial 100k next 100k pctincrease 0) ;

create user user99 identified by pwd99
default tablespace ts99
temporary tablespace temp
quota unlimited on ts99 ;

grant create table,resource,connect to user99 ;

grant dba to user99 ;



Enabling Archive Logging

Archive logging is necessary in order to guarantee that a full recovery is possible.


In the init.ora file, add:

LOG_ARCHIVE_DEST = /data5/archive
LOG_ARCHIVE_FORMAT = arch%s.arc
LOG_ARCHIVE_MAX_PROCESSES=2

Shut down the Oracle instance.

In server manager:

> connect internal
> startup mount
> alter database archivelog ;
> alter database open ;
> shutdown immediate


Run a complete cold backup - this sets the starting point 
if a restore is needed.
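
After restarting the instance, the archiving status can be confirmed from Server Manager:

> archive log list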














email ... js99@rocket99.com

Copyright 1998-2006 © Citisoft, Inc. All Rights Reserved.