Data Guard Physical Standby Setup in Oracle Database 11g Release 2

Data Guard is the name for Oracle's standby database solution, used for disaster recovery and high availability. This article contains an updated version of the 9i physical standby setup method posted here.
Assumptions
  • You have two servers (physical or VMs) with an operating system and Oracle installed on them. In this case I've used Oracle Linux 5.6 and Oracle Database 11.2.0.2.
  • The primary server has a running instance.
  • The standby server has a software only installation.
Primary Server Setup
Logging
Check that the primary database is in archivelog mode.
SELECT log_mode FROM v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL>
If it is in noarchivelog mode, switch it to archivelog mode.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
Enable forced logging by issuing the following command.
ALTER DATABASE FORCE LOGGING;
Initialization Parameters
Check the setting for the DB_NAME and DB_UNIQUE_NAME parameters. In this case they are both set to "DB11G" on the primary database.
SQL> show parameter db_name

NAME                                     TYPE         VALUE
------------------------------------ ----------- ------------------------------
db_name                              string         DB11G

SQL> show parameter db_unique_name

NAME                                     TYPE         VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                             string         DB11G

SQL>
The DB_NAME of the standby database will be the same as that of the primary, but it must have a different DB_UNIQUE_NAME value. The DB_UNIQUE_NAME values of the primary and standby database should be used in the DG_CONFIG setting of the LOG_ARCHIVE_CONFIG parameter. For this example, the standby database will have the value "DB11G_STBY".
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB11G,DB11G_STBY)';
Set suitable remote archive log destinations. In this case I'm using the fast recovery area for the local location, but you could specify a location explicitly if you prefer. Notice the SERVICE and the DB_UNIQUE_NAME for the remote location reference the standby location.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
The LOG_ARCHIVE_FORMAT and LOG_ARCHIVE_MAX_PROCESSES parameters must be set to appropriate values and the REMOTE_LOGIN_PASSWORDFILE must be set to exclusive.
ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' SCOPE=SPFILE;
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=30;
ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;
In addition to the previous settings, it is recommended to make sure the primary is ready to switch roles to become a standby. For that to work properly we need to set the following parameters. Adjust the *_CONVERT parameters to account for your filename and path differences between the servers.
ALTER SYSTEM SET FAL_SERVER=DB11G_STBY;
--ALTER SYSTEM SET DB_FILE_NAME_CONVERT='DB11G_STBY','DB11G' SCOPE=SPFILE;
--ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='DB11G_STBY','DB11G'  SCOPE=SPFILE;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
Remember, some of the parameters are not modifiable, so the database will need to be restarted before they take effect.
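If you would rather pick up the SCOPE=SPFILE changes straight away, a minimal restart of the primary (a sketch, assuming you can afford the brief outage) looks like this.
SHUTDOWN IMMEDIATE;
STARTUP;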
Service Setup
Entries for the primary and standby databases are needed in the "$ORACLE_HOME/network/admin/tnsnames.ora" files on both servers. You can create these using the Network Configuration Utility (netca) or manually. The following entries were used during this setup.
DB11G =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ol5-112-dga1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DB11G.WORLD)
    )
  )

DB11G_STBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ol5-112-dga2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DB11G.WORLD)
    )
  )
Backup Primary Database
If you are planning to use an active duplicate to create the standby database, then this step is unnecessary. For a backup-based duplicate, or a manual restore, take a backup of the primary database.
$ rman target=/

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
Create Standby Controlfile and PFILE
Create a controlfile for the standby database by issuing the following command on the primary database.
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/db11g_stby.ctl';
Create a parameter file for the standby database.
CREATE PFILE='/tmp/initDB11G_stby.ora' FROM SPFILE;
Amend the PFILE making the entries relevant for the standby database. I'm making a replica of the original server, so in my case I only had to amend the following parameters.
*.db_unique_name='DB11G_STBY'
*.fal_server='DB11G'
*.log_archive_dest_2='SERVICE=db11g ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G'
Standby Server Setup (Manual)
Copy Files
Create the necessary directories on the standby server.
$ mkdir -p /u01/app/oracle/oradata/DB11G
$ mkdir -p /u01/app/oracle/fast_recovery_area/DB11G
$ mkdir -p /u01/app/oracle/admin/DB11G/adump
Copy the files from the primary to the standby server.
$ # Standby controlfile to all locations.
$ scp oracle@ol5-112-dga1:/tmp/db11g_stby.ctl /u01/app/oracle/oradata/DB11G/control01.ctl
$ cp /u01/app/oracle/oradata/DB11G/control01.ctl /u01/app/oracle/fast_recovery_area/DB11G/control02.ctl

$ # Archivelogs and backups
$ scp -r oracle@ol5-112-dga1:/u01/app/oracle/fast_recovery_area/DB11G/archivelog /u01/app/oracle/fast_recovery_area/DB11G
$ scp -r oracle@ol5-112-dga1:/u01/app/oracle/fast_recovery_area/DB11G/backupset /u01/app/oracle/fast_recovery_area/DB11G

$ # Parameter file.
$ scp oracle@ol5-112-dga1:/tmp/initDB11G_stby.ora /tmp/initDB11G_stby.ora

$ # Remote login password file.
$ scp oracle@ol5-112-dga1:$ORACLE_HOME/dbs/orapwDB11G $ORACLE_HOME/dbs
Start Listener
Make sure the listener is started on the standby server.
$ lsnrctl start
Restore Backup
Create the SPFILE from the amended PFILE.
$ export ORACLE_SID=DB11G
$ sqlplus / as sysdba

SQL> CREATE SPFILE FROM PFILE='/tmp/initDB11G_stby.ora';
Restore the backup files.
$ export ORACLE_SID=DB11G
$ rman target=/

RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
Create Redo Logs
Create online redo logs for the standby. It's a good idea to match the configuration of the primary server.
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo01.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo02.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo03.log') SIZE 50M;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
In addition to the online redo logs, you should create standby redo logs on both the standby and the primary database (in case of switchovers). The standby redo logs should be at least as big as the largest online redo log and there should be one extra group per thread compared to the online redo logs. In my case, the following standby redo logs must be created on both servers.
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo01.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo02.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo03.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo04.log') SIZE 50M;
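As a quick check (not part of the original procedure), you can compare the online and standby redo log configuration on each server by querying the V$LOG and V$STANDBY_LOG views.
-- Online redo logs.
SELECT group#, thread#, bytes/1024/1024 AS size_mb FROM v$log;

-- Standby redo logs.
SELECT group#, thread#, bytes/1024/1024 AS size_mb FROM v$standby_log;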
Once this is complete, we can start the apply process.
Standby Server Setup (DUPLICATE)
Copy Files
Create the necessary directories on the standby server.
$ mkdir -p /u01/app/oracle/oradata/DB11G
$ mkdir -p /u01/app/oracle/fast_recovery_area/DB11G
$ mkdir -p /u01/app/oracle/admin/DB11G/adump
Copy the files from the primary to the standby server.
$ # Standby controlfile to all locations.
$ scp oracle@ol5-112-dga1:/tmp/db11g_stby.ctl /u01/app/oracle/oradata/DB11G/control01.ctl
$ cp /u01/app/oracle/oradata/DB11G/control01.ctl /u01/app/oracle/fast_recovery_area/DB11G/control02.ctl

$ # Parameter file.
$ scp oracle@ol5-112-dga1:/tmp/initDB11G_stby.ora /tmp/initDB11G_stby.ora

$ # Remote login password file.
$ scp oracle@ol5-112-dga1:$ORACLE_HOME/dbs/orapwDB11G $ORACLE_HOME/dbs
Start Listener
When using active duplicate, the standby server requires static listener configuration in a "listener.ora" file. In this case I used the following configuration.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DB11G.WORLD)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
      (SID_NAME = DB11G)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ol5-112-dga2.localdomain)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle
Make sure the listener is started on the standby server.
$ lsnrctl start
Create Standby Redo Logs on Primary Server
The DUPLICATE command automatically creates the standby redo logs on the standby. To make sure the primary database is configured for switchover, we must create the standby redo logs on the primary server.
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo01.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo02.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo03.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo04.log') SIZE 50M;
Create Standby Using DUPLICATE
Start the auxiliary instance on the standby server using the temporary "init.ora" file.
$ export ORACLE_SID=DB11G
$ sqlplus / as sysdba

SQL> STARTUP NOMOUNT PFILE='/tmp/initDB11G_stby.ora';
Connect to RMAN, specifying a full connect string for both the TARGET and AUXILIARY instances. Do not attempt to use OS authentication.
$ rman TARGET sys/password@DB11G AUXILIARY sys/password@DB11G_STBY
Now issue the following DUPLICATE command.
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE
    SET db_unique_name='DB11G_STBY' COMMENT 'Is standby'
    SET LOG_ARCHIVE_DEST_2='SERVICE=db11g ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G'
    SET FAL_SERVER='DB11G' COMMENT 'Is primary'
  NOFILENAMECHECK;
A brief explanation of the individual clauses is shown below.
  • FOR STANDBY: This tells DUPLICATE the copy is to be used as a standby, so it will not force a DBID change.
  • FROM ACTIVE DATABASE: The DUPLICATE will be created directly from the source datafiles, without an additional backup step.
  • DORECOVER: The DUPLICATE will include the recovery step, bringing the standby up to the current point in time.
  • SPFILE: Allows us to reset values in the spfile when it is copied from the source server.
  • NOFILENAMECHECK: Destination file locations are not checked.
Once the command is complete, we can start the apply process.
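Before doing so, you can optionally confirm the role of the newly created database by querying V$DATABASE on the standby; it should report PHYSICAL STANDBY.
SELECT name, db_unique_name, database_role FROM v$database;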
Start Apply Process
Start the apply process on standby server.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
If you need to cancel the apply process, issue the following command.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
If you prefer, you can set a delay between the arrival of the archived redo log and it being applied on the standby server using the following commands.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 30 DISCONNECT FROM SESSION;

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY DISCONNECT FROM SESSION;
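Whichever variation you use, you can confirm the managed recovery process is running by querying V$MANAGED_STANDBY on the standby. Look for an MRP0 process, typically with a status of APPLYING_LOG or WAIT_FOR_LOG.
SELECT process, status, sequence# FROM v$managed_standby;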
Test Log Transport
On the primary server, check the latest archived redo log and force a log switch.
ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

SELECT sequence#, first_time, next_time
FROM   v$archived_log
ORDER BY sequence#;

ALTER SYSTEM SWITCH LOGFILE;
Check the new archived redo log has arrived at the standby server and been applied.
ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

SELECT sequence#, first_time, next_time, applied
FROM   v$archived_log
ORDER BY sequence#;
Protection Mode
There are three protection modes for the primary database:
  • Maximum Availability: Transactions on the primary do not commit until redo information has been written to the online redo log and the standby redo logs of at least one standby location. If no standby location is available, it acts in the same manner as maximum performance mode until a standby becomes available again.
  • Maximum Performance: Transactions on the primary commit as soon as redo information has been written to the online redo log. Transfer of redo information to the standby server is asynchronous, so it does not impact on performance of the primary.
  • Maximum Protection: Transactions on the primary do not commit until redo information has been written to the online redo log and the standby redo logs of at least one standby location. If no suitable standby location is available, the primary database shuts down.
By default a newly created standby database is in maximum performance mode.
SELECT protection_mode FROM v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

SQL>
The mode can be switched using the following commands. Note the alterations in the redo transport attributes.
-- Maximum Availability.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

-- Maximum Performance.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

-- Maximum Protection.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
ALTER DATABASE OPEN;
Database Switchover
A database can be in one of two mutually exclusive modes (primary or standby). These roles can be altered at runtime without loss of data or resetting of redo logs. This process is known as a Switchover and can be performed using the following statements.
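Before issuing the statements below, it is worth checking the switchover readiness of the primary. A SWITCHOVER_STATUS of "TO STANDBY" (or "SESSIONS ACTIVE") normally indicates the primary is ready.
SELECT switchover_status FROM v$database;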
-- Convert primary database to standby
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

-- Shutdown primary database
SHUTDOWN IMMEDIATE;

-- Mount old primary database as standby database
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
On the original standby database issue the following commands.
-- Convert standby database to primary
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- Shutdown standby database
SHUTDOWN IMMEDIATE;

-- Open old standby database as primary
STARTUP;
Once this is complete, test the log transport as before. If everything is working fine, switch the primary database back to the original server by doing another switchover. This is known as a switchback.
Failover
If the primary database is not available the standby database can be activated as a primary database using the following statements.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE ACTIVATE STANDBY DATABASE;
Since the standby database is now the primary database it should be backed up immediately.
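A minimal backup, mirroring the one taken earlier on the original primary, might look like this.
$ rman target=/

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;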
The original primary database can now be configured as a standby. If Flashback Database was enabled on the primary database, then this can be done relatively easily (shown here). If not, the whole setup process must be followed, but this time using the original primary server as the standby.
Flashback Database
It was already mentioned in the previous section, but it is worth drawing your attention to Flashback Database once more. Although a switchover/switchback is safe for both the primary and standby database, a failover renders the original primary database useless for converting to a standby database. If flashback database is not enabled, the original primary must be scrapped and recreated as a standby database.
An alternative is to enable flashback database on the primary (and the standby if desired) so in the event of a failover, the primary can be flashed back to the time before the failover and quickly converted to a standby database. That process is shown here.
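As a rough sketch (assuming the fast recovery area is already configured, as it is in this build), Flashback Database can be enabled on the primary as follows. The retention target of 1440 minutes (1 day) is just an example value; on older releases the database must be mounted, rather than open, when flashback is switched on.
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440;
ALTER DATABASE FLASHBACK ON;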
Read-Only Standby and Active Data Guard
Once a standby database is configured, it can be opened in read-only mode to allow query access. This is often used to offload reporting to the standby server, thereby freeing up resources on the primary server. When open in read-only mode, archive log shipping continues, but managed recovery is stopped, so the standby database becomes increasingly out of date until managed recovery is resumed.
To switch the standby database into read-only mode, do the following.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;
To resume managed recovery, do the following.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
In 11g, Oracle introduced the Active Data Guard feature. This allows the standby database to be open in read-only mode, but still apply redo information. This means a standby can be available for querying, yet still be up to date. There are licensing implications for this feature, but the following commands show how active data guard can be enabled.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Since managed recovery continues with active data guard, there is no need to switch back to managed recovery from read-only mode in this case.
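To confirm the standby really is in the Active Data Guard state, check the OPEN_MODE in V$DATABASE; it should report "READ ONLY WITH APPLY".
SELECT open_mode FROM v$database;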
Snapshot Standby
Introduced in 11g, snapshot standby allows the standby database to be opened in read-write mode. When switched back into standby mode, all changes made whilst in read-write mode are lost. This is achieved using flashback database, but the standby database does not need to have flashback database explicitly enabled to take advantage of this feature, though it works just the same if it is.
If you are using RAC, turn off all but one of the RAC instances. Make sure the instance is in MOUNT mode.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
Make sure managed recovery is disabled.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Convert the standby to a snapshot standby. The following example queries the V$DATABASE view to show that flashback database is not enabled prior to the conversion operation.
SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
NO

ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;
SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
RESTORE POINT ONLY

SQL>
You can now treat the standby like any read-write database.
To convert it back to the physical standby, losing all the changes made since the conversion to snapshot standby, issue the following commands.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
NO

SQL>
The standby is once again in managed recovery and archivelog shipping is resumed. Notice that flashback database is still not enabled.

Oracle Database File System (DBFS) in Oracle Database 11g Release 2

    Oracle has quite a long history with database file systems. The Oracle Internet File System (iFS) was released in the Oracle 8i days. This product was later renamed to Oracle Content Management SDK. The introduction of XML DB in Oracle 9i Release 2 brought with it a database file system accessible from HTTP, FTP and WebDAV for the storage of XML files. Now Oracle 11g Release 2 introduces DBFS, the Oracle Database File System.
    DBFS creates a file system interface on top of database tables that store files as SecureFile LOBs. External access to the file system is via a client program (dbfs_client), which is only available for Linux and Solaris platforms. The client is already installed on the database server, but can also be installed on client machines, giving them access to the centralized database file system. On Linux platforms the dbfs_client can be used to mount the database file system on a regular mount point. This is done using the "Filesystem in Userspace" (FUSE) project. This allows Linux machines to access DBFS like any other physical file system.
    In this article I'll show the steps necessary to mount the DBFS on a Linux server. I'm not going to discuss the DBFS Content API, DBFS SecureFile Store or DBFS Hierarchical Store directly, although some of these are called by scripts used in this article.
    Creating a File System
    Create a tablespace to hold the file system.
    CONN / AS SYSDBA

    CREATE TABLESPACE dbfs_ts
      DATAFILE '/u01/app/oracle/oradata/DB11G/dbfs01.dbf'
      SIZE 1M AUTOEXTEND ON NEXT 1M;
    Create a user, grant DBFS_ROLE to the user and make sure it has a quota on the tablespace. Trying to create a file system from the SYS user fails, so it must be done via another user.
    CONN / AS SYSDBA

    CREATE USER dbfs_user IDENTIFIED BY dbfs_user
      DEFAULT TABLESPACE dbfs_ts QUOTA UNLIMITED ON dbfs_ts;

    GRANT CREATE SESSION, RESOURCE, CREATE VIEW, DBFS_ROLE TO dbfs_user;
    Create the file system in the tablespace by running the "dbfs_create_filesystem.sql" script as the dbfs_user user. The script accepts two parameters identifying the tablespace and file system name.
    cd $ORACLE_HOME/rdbms/admin
    sqlplus dbfs_user/dbfs_user

    SQL> @dbfs_create_filesystem.sql dbfs_ts staging_area
    The script created a partitioned file system. Although Oracle consider this the best option from a performance and scalability perspective, it can have two drawbacks:
    • Space cannot be shared between partitions. If the size of the files is small compared to the total file system size this is not a problem, but if individual files form a large proportion of the total file system size, then ENOSPC errors may be produced.
    • File rename operations may require the file to be rewritten, which can be problematic for large files.
    If these issues present a problem to you, you can create non-partitioned file systems using the "dbfs_create_filesystem_advanced.sql" script. In fact, the "dbfs_create_filesystem_advanced.sql" script is called by the "dbfs_create_filesystem.sql" script, which defaults many of the advanced parameters.
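    If you want to confirm how a particular file system was built, one simple (if indirect) check is to see whether its underlying store table in the owning schema is partitioned. The table name is derived from the file system name, so the query below is just a sketch run as the owning user.
    SELECT table_name, partitioned FROM user_tables;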
    If we later wish to drop a file system, this can be done using the "dbfs_drop_filesystem.sql" script with the file system name.
    cd $ORACLE_HOME/rdbms/admin
    sqlplus dbfs_user/dbfs_user

    SQL> @dbfs_drop_filesystem.sql staging_area
    FUSE Installation
    In order to mount the DBFS we need to install the "Filesystem in Userspace" (FUSE) software. If you are not planning to mount the DBFS or you are running on a non-Linux platform, this section is unnecessary. The FUSE software can be installed manually, from the OEL media or via Oracle's public yum server. If possible, use the Yum installation.
    Yum FUSE Installation
    Configure the server to point to Oracle's public yum repository. The instructions for this are available at "http://public-yum.oracle.com".
    Next, install the kernel development package. It may already be present, in which case you will see a "Nothing to do" message.
    # yum install kernel-devel
    Finally, install the FUSE software.
    # yum install fuse fuse-libs
    Media FUSE Installation
    If you can't use the Yum installation method you can install FUSE from your OEL media.
    First, check to see if the "kernel-devel" package is installed.
    # rpm -q kernel-devel
    kernel-devel-2.6.18-128.el5
    #
    If not, then install it from your media.
    # cd /media/cdrom/Server
    # rpm -Uvh kernel-devel*
    Then install the FUSE software from your media.
    # cd /media/cdrom/Server
    # rpm -Uvh fuse-2* fuse-libs-2*
    Manual FUSE Installation
    If you can't perform a Yum or media installation of FUSE, you can always install it manually. This assumes you have installed the kernel development package using one of the previous methods.
    Download the FUSE 2.7.3 package from http://fuse.sourceforge.net/.
    Determine the kernel directory.
    # echo /usr/src/kernels/`uname -r`-`uname -p`
    /usr/src/kernels/2.6.18-128.el5-x86_64
    #
    Install the FUSE package as the "root" user using the following commands, substituting your kernel directory.
    # tar -xzvf fuse-2.7.3.tar.gz
    # cd fuse-2.7.3
    # ./configure --prefix=/usr --with-kernel=/usr/src/kernels/2.6.18-128.el5-x86_64
    # make
    # make install
    # /sbin/depmod
    # /sbin/modprobe fuse
    # chmod 666 /dev/fuse
    # echo "/sbin/modprobe fuse" >> /etc/rc.modules
    # chmod 700 /etc/rc.modules
    Mounting a File System
    The dbfs_client tool is used to mount file systems on Linux servers. The usage is displayed if you call it without any parameters.
    $ dbfs_client
    usage: dbfs_client <db_user>@<db_server> [options] <mountpoint>
      db_user:              Name of Database user that owns DBFS content repository filesystem(s)
      db_server:            A valid connect string for Oracle database server
                            (for example, hrdb_host:1521/hrservice)
      mountpoint:           Path to mount Database File System(s)
                            All the file systems owned by the database user will be seen at the mountpoint.
    DBFS options:
      -o direct_io          Bypass the Linux page cache. Gives much better performance for large files.
                            Programs in the file system cannot be executed with this option.
                            This option is recommended when DBFS is used as an ETL staging area.
      -o wallet             Run dbfs_client in background.
                            Wallet must be configured to get credentials.
      -o failover           dbfs_client fails over to surviving database instance with no data loss.
                            Some performance cost on writes, especially for small files.
      -o allow_root         Allows root access to the filesystem.
                            This option requires setting 'user_allow_other' parameter in '/etc/fuse.conf'.
      -o allow_other        Allows other users access to the file system.
                            This option requires setting 'user_allow_other' parameter in '/etc/fuse.conf'.
      -o rw                 Mount the filesystem read-write. [Default]
      -o ro                 Mount the filesystem read-only. Files cannot be modified.
      -o trace_file=STR     Tracing <filename> | 'syslog'
      -o trace_level=N      Trace Level: 1->DEBUG, 2->INFO, 3->WARNING, 4->ERROR, 5->CRITICAL [Default: 4]
      -h                    help
      -V                    version
    [oracle@oel5-11gr2 admin]$
    First we need to create a mount point with the necessary privileges as the "root" user.
    # mkdir /mnt/dbfs
    # chown oracle:oinstall /mnt/dbfs
    Next, add a new library path.
    # echo "/usr/local/lib" >> /etc/ld.so.conf.d/usr_local_lib.conf
    Create symbolic links to the necessary libraries in the directory pointed to by the new library path. Note: Depending on your installation, the "libfuse.so.2" library may be in an alternative location.
    # export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
    # ln -s $ORACLE_HOME/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1
    # ln -s $ORACLE_HOME/lib/libnnz11.so /usr/local/lib/libnnz11.so
    # ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2
    Issue the following command.
    # ldconfig
    The file system we've just created is mounted with one of the following commands from the "oracle" OS user.
    $ # Connection prompts for password and holds session.
    $ dbfs_client dbfs_user@DB11G /mnt/dbfs

    $ # Connection retrieves password from file and releases session.
    $ nohup dbfs_client dbfs_user@DB11G /mnt/dbfs  < passwordfile.f &

    $ # Connection authenticates using wallet and releases session.
    $ nohup dbfs_client -o wallet /@DB11G_DBFS_USER /mnt/dbfs &
    The wallet authentication is the safest method as the others potentially expose the credentials. Creation of a wallet is discussed later, but it is part of the Advanced Security option.
    Once mounted, the "staging_area" file system is now available for use.
    # ls -al /mnt/dbfs
    total 8
    drwxr-xr-x 3 root root    0 Jan  6 17:02 .
    drwxr-xr-x 3 root root 4096 Jan  6 14:18 ..
    drwxrwxrwx 3 root root    0 Jan  6 16:37 staging_area
    # ls -al /mnt/dbfs/staging_area
    total 0
    drwxrwxrwx 3 root   root     0 Jan  6 16:37 .
    drwxr-xr-x 3 root   root     0 Jan  6 17:02 ..
    drwxr-xr-x 7 root   root     0 Jan  6 14:00 .sfs
    #
    To unmount the file system issue the following command from the "root" OS user.
    # fusermount -u /mnt/dbfs
    Wallet Creation
    To create a wallet issue the following commands as the "oracle" OS user.
    $ mkdir -p $HOME/oracle/wallet
    $ $ORACLE_HOME/bin/mkstore -wrl $HOME/oracle/wallet -create
    Add the following lines to the "$ORACLE_HOME/network/admin/sqlnet.ora" file.
    WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = $HOME/oracle/wallet) ) )
    SQLNET.WALLET_OVERRIDE = TRUE
    Add the credentials to the wallet for a specific connect string. This could be the existing connect string (DB11G in my case), but in this example I've used something a little more specific (DB11G_DBFS_USER).
    $ mkstore -wrl $HOME/oracle/wallet -createCredential DB11G_DBFS_USER dbfs_user dbfs_user
    Make sure the connect string is present in the "$ORACLE_HOME/network/admin/tnsnames.ora" file. In this case I just copied my existing connect string and gave it the new name.
    DB11G_DBFS_USER =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = oel5-11gr2.localdomain)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = DB11G.WORLD)
        )
      )
    The file system can now be mounted using the wallet.
    $ nohup dbfs_client -o wallet /@DB11G_DBFS_USER /mnt/dbfs &
    Using /etc/fstab
    Apart from having a familiar syntax, the use of the "/etc/fstab" file for mounting seems a little pointless since FUSE does not support automount, so you will still have to manually mount the file systems after a reboot. Actually this is fortunate, as the automount would probably happen before the database was up, which would of course be an issue.
    Note: When this article was first written, the documentation for this step was not complete and the method proposed by Oracle did not work. The documentation has since been altered, but unfortunately the latest proposed method seems to work only when mounted from the "root" user (not recommended), rather than the "oracle" user. As a result I have switched to using the method proposed by Frits Hoogland in the article here. It works fine from the "oracle" user, but I can't get it to work with the wallet. Personally, I prefer to use the mount commands listed in the Mounting a File System section.
    To allow mount operations from the "/etc/fstab", issue the following commands as the "root" user.
    # groupadd fuse
    # usermod -a -G fuse oracle
    Add the following line into the "/etc/fstab" file.
    /sbin/mount.dbfs#dbfs_user@DB11G_DBFS_USER /mnt/dbfs fuse rw,user,noauto,direct_io 0 0
    Create the "/sbin/mount.dbfs" file with the following contents.
    #!/bin/bash
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
    nohup $ORACLE_HOME/bin/dbfs_client $@ << FEEDPWD &
    dbfs_user
    FEEDPWD
    Set the permissions using the following commands.
    # chmod 750 /sbin/mount.dbfs
    # chgrp fuse /sbin/mount.dbfs
    The filesystem should now be mounted using the following command from the "oracle" OS user.
    $ mount /mnt/dbfs
    As before, the filesystem is unmounted using the following command.
    $ fusermount -u /mnt/dbfs
    dbfs_client Command Interface
    As only Linux clients will be able to mount the file system, the majority of clients will access the file system via the dbfs_client command line, which provides a variety of UNIX-like file system commands using the following syntax.
    dbfs_client <db_user>@<db_server> --command <command> [switches] [arguments]
    Commands include some typical UNIX file system commands like ls, cp, rm, mkdir. Arguments are typically file or directory names. The available switches are -a (all files), -l (long format) and -R (recursive).
    Paths within the DBFS are referenced using the "dbfs:/file-system-name/" prefix. If this is not present, the path is assumed to be on the local filesystem of the machine running the client software.
    Here are some examples of commands using the dbfs_client with the file system and wallet created earlier.
    $ # Directory listing.
    $ dbfs_client /@DB11G_DBFS_USER --command ls -a -l dbfs:/staging_area/
    drwxr-xr-x              root            root               0    Jan 06 19:32    .sfs
    $

    $ # Make a directory and get directory listing.
    $ dbfs_client /@DB11G_DBFS_USER --command mkdir dbfs:/staging_area/test_dir
    $
    $ dbfs_client /@DB11G_DBFS_USER --command ls -a -l dbfs:/staging_area/
    drwx------            oracle        oinstall               0    Jan 07 11:41    dbfs:/staging_area/test_dir
    drwxr-xr-x              root            root               0    Jan 06 19:32    .sfs
    $

    $ # Copy files to and from DBFS
    $ dbfs_client /@DB11G_DBFS_USER --command cp /tmp/test.txt dbfs:/staging_area/test_dir/
    /tmp/test.txt -> dbfs:/staging_area/test_dir/test.txt
    $
    $ dbfs_client /@DB11G_DBFS_USER --command cp dbfs:/staging_area/test_dir/test.txt /tmp/test_copy.txt
    dbfs:/staging_area/test_dir/test.txt -> /tmp/test_copy.txt
    $

    $ # Remove file
    $ dbfs_client /@DB11G_DBFS_USER --command rm dbfs:/staging_area/test_dir/test.txt
    Unlinking file dbfs:/staging_area/test_dir/test.txt
    $
    The command list is quite limited at the moment, but the message returned from unsupported commands suggests more are on their way.
    $ dbfs_client /@DB11G_DBFS_USER --command rmdir dbfs:/staging_area/test_dir
    rmdir is not supported right now
    $
    Switches must be applied separately (-a -l), not grouped (-al), or the results are unpredictable. Also, wildcards do not work.

Configuring DBFS on Oracle Database Machine (Doc ID 1054431.1)

APPLIES TO:
Oracle Exadata Hardware - Version 11.2.0.1 and later
Oracle Database - Enterprise Edition - Version 11.2.0.0 and later
SPARC SuperCluster T4-4 - Version All Versions and later
Information in this document applies to any platform.
GOAL
This document describes the steps needed to configure Oracle Database Filesystem (DBFS) on Oracle Database Machine. For platforms other than Oracle Database Machine, additional preparation steps may be required. The steps in this document apply to Oracle Database Machines running 11.2 software.
FIX
Note: This procedure applies to Linux and Solaris 11 database servers and has been verified on both Exadata Database Machine and Sun SuperCluster.

Configuring DBFS on Oracle Database Machine
This article guides the configuration of Oracle Database Filesystem (DBFS) on Oracle Database Machine. Most of the steps below are one-time setup steps, except where noted otherwise. On platforms other than Oracle Database Machine, additional setup steps may be required to install the required fuse RPM packages which are installed by default on Oracle Database Machine database servers. 

Those using DBFS should review Note 1150157.1 for recommended patches.
For nodes running Solaris, the following items should be reviewed before following the steps in this note.
  • Solaris 11 SRU 7 (Patch 14050126) or higher is required for Solaris to support DBFS mounting with fuse. Solaris 11 Express hosts must be upgraded to Solaris 11 (see note below).
  • Review Note 1021281.1 to learn about Solaris support repositories which may be needed to apply SRU updates to Solaris machines as prerequisites to configuring DBFS on Solaris database servers. 
  • For systems running Solaris 11 Express, follow MOS note 1431284.1 to upgrade to Solaris 11. After upgrading to Solaris 11, apply Solaris 11 SRU 7 or later.
Note: All references to ORACLE_HOME in this procedure are to the RDBMS ORACLE_HOME directory (usually /u01/app/oracle/product/11.2.0/dbhome_1) except where specifically noted. All references to GI_HOME should be replaced with the ORACLE_HOME directory for the Grid Infrastructure (GI). 

By convention, the dollar sign ($) prompt signifies a command run as the oracle user (or Oracle software owner account) and the hash (#) prompt signifies a command that is run as root. This is further clarified by prefixing the $ or # with (oracle)$ or (root)#.
  1. If running Solaris 11, the system must be running Solaris 11 SRU 07 or later. Additionally, the libfuse package must be installed. Presence of the libfuse package can be verified with "pkg list libfuse" (should return one line).
    • To verify the SRU currently on the system, as root run: "pkg info entire | grep SRU" and you'll see a reference to the SRU in the output. The delivered SRU version based on the Exadata release may be found in 888828.1. If the system is running SRU 06 or earlier, it will require an update before installing the libfuse package. If the system is running SRU 07 or later, skip to the next step to install libfuse.
    • After reviewing note 1021281.1 to configure repository access, run: pkg update
    • The system will apply the latest package updates, create a new boot environment and set it as the default. To confirm, run: beadm list. You should see an "R" shown next to the boot environment that will be active upon reboot. The "N" will show the boot environment that is active now. At this stage, these two letters should be on different lines until you reboot the system.
    • Reboot the server to have it boot to the updated SRU environment.
  2. If running Solaris 11, ensure that the libfuse package is installed by running "pkg info libfuse" at the prompt. If no rows are returned, or an error is returned, then follow the steps below to install libfuse.
    • After reviewing note 1021281.1 to configure repository access, run this command to install libfuse: pkg install libfuse
    • Confirm that it installed by running: pkg verify libfuse
    • The pkg verify command should have no output if successful.
  3. In the procedures listed in this note, both Solaris and Linux database servers are assumed to have user equivalence for root and the DBFS repository database (typically "oracle") users. Each of those users is assumed to have a dbs_group file in their $HOME directory that contains a list of cluster hostnames. The dcli utility is assumed to be available on both Solaris and Linux database nodes.
  4. When non-root commands are shown, it is assumed that proper environment variables for ORACLE_SID and ORACLE_HOME have been set and that the PATH has been modified to include $ORACLE_HOME/bin. These things may be done automatically by the oraenv script on Linux or Solaris systems.
  5. For Linux database servers, there are several steps to perform as root. Solaris database servers do not require this step and can skip it. First, add the oracle user to the fuse group on Linux. Run these commands as the root user.
    (root)# dcli -g ~/dbs_group -l root usermod -a -G fuse oracle
    Create the /etc/fuse.conf file with the user_allow_other option. Ensure proper privileges are applied to this file.
    (root)# dcli -g ~/dbs_group -l root "echo user_allow_other > /etc/fuse.conf"
    (root)# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf
  6. For Solaris database servers, to enable easier debugging and troubleshooting, it is suggested to add a line to the /etc/user_attr file to give the oracle user the ability to mount filesystems directly. As root, run this on a database server:
    (root)# dcli -g ~/dbs_group -l root "echo 'oracle::::type=normal;project=group.oinstall;defaultpriv=basic,priv_sys_mount' >> /etc/user_attr"
    After running this, logout of your oracle session and login again to enable the additional privileges.
  7. For all database servers, create an empty directory that will be used as the mount point for the DBFS filesystem.
    (root)# dcli -g ~/dbs_group -l root mkdir /dbfs_direct
    Change ownership on the mount point directory so oracle can access it.
    (root)# dcli -g ~/dbs_group -l root chown oracle:dba /dbfs_direct
  8. For Solaris database hosts, it is required to employ a workaround for bug 12832052. This requires an edit to the <GI_HOME>/bin/ohasd script to be made on each database server locally. First, make a copy of the current ohasd script as root:

    (root)# dcli -g ~/dbs_group -l root cp -p /u01/app/11.2.0/grid/bin/ohasd /u01/app/11.2.0/grid/bin/ohasd.pre12832052
    Then edit the script locally on each node (do not copy the file from one node to another) and change the original lines (at around line 231) from this:

    $LOGMSG "exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE  $ORASYM \"$@\""
    exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM "$@"

    Add a line before the existing ones (the line starting with ppriv) so that the resulting section looks like this:

    ppriv -s I+sys_mount $$
    $LOGMSG "exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM \"$@\""
    exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM "$@"

    Note that this workaround will be required after each bundle patch installation on the GI_HOME until bug 12832052 is fixed and included in the bundle patch.
  9. To pick up the additional group (fuse) membership for the oracle user on Linux or the workaround above on Solaris, Clusterware must be restarted. For example, to restart Clusterware on all nodes at the same time (non-rolling), you can use the following commands as root:

    (root)# dcli -g ~/dbs_group -l root /u01/app/11.2.0/grid/bin/crsctl stop crs
    (root)# dcli -g ~/dbs_group -l root /u01/app/11.2.0/grid/bin/crsctl start crs

    Note that the "crsctl stop cluster -all" syntax may not be used as it leaves ohasd running and Solaris database hosts require it to be restarted for the workaround to take effect.
  10. Create a database to hold the DBFS repository. Follow Note 1191144.1 to create the DBFS repository database.
  11. As the RDBMS software owner, create the DBFS repository inside the repository database. To create the repository, create a new tablespace to hold the DBFS objects and a database user that will own the objects.
    • Use sqlplus to login to the DBFS repository database as a DBA user (i.e. SYS or SYSTEM).
    • In the following create tablespace statement, use any disk group (this example shows DBFS_DG, but any diskgroup with sufficient space available can be used) and size appropriately for the intended initial capacity. Autoextend can be used, as long as the initial size can accommodate the repository without requiring autoextension. The following example statements create a tablespace of 32GB with autoextend on, allocating an additional 8GB to the tablespace as needed. You should size your tablespace according to your expected DBFS utilization. A bigfile tablespace is used in this example for convenience, but smallfile tablespaces may be used as well.

      SQL> create bigfile tablespace dbfsts datafile '+DBFS_DG' size 32g autoextend on next 8g maxsize 300g NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE  SEGMENT SPACE MANAGEMENT AUTO ;

      SQL> create user dbfs_user identified by dbfs_passwd default tablespace dbfsts quota unlimited on dbfsts;

      SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;
  12. With the user created and privileges granted, create the database objects that will hold DBFS.

    (oracle)$ cd $ORACLE_HOME/rdbms/admin

    (oracle)$ sqlplus dbfs_user/dbfs_passwd

    SQL> start dbfs_create_filesystem dbfsts FS1
    This script takes two arguments: the tablespace name (dbfsts) and the file system name (FS1).
  13. Perform the one-time setup steps for mounting the filesystem. The mount-dbfs.sh script attached to this note provides the logic and necessary scripting to mount DBFS as a cluster resource. The one-time setup steps required for each of the two mount methods (dbfs_client or mount) are outlined below. There are two options for mounting the DBFS filesystem and each will result in the filesystem being available at /dbfs_direct. Choose one of the two options.
    1. The first option is to utilize the dbfs_client command directly, without using an Oracle Wallet. There are no additional setup steps required to use this option.
    2. The second option is to use the Oracle Wallet to store the password and make use of the mount command. The wallet directory ($HOME/dbfs/wallet in the example here) may be any oracle-writable directory (creating a new, empty directory is recommended). All commands in this section should be run by the oracle user unless otherwise noted.
      1. On Linux DB nodes, set the library path on all nodes using the commands that follow (substitute proper RDBMS ORACLE_HOMEs):
        (root)# dcli -g dbs_group -l root mkdir -p /usr/local/lib
        (root)# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0/dbhome_1/lib/libnnz11.so /usr/local/lib/libnnz11.so
        (root)# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1
        (root)# dcli -g dbs_group -l root ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2
        (root)# dcli -g dbs_group -l root 'echo /usr/local/lib >> /etc/ld.so.conf.d/usr_local_lib.conf'
        (root)# dcli -g dbs_group -l root ldconfig
      2. Create a new TNS_ADMIN directory ($HOME/dbfs/tnsadmin) for exclusive use by the DBFS mount script.

        (oracle)$ dcli -g dbs_group -l oracle mkdir -p $HOME/dbfs/tnsadmin
      3. Create the $HOME/dbfs/tnsadmin/tnsnames.ora file with the following contents on the first node. This example presumes that the name of the DBFS repository database is fsdb and the instance on the first node is named fsdb1. If your RDBMS ORACLE_HOME is not /u01/app/oracle/product/11.2.0/dbhome_1, then change the PROGRAM and ENVS settings accordingly.

        fsdb.local =
          (DESCRIPTION =
              (ADDRESS =
                (PROTOCOL=BEQ)
                (PROGRAM=/u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle)
                (ARGV0=oraclefsdb1)
                (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
                (ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1,ORACLE_SID=fsdb1')
              )
          (CONNECT_DATA=(SID=fsdb1))
        )
      4. On other nodes, create similar entries (all using the name "fsdb.local") and change all occurrences of fsdb1 to the appropriate instance name to match the instance name running on the node where that tnsnames.ora file resides. The tnsnames.ora file on each node will be slightly different so that each tnsnames.ora file references the instance running locally on that node.
      5. On each node, create the $HOME/dbfs/tnsadmin/sqlnet.ora file with the same contents on each node after making the proper substitution for <HOMEDIR_PATH_HERE>:

        WALLET_LOCATION =
          (SOURCE=(METHOD=FILE)
                  (METHOD_DATA=(DIRECTORY=<HOMEDIR_PATH_HERE>/dbfs/wallet))
          )
        SQLNET.WALLET_OVERRIDE = TRUE


        Ensure you substitute the correct path for the DIRECTORY attribute. You may not use variables in this path - it must be a literal full path.

        Copy the file to all nodes using dcli:
        (oracle)$ dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/tnsadmin -f $HOME/dbfs/tnsadmin/sqlnet.ora
      6. Create a wallet directory on one database server as the oracle user. For example:

        (oracle)$ mkdir -p $HOME/dbfs/wallet
      7. Create an empty auto-login wallet:

        (oracle)$ mkstore -wrl $HOME/dbfs/wallet -create
      8. Add the necessary credentials to the wallet. The credentials can be specific for the connect string used as shown here:

        (oracle)$ mkstore -wrl $HOME/dbfs/wallet -createCredential fsdb.local dbfs_user dbfs_passwd
      9. Copy the wallet files to all database nodes.

        (oracle)$ dcli -g ~/dbs_group -l oracle mkdir -p $HOME/dbfs/wallet
        (oracle)$ dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/wallet -f $HOME/dbfs/wallet/ewallet.p12
        (oracle)$ dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/wallet -f $HOME/dbfs/wallet/cwallet.sso
      10. Ensure that the TNS entry specified above (fsdb.local in the example) is defined and working properly (checking with "TNS_ADMIN=/home/oracle/dbfs/tnsadmin tnsping fsdb.local" is a good test).

        (oracle)$ dcli -g ~/dbs_group -l oracle "export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1; TNS_ADMIN=$HOME/dbfs/tnsadmin /u01/app/oracle/product/11.2.0/dbhome_1/bin/tnsping fsdb.local | grep OK"
        dm01db01: OK (20 msec)
        dm01db02: OK (30 msec)
  14. Download the mount-dbfs.sh script attached to this note and place it on one database server in a temporary location (like /tmp/mount-dbfs.sh). To ensure that the file transfer didn't modify the script contents, run dos2unix against it on the database server:
    For Linux, run this:
    (root)# dos2unix /tmp/mount-dbfs.sh
    For Solaris: run these:
    (root)# dos2unix /tmp/mount-dbfs.sh /tmp/mount-dbfs.sh.new
    (root)# mv /tmp/mount-dbfs.sh.new /tmp/mount-dbfs.sh
  15. Edit the variable settings at the top of the script for your environment. Edit or confirm the settings for the following variables in the script. Comments in the script will help you to confirm the values for these variables.
    • DBNAME
    • MOUNT_POINT
    • DBFS_USER
    • ORACLE_HOME (should be the RDBMS ORACLE_HOME directory)
    • LOGGER_FACILITY (used by syslog to log the messages/output from this script)
    • MOUNT_OPTIONS
    • DBFS_PASSWD (used only if WALLET=false)
    • DBFS_PWDFILE_BASE (used only if WALLET=false)
    • WALLET (must be true or false)
    • TNS_ADMIN (used only if WALLET=true)
    • DBFS_LOCAL_TNSALIAS
  16. After editing, copy the script (rename it if desired or needed) to the proper directory (<GI_HOME>/crs/script) on database nodes and set proper permissions on it, as the root user:

    (root)# dcli -g ~/dbs_group -l root -d /u01/app/11.2.0/grid/crs/script -f /tmp/mount-dbfs.sh
    (root)# dcli -g ~/dbs_group -l root chown oracle:dba /u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
    (root)# dcli -g ~/dbs_group -l root chmod 750 /u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
  17. With the appropriate preparation steps for one of the two mount methods complete, the Clusterware resource for DBFS mounting can now be registered. Register the Clusterware resource by executing the following as the RDBMS owner of the DBFS repository database (typically the "oracle" user). The ORACLE_HOME and DBNAME should reference your Grid Infrastructure ORACLE_HOME directory and your DBFS repository database name, respectively. If mounting multiple filesystems, you may also need to modify the ACTION_SCRIPT and RESNAME. For more information, see the section below regarding Creating and Mounting Multiple DBFS Filesystems. Create this short script and run it as the RDBMS owner (typically "oracle") on only one database server in your cluster.

    ##### start script add-dbfs-resource.sh
    #!/bin/bash
    ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
    RESNAME=dbfs_mount
    DBNAME=fsdb
    DBNAMEL=`echo $DBNAME | tr A-Z a-z`
    ORACLE_HOME=/u01/app/11.2.0/grid
    PATH=$ORACLE_HOME/bin:$PATH
    export PATH ORACLE_HOME
    crsctl add resource $RESNAME \
      -type local_resource \
      -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
             CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
             START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
             STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
             SCRIPT_TIMEOUT=300"
    ##### end script add-dbfs-resource.sh
    Then run this as the Grid Infrastructure owner (typically oracle) on one database server only:
    (oracle)$ sh ./add-dbfs-resource.sh

    When successful, this command has no output.

    It is not necessary to restart the database resource at this point, however, you should review the following note regarding restarting the database now that the dependencies have been added.

    Note: After creating the $RESNAME resource, in order to stop the $DBNAME database when the $RESNAME resource is ONLINE, you will have to specify the force flag when using srvctl. For example: "srvctl stop database -d fsdb -f". If you do not specify the -f flag, you will receive an error like this:

    (oracle)$ srvctl stop database -d fsdb
    PRCD-1124 : Failed to stop database fsdb and its services
    PRCR-1065 : Failed to stop resource (((((NAME STARTS_WITH ora.fsdb.) && (NAME ENDS_WITH .svc)) && (TYPE == ora.service.type)) && ((STATE != OFFLINE) || (TARGET != OFFLINE))) || (((NAME == ora.fsdb.db) && (TYPE == ora.database.type)) && (STATE != OFFLINE)))
    CRS-2529: Unable to act on 'ora.fsdb.db' because that would require stopping or relocating 'dbfs_mount', but the force option was not specified

    Using the -f flag allows a successful shutdown and results in no output.

    Also note that once the $RESNAME resource is started and then the database it depends on is shut down as shown above (with the -f flag), the database will remain down. However, if Clusterware is then stopped and started, because the $RESNAME resource still has a target state of ONLINE, it will cause the database to be started automatically when normally it would have remained down. To remedy this, ensure that $RESNAME is taken offline (crsctl stop resource $RESNAME) at the same time the DBFS database is shut down.
Managing DBFS mounting via Oracle Clusterware
  1. After the resource is created, you should be able to see the dbfs_mount resource by running crsctl stat res dbfs_mount and it should show OFFLINE on all nodes. For example:

    (oracle)$ <GI_HOME>/bin/crsctl stat res dbfs_mount -t
    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS     
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    dbfs_mount
                   OFFLINE OFFLINE      dscbac05                                   
                   OFFLINE OFFLINE      dscbac06                                    
  2. To bring dbfs_mount online, which will mount the filesystem on all nodes, run crsctl start resource dbfs_mount from any cluster node. For example:

    (oracle)$ <GI_HOME>/bin/crsctl start resource dbfs_mount
    CRS-2672: Attempting to start 'dbfs_mount' on 'dscbac05'
    CRS-2672: Attempting to start 'dbfs_mount' on 'dscbac06'
    CRS-2676: Start of 'dbfs_mount' on 'dscbac06' succeeded
    CRS-2676: Start of 'dbfs_mount' on 'dscbac05' succeeded
    (oracle)$ <GI_HOME>/bin/crsctl stat res dbfs_mount -t
    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS     
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    dbfs_mount
                   ONLINE  ONLINE       dscbac05                                   
                   ONLINE  ONLINE       dscbac06                                    
  3. Once the dbfs_mount Clusterware resource is online, you should be able to observe the mount point with df -h on each node. Also, the default startup for this resource is "restore" which means that if it is online before Clusterware is stopped, it will attempt to come online after Clusterware is restarted. For example:

    (oracle)$ df -h /dbfs_direct
    Filesystem            Size  Used Avail Use% Mounted on
    dbfs                  1.5M   40K  1.4M   3% /dbfs_direct
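
    The "restore" startup behavior can be confirmed by inspecting the resource profile, for example (a sketch, assuming the resource is named dbfs_mount and was registered with the default settings):

    (oracle)$ <GI_HOME>/bin/crsctl stat res dbfs_mount -p | grep AUTO_START
    AUTO_START=restore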
  4. To unmount DBFS on all nodes, run this as the oracle user:
    (oracle)$ <GI_HOME>/bin/crsctl stop res dbfs_mount
    Now that the dependencies have been added between the dbfs_mount resource and the DBFS repository database resource, stopping the repository database while dbfs_mount is ONLINE requires the srvctl -f flag, as described in the note earlier in this document.
Steps to Perform If Grid Home or Database Home Changes
There are several cases where the ORACLE_HOMEs used in the management or mounting of DBFS may change. The most common case is performing an out-of-place upgrade or out-of-place patching by cloning an ORACLE_HOME. When the Grid Infrastructure ORACLE_HOME or the RDBMS ORACLE_HOME changes, a few changes are required. The items that require changing are:
  • Modifications to the mount-dbfs.sh script. This is also a good time to consider updating to the latest version of the script attached to this note.
  • If using the wallet-based mount on Linux hosts, the shared libraries must be reset.
For example, if the new RDBMS ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1 *AND* the wallet-based mounting method using /etc/fstab is chosen, then the following commands will be required as the root user. If the default method (using dbfs_client directly) is used, these steps may be skipped.
  1. (root)# dcli -l root -g ~/dbs_group rm -f /usr/local/lib/libnnz11.so /usr/local/lib/libclntsh.so.11.1
  2. (root)# dcli -l root -g ~/dbs_group "cd /usr/local/lib; ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/lib/libnnz11.so"
  3. (root)# dcli -l root -g ~/dbs_group "cd /usr/local/lib; ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/lib/libclntsh.so.11.1"
  4. (root)# dcli -l root -g ~/dbs_group ldconfig
  5. (root)# dcli -l root -g ~/dbs_group rm -f /sbin/mount.dbfs ### remove this, new deployments don't use it any longer
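
As an optional check (not part of the original procedure), you can confirm that the dynamic linker now resolves the client libraries from the new home, assuming the same dcli group file ~/dbs_group used above:

    (root)# dcli -l root -g ~/dbs_group "ldconfig -p | grep -E 'libclntsh|libnnz11'"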
For all deployments, the mount-dbfs.sh script must be located in the new Grid Infrastructure ORACLE_HOME (<GI_HOME>/crs/script/mount-dbfs.sh). When the ORACLE_HOMEs change, the latest mount-dbfs.sh script should be downloaded from this note's attachments and deployed using steps 14-16 detailed earlier in this note. Since the custom resource is already registered, it does not need to be registered again.

With the new script deployed into the correct location in the new ORACLE_HOME, the next step is to modify the cluster resource to change the location of the mount-dbfs.sh script. Also, if not already configured, take the opportunity to set RESTART_ATTEMPTS=10. Use these commands, which can be run from any cluster node (replace <NEW_GI_HOME> with the full path as appropriate):
  1. (oracle)$ crsctl modify resource dbfs_mount -attr "ACTION_SCRIPT=<NEW_GI_HOME>/crs/script/mount-dbfs.sh"
  2. (oracle)$ crsctl modify resource dbfs_mount -attr "RESTART_ATTEMPTS=10"
After these changes are complete, verify that the resource is still online. This concludes the changes required when the ORACLE_HOMEs change.
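
For example, a quick verification (a sketch, assuming the resource name dbfs_mount) might be:

    (oracle)$ <NEW_GI_HOME>/bin/crsctl stat res dbfs_mount -t
    (oracle)$ <NEW_GI_HOME>/bin/crsctl stat res dbfs_mount -p | grep -E 'ACTION_SCRIPT|RESTART_ATTEMPTS'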
Removing DBFS configuration
The steps in this section deconfigure only the components that were configured by the procedure above.
  1. Stop the dbfs_mount service in clusterware using the oracle account.

    (oracle)$ <GI_HOME>/bin/crsctl stop resource dbfs_mount
    CRS-2673: Attempting to stop 'dbfs_mount' on 'dadzab06'
    CRS-2673: Attempting to stop 'dbfs_mount' on 'dadzab05'
    CRS-2677: Stop of 'dbfs_mount' on 'dadzab05' succeeded
    CRS-2677: Stop of 'dbfs_mount' on 'dadzab06' succeeded
  2. Confirm that the resource is stopped and then remove the clusterware resource for dbfs_mount as the oracle (or Grid Infrastructure owner) user.

    (oracle)$ <GI_HOME>/bin/crsctl stat resource dbfs_mount -t
    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS     
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    dbfs_mount
                   OFFLINE OFFLINE      dadzab05                                   
                   OFFLINE OFFLINE      dadzab06                                   

    (oracle)$ <GI_HOME>/bin/crsctl delete resource dbfs_mount
  3. If a wallet was used, remove the /home/oracle/dbfs directory and subdirectories as the oracle user.

    (oracle)$ dcli -g dbs_group -l oracle rm -rf $HOME/dbfs
  4. Remove the custom action script that supported the resource and the /etc/fuse.conf file as the root user.

    (root)# dcli -g dbs_group -l root rm -f /u01/app/11.2.0/grid/crs/script/mount-dbfs.sh /etc/fuse.conf
  5. Remove the mount point directory as the root user.

    (root)# dcli -g dbs_group -l root rmdir /dbfs_direct
  6. On Linux servers only, modify the group memberships for the oracle user account. This assumes that you have not added any group memberships for the oracle account other than the defaults configured during deployment. The following example shows that the remaining group memberships do not include the fuse group; in this case, the oracle user was a member of the oinstall, dba, oper, and asmdba groups in addition to the fuse group. The group memberships for the oracle user may vary, so some modification to this example command may be required.

    (root)# dcli -g dbs_group -l root usermod -G oinstall,dba,oper,asmdba oracle
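
    As an optional check (not part of the original procedure), confirm that the fuse group is no longer listed for the oracle account:

    (root)# dcli -g dbs_group -l root id oracle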
  7. On Linux servers only, if a wallet was used, follow these steps to remove the wallet-specific configuration changes:
    1. dcli -g ~/dbs_group -l root 'sed -i "/^\/sbin\/mount.dbfs/d" /etc/fstab'
    2. dcli -g ~/dbs_group -l root rm -f /sbin/mount.dbfs
    3. dcli -g ~/dbs_group -l root 'cd /usr/local/lib; rm -f libclntsh.so.11.1 libfuse.so.2 libnnz11.so'
    4. dcli -g ~/dbs_group -l root 'sed -i "/^\/usr\/local\/lib$/d" /etc/ld.so.conf.d/usr_local_lib.conf'
    5. dcli -g ~/dbs_group -l root ldconfig
  8. On Solaris servers only, remove the line from /etc/user_attr that was added in this procedure by executing the following command as root:
    (root)# dcli -g ~/dbs_group -l root 'sed "/^oracle::::/d" /etc/user_attr > /tmp/user_attr.new ; cp /tmp/user_attr.new /etc/user_attr ; rm -f /tmp/user_attr.new'
  9. The DBFS repository objects remain. You may either:
    1. Delete the DBFS repository database using DBCA once the steps above are completed.
    2. Remove the DBFS repository by connecting to the database as the repository owner using SQL*Plus and running @?/rdbms/admin/dbfs_drop_filesystem <filesystem-name>. In the example in this note, the filesystem-name is FS1, so the command would be @?/rdbms/admin/dbfs_drop_filesystem FS1
      SQL> connect dbfs_user/dbfs_passwd
      SQL> @?/rdbms/admin/dbfs_drop_filesystem FS1
      SQL> connect / as sysdba
      SQL> drop user dbfs_user cascade;
Creating and Mounting Multiple DBFS Filesystems
There are several ways to create additional DBFS filesystems. Some environments may want more than one DBFS filesystem in order to support non-direct_io access. DBFS filesystems can always hold shell scripts or binary files, but if mounted with the direct_io option, the files on DBFS will not be executable. In such cases, a second DBFS filesystem can be used, since it can be mounted without the direct_io option to support executable files or scripts.

There is nothing "inside" the DBFS filesystem that makes it direct_io or non-direct. Instead, to change from one type of access to the other, the filesystem should be unmounted (using the CRS resource), mount options changed in the mount-dbfs.sh script on all nodes, and then the filesystem mounted again (using the CRS resource). 
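
For illustration, switching an existing filesystem from direct_io to non-direct_io access might look like the following sketch (the exact mount-option setting inside mount-dbfs.sh depends on your script version, so treat that edit step as an assumption):

    (oracle)$ <GI_HOME>/bin/crsctl stop resource dbfs_mount
    ### edit <GI_HOME>/crs/script/mount-dbfs.sh on every node and remove direct_io
    ### from the configured mount options
    (oracle)$ <GI_HOME>/bin/crsctl start resource dbfs_mount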

Let's review some high-level points related to multiple DBFS filesystems.
  1. Create additional filesystems under the same DBFS repository owner (database user).
    • The additional filesystems will show as sub-directories which are the filesystem names given during creation of the filesystem (second argument to the dbfs_create_filesystem_advanced script).
    • There is only one mount point for all filesystems created in this way.
    • Only one mount-dbfs.sh script needs to be configured.
    • All filesystems owned by the same DBFS repository owner will share the same mount options (i.e. direct_io).
  2. Create another DBFS repository owner (database user) and new filesystems under that owner.
    • Can be in the same database with other DBFS repository owners or in a completely separate database.
    • Completely separate: can use different tablespaces (which could be in different diskgroups), separate mount points, possibly different mount options (direct_io versus non-direct_io).
    • One DBFS filesystem has no impact on others in terms of administration or dbfs_client start/stop.
    • Requires a new mount point to be created and used.
    • Requires a second mount-dbfs.sh to be created and configured in Clusterware.
    • Also supports having completely separate ORACLE_HOMEs with possibly different software owner (Linux/Solaris) accounts managing the repository databases.
To configure option #1 above, follow these steps:
  1. It is recommended (but optional) to create a new tablespace for the new DBFS filesystem you are creating.
  2. Connect to the DBFS repository as the current owner (dbfs_user is the example owner used in this note) and then run the dbfs_create_filesystem_advanced script again using a different filesystem name (the 2nd argument), as shown in the sketch after this list.
  3. The filesystem will appear as another subdirectory just below the chosen mount point.
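
As an illustration of option #1, the following sketch creates a second filesystem named FS2 in a hypothetical tablespace dbfs_ts2, connected as the existing repository owner dbfs_user (the tablespace and filesystem names are assumptions; adjust the compression, deduplication, encryption and partitioning arguments to suit your environment):

    SQL> connect dbfs_user/dbfs_passwd
    SQL> @?/rdbms/admin/dbfs_create_filesystem_advanced dbfs_ts2 FS2 nocompress nodeduplicate noencrypt non-partition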
To configure option #2 above, follow these steps:
  1. Optionally create a second DBFS repository database.
  2. Create a new tablespace and a DBFS repository owner account (database user) for the new DBFS filesystem as shown in step 4 above.
  3. Create the new filesystem using the procedure shown in step 5 above, substituting the proper values for the tablespace name and desired filesystem name.
  4. If using a wallet, you must create a separate TNS_ADMIN directory and a separate wallet. Be sure to use the proper ORACLE_HOME, ORACLE_SID, username and password when setting up those components. 
  5. Ensure you use the latest mount-dbfs.sh script attached to this note. Updates were made on 7-Oct-2010 to support multiple filesystems. If you are using a previous version of this script, download the new version, apply the necessary configuration modifications to it, and replace your current version.
  6. To have Clusterware manage a second filesystem mount, use a second copy of the mount-dbfs.sh script. Rename it to a unique file name like mount-dbfs2.sh and place it in the proper directory as shown in step 16 above. Once mount-dbfs2.sh has been properly modified with proper configuration information, a second Clusterware resource (with a unique name) should be created. The procedure for this is outlined in step 17 above.
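
For example, distributing a renamed copy of the script for a second filesystem might look like this sketch (the configuration values to edit inside the copy, such as the database name, DBFS user, and mount point, depend on your script version and are assumptions here):

    (oracle)$ cp <GI_HOME>/crs/script/mount-dbfs.sh /tmp/mount-dbfs2.sh
    ### edit /tmp/mount-dbfs2.sh to reference the second repository database, DBFS user and mount point
    (oracle)$ dcli -g ~/dbs_group -l oracle -f /tmp/mount-dbfs2.sh -d <GI_HOME>/crs/script/
    (oracle)$ dcli -g ~/dbs_group -l oracle chmod 750 <GI_HOME>/crs/script/mount-dbfs2.sh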
The remaining steps in this note (configuration, starting, stopping) relate to a single DBFS filesystem resource. If you create additional DBFS filesystem resources, you will need to start each of them individually at startup time (i.e. when the database is restarted). Starting or stopping one DBFS resource does not have any effect on other DBFS resources you have configured.
Troubleshooting Tips
When configuring DBFS, if the Clusterware resource(s) do not mount the DBFS filesystem successfully, it may be useful to run the mount-dbfs.sh script directly from the command line on one node. Run it as the RDBMS owner user, specifying one of the three arguments shown:

<GI_HOME>/crs/script/mount-dbfs.sh [ start | stop | check ]
Your script may have a slightly different name, especially if you are deploying multiple filesystems. Often, running the script in this way will display errors that may otherwise not be reported by clusterware. 

Also, starting with the 28-Jan-2011 version, you will find messages related to DBFS in the /var/log/messages (on Linux) or /var/adm/messages (on Solaris) file tagged with the string DBFS_<mountpoint> for easy identification. 
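
For example, on Linux you could review the most recent tagged messages like this (assuming the /dbfs_direct mount point used throughout this note):

    (root)# grep 'DBFS_' /var/log/messages | tail -20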

If you encounter issues when mounting the DBFS filesystem, it may be necessary to unmount the filesystem manually. To unmount the filesystem on Linux, run "fusermount -u /dbfs_direct" on the node(s) having problems, and then make sure that the dbfs_client (or mount.dbfs, if using the second mounting option) process is not running; if it is still running, kill it. On Solaris, use "umount /dbfs_direct" instead of "fusermount -u /dbfs_direct".
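
A sketch of the manual cleanup sequence on Linux (kill the client process only if one is still running; <PID> is a placeholder):

    (oracle)$ fusermount -u /dbfs_direct
    (oracle)$ ps -ef | egrep 'dbfs_client|mount.dbfs' | grep -v grep
    (oracle)$ kill <PID>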

Normally, if no mounts are present, there should be an empty directory at /dbfs_direct (ls -l /dbfs_direct). When running "ls -l /dbfs_direct", if an error message like "Transport Endpoint is not connected" is observed, this may indicate that the DBFS client (dbfs_client) is no longer running, but fuse still has record of the mount. Often, this will be resolved by using "fusermount -u /dbfs_direct" and ensuring that the client process is no longer running before re-attempting the mount operation using one of the methods outlined in this note. 
Other items that can lead to the "Transport Endpoint is not connected" error:
  • The sqlnet.ora file is missing.
  • The dbfs_user password contains a dollar sign; in that case, the password supplied to the mkstore wallet configuration command must be enclosed in single quotes.
  • TNS_ADMIN was not defined prior to running the mkstore command; it must be set because the connection to the database is made via the SQL*Net connect string fsdb.local.
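
A quick sanity check for the first and third items (a sketch, assuming the TNS_ADMIN directory and fsdb.local alias configured earlier in this note) might be:

    (oracle)$ echo $TNS_ADMIN
    (oracle)$ ls -l $TNS_ADMIN/sqlnet.ora $TNS_ADMIN/tnsnames.ora
    (oracle)$ tnsping fsdb.local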

If attempting to unmount the filesystem results in "device busy" errors, you may try "fusermount -u -z /dbfs_direct" (on Linux), then identify any dbfs_client and/or mount.dbfs processes that are still running and kill them. After that, you should be able to mount the filesystem again.
On Solaris hosts, if you want to inspect the arguments dbfs_client was invoked with, identify the process ID using "ps -ef | grep dbfs_client" and then use the "pargs <PID>" command to see the complete options. The Solaris ps output truncates the command line at 80 characters, which is typically not enough to display all options.
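
For example (<PID> is a placeholder for the process ID returned by ps):

    (oracle)$ ps -ef | grep dbfs_client | grep -v grep
    (oracle)$ pargs <PID>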
If you receive an error saying "File system already present at specified mount point <mountpoint>", ensure that the mount point directory is empty. Any contents in the mount point directory will prevent the filesystem mount from succeeding. Seasoned system administrators will note that this behavior differs from typical filesystem mounts, where a mount point directory can have contents and those contents are simply hidden while the filesystem remains mounted; in other words, the new mount "overlays" them. With fuse-mounted filesystems, the mount point directory must be empty prior to mounting the fuse (in this case, DBFS) filesystem.
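
A quick pre-mount check (no output means the mount point directory is empty):

    (oracle)$ ls -A /dbfs_direct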