Benchmarking filesystem performance for Oracle using SLOB (Part 1.5): Improved test scripts

UPDATE 20140227: I am leaving this post here for historical reference, but the version of SLOB I used while writing it is now fully deprecated. Please go directly to the distribution page for SLOB2 and use the information there to retrieve the latest version of SLOB and learn how to use it, as the usage and functionality have changed. For additional tips, consider Yury’s “SLOB2 Kick start”. Please do not attempt to use the scripts, instructions or techniques I have described here unless you are still using the old version of SLOB. If you are still using the old version, you should not be.


Minor update 20130207: Adjusted read-write testing script for three redo logs, not two.

In Part 1 I described a starter methodology for using SLOB to benchmark filesystem performance for Oracle. After many SLOB runs and much valuable advice from Kevin Closson, I now have an improved method. My next post will contain the results of these tests, with all numbers scaled against RAW performance for the sake of comparison. Since my hardware and network don’t match yours, my specific numbers aren’t very relevant; but for a DBA or SA considering changing the filesystem they use for Oracle without a priori knowledge of how to best tune it, this will hopefully present a useful baseline.

Previously I bounced the database and dropped and recreated the SLOB users before every test run. Now I initialize the test batch for each filesystem/volume by bouncing the database, creating the users, and executing a throwaway SLOB run to prime the database.

Test Scripts

Below are the scripts I am using to test read-only, redo generation, and read-write performance. If you use any of them, edit the scripts to reflect your base directory (the oracle user’s home), SLOB install directory, and desired location for SLOB AWR output. Before running any of the scripts, prepare your environment by creating a tablespace on each filesystem/device type that you wish to test. Name the tablespace something that will make it clear to you which filesystem you are testing. I used a single 16GB datafile for each tablespace. Each script runs SLOB setup.sh to create the users and then SLOB runit.sh to execute the tests. After an initial throwaway run to warm things up, they run SLOB with 8, 16, and 32 sessions, repeating each session count RUNCOUNT times. You can adjust the session counts by changing the for loops. At the end of each run, the awr.txt file generated by SLOB is renamed and moved to the SLOBAWR directory, using the filename convention expected by the awr_info.sh script (found in the latest SLOB release). All tests are performed in NOARCHIVELOG mode; do not do that on a database you need. The scripts assume that you are executing them from within the SLOB directory.
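As a sketch of the setup step, a tablespace for one filesystem under test might be created like this. The mount point, datafile name, and tablespace name below are placeholders; only the single-16GB-datafile layout comes from my tests:

```sql
-- Hypothetical example: one tablespace per filesystem under test,
-- named after the filesystem so the AWR reports are easy to identify.
-- /mnt/ext4test and EXT4TEST are placeholder names.
CREATE TABLESPACE EXT4TEST
  DATAFILE '/mnt/ext4test/ext4test_01.dbf' SIZE 16G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
```

You would then pass that tablespace name as the FS argument to the scripts, e.g. `./testread.sh EXT4TEST 3`.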

Read-Only Performance

Before running the read-only test, I created two 32G redo logs on raw devices. The read-only test should not touch redo at all, so where you place them does not really matter. For read-only testing you should use a very small db_cache_size; I used 64M. The small cache ensures that reads are fulfilled through physical I/O rather than from the buffer cache.
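For reference, a minimal way to apply the small cache setting, assuming an spfile-managed instance where db_cache_size is not overridden by automatic memory management:

```sql
-- Force a tiny buffer cache so SLOB reads become physical I/O.
ALTER SYSTEM SET db_cache_size = 64M SCOPE = SPFILE;
-- Bounce the instance for the new cache size to take effect.
SHUTDOWN IMMEDIATE;
STARTUP;
```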

#!/bin/sh
#
# Usage: ./testread.sh FS RUNCOUNT
#
# Script assumes a tiny SGA to force physical reads
# Script will create SLOB schema users in tablespace named $FS, and run each test $RUNCOUNT times

TYPE=$1
NUMRUNS=$2
BASEDIR=/oracle/SLOB
SLOBDIR=$BASEDIR/SLOB
DROPUSERS=$SLOBDIR/drop_users.sql
AWRDIR=$SLOBDIR/SLOBAWR
COUNTER=1

if [ -z "$TYPE" -o -z "$NUMRUNS" ]; then
    echo "Usage: $0 FS RUNCOUNT"
    exit 1
fi

mkdir -p "$AWRDIR" > /dev/null 2>&1

echo "Starting SLOB read-only performance testing for $TYPE ($NUMRUNS runs)"

echo "Dropping existing users and bouncing database"

sqlplus -s / as sysdba<<EOF
@$DROPUSERS;
shutdown immediate;
startup;
exit;
EOF

echo "Setting up SLOB user schemata"
$SLOBDIR/setup.sh $TYPE 32

echo "Performing N+1 logswitch"
sqlplus -s / as sysdba<<EOF2
alter system switch logfile;
alter system switch logfile;
alter system switch logfile;
exit;
EOF2

echo "Throwaway SLOB run to prime database"
$SLOBDIR/runit.sh 0 8

for i in 8 16 32; do
    COUNTER=1
    while [ $COUNTER -le $NUMRUNS ]; do
        echo "Running SLOB for $i readers (run #$COUNTER)"

        $SLOBDIR/runit.sh 0 $i
        echo "Renaming AWR report"
        mv awr.txt $AWRDIR/SLOB-AWR-$TYPE-$COUNTER.0.$i
        COUNTER=$((COUNTER+1))
    done
done

Redo Performance

Before running the redo generation test, I created two 32G redo logs on the filesystem whose performance I wish to test. For redo testing you should use a large db_cache_size; I used 32G. I also increased log_buffer to reduce log buffer waits, though I have not been able to eliminate them entirely. SAP requires specific log_buffer settings, so I do not want to deviate from them too much; I want this test to have at least some similarity to our production performance. Prior to each test run I performed N+1 log switches (where N is the number of configured redo logs) to clear out any pending redo.
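The corresponding parameter changes for the redo test might look like this. The log_buffer value shown is purely illustrative, not the SAP-mandated value I actually used:

```sql
-- Large cache: reads are satisfied from memory, so the test stresses LGWR.
ALTER SYSTEM SET db_cache_size = 32G SCOPE = SPFILE;
-- Larger log buffer to reduce 'log buffer space' waits; 128M is an
-- illustrative value, not a recommendation.
ALTER SYSTEM SET log_buffer = 134217728 SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
```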

One possible improvement others may wish to consider when testing redo generation performance would be to configure your UNDO tablespace to use the same filesystem type as your redo logs. I did not do so, so my max performance is somewhat constrained by writes to UNDO. Each test should be similarly constrained so I am not concerned about that at the moment.

#!/bin/sh
#
# Usage: ./testredo.sh FS RUNCOUNT
#
# Script assumes you have pre-created exactly two redo logs on the filesystem/device to be tested
# Script will create SLOB schema users in tablespace named $FS, and run each test $RUNCOUNT times

TYPE=$1
NUMRUNS=$2
BASEDIR=/oracle/SLOB
SLOBDIR=$BASEDIR/SLOB
DROPUSERS=$SLOBDIR/drop_users.sql
AWRDIR=$SLOBDIR/SLOBAWR
COUNTER=1

if [ -z "$TYPE" -o -z "$NUMRUNS" ]; then
    echo "Usage: $0 FS RUNCOUNT"
    exit 1
fi

mkdir -p "$AWRDIR" > /dev/null 2>&1

echo "Starting SLOB redo generation performance testing for $TYPE ($NUMRUNS runs)"
echo "Assuming two redo logs exist"

echo "Dropping existing users and bouncing database"

sqlplus -s / as sysdba<<EOF
@$DROPUSERS;
shutdown immediate;
startup;
exit;
EOF

echo "Setting up SLOB user schemata"
$SLOBDIR/setup.sh $TYPE 32

echo "Throwaway SLOB run to prime database"
$SLOBDIR/runit.sh 8 0

for i in 8 16 32; do
    COUNTER=1
    while [ $COUNTER -le $NUMRUNS ]; do
        echo "Running SLOB for $i writers (run #$COUNTER)"
        echo "Performing N+1 logswitch"
        sqlplus -s / as sysdba<<EOF2
        alter system switch logfile;
        alter system switch logfile;
        alter system switch logfile;
        exit;
EOF2

        $SLOBDIR/runit.sh $i 0
        echo "Renaming AWR report"
        mv awr.txt $AWRDIR/SLOB-AWR-$TYPE-$COUNTER.$i.0
        COUNTER=$((COUNTER+1))
    done
done

Read-Write Performance

Before running the read-write performance test, I replaced my redo logs with three 64M files on raw devices. I also decreased db_cache_size to 128M to help make sure reads are fulfilled from physical disk instead of cache. I left log_buffer as-is from the redo testing.
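Swapping in the small redo logs can be done along these lines. The group numbers and raw device paths below are placeholders, and each old group must be INACTIVE before it can be dropped:

```sql
-- Add three small 64M redo logs on raw devices (placeholder paths).
ALTER DATABASE ADD LOGFILE GROUP 4 '/dev/raw/raw1' SIZE 64M;
ALTER DATABASE ADD LOGFILE GROUP 5 '/dev/raw/raw2' SIZE 64M;
ALTER DATABASE ADD LOGFILE GROUP 6 '/dev/raw/raw3' SIZE 64M;
-- Switch away from each old group, checkpoint so it goes INACTIVE,
-- then drop it (repeat for each original group).
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 1;
```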

#!/bin/sh
#
# Usage: ./testrw.sh FS RUNCOUNT
#
# Script assumes you have pre-created exactly three small (64M) redo logs 
# Script will create SLOB schema users in tablespace named $FS, and run each test $RUNCOUNT times

TYPE=$1
NUMRUNS=$2
BASEDIR=/oracle/SLOB
SLOBDIR=$BASEDIR/SLOB
DROPUSERS=$SLOBDIR/drop_users.sql
AWRDIR=$SLOBDIR/SLOBAWR
COUNTER=1

if [ -z "$TYPE" -o -z "$NUMRUNS" ]; then
    echo "Usage: $0 FS RUNCOUNT"
    exit 1
fi

mkdir -p "$AWRDIR" > /dev/null 2>&1

echo "Starting SLOB read/write performance testing for $TYPE ($NUMRUNS runs)"
echo "Assuming three redo logs exist"

echo "Dropping existing users and bouncing database"

sqlplus -s / as sysdba<<EOF
@$DROPUSERS;
shutdown immediate;
startup;
exit;
EOF

echo "Setting up SLOB user schemata"
$SLOBDIR/setup.sh $TYPE 32

echo "Throwaway SLOB run to prime database"
$SLOBDIR/runit.sh 4 4

for i in 8 16 32; do
    COUNTER=1
    while [ $COUNTER -le $NUMRUNS ]; do
        echo "Running SLOB for $i read/write sessions (run #$COUNTER)"
        echo "Performing N+1 logswitch"
        sqlplus -s / as sysdba<<EOF2
        alter system switch logfile;
        alter system switch logfile;
        alter system switch logfile;
        alter system switch logfile;
        exit;
EOF2

        READERS=$((i/2))
        WRITERS=$READERS
        $SLOBDIR/runit.sh $WRITERS $READERS
        echo "Renaming AWR report"
        mv awr.txt $AWRDIR/SLOB-AWR-$TYPE-$COUNTER.$WRITERS.$READERS
        COUNTER=$((COUNTER+1))
    done
done