
Installing Greenplum Community Edition Database 4.2 on Debian Wheezy

Today I wanted to have a go with Greenplum Database, a massively parallel DBMS based on PostgreSQL aimed at data warehouse type workloads, with features such as columnar storage.

Other than its PostgreSQL roots I know very little about Greenplum, and like many of the other commercial database products it too is only supported on the "Enterprise" Linuxes.

The intention of this little exercise is to get Greenplum running on Debian, and learn a little more about it in the process. As I mentioned Greenplum is a massively parallel DBMS, so it can spread one query over multiple cores in multiple servers – I’m only going to attempt to set it up on a single machine, but installing over a whole bunch of servers should be possible with the same technique simply by adding more hosts to the various “hostfiles” passed into the Greenplum tools.

I'm following the Installation Guide from the Greenplum website and start off by installing a few prerequisites. These are: ed and unzip, which are used by gpinitsystem (the former to modify parameters in postgresql.conf files), and ntp, which Greenplum requires to keep segment hosts in sync. The others are not required, but I like to do my work in a screen session, and I installed the python modules whilst fiddling with gpssh-exkeys, though I'm pretty certain they're not needed.

# apt-get install ntp ed unzip python-crypto python-paramiko screen

Next we grab the installer from Greenplum and extract it:

$ mkdir /usr/gp_install
$ cd /usr/gp_install
$ wget http://bitcast-a.v1.o1.sjc1.bitgravity.com/greenplum/Greenplum_CE_Database/database_server/4.2.2.4/greenplum-db-4.2.2.4-build-1-CE-RHEL5-x86_64.zip
$ unzip greenplum-db-4.2.2.4-build-1-CE-RHEL5-x86_64.zip

Create an installation directory:

$ mkdir /usr/local/greenplum-db-4.2.2.4

Add the following kernel parameters to /etc/sysctl.conf. These are as recommended by the installation document and I assume that the values for shared memory are based on their default postgresql.conf settings, but they’ve obviously made a bit of a guess here, and depending on our hardware / setup we’d want to tune these further:

kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
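
To load the new values without a reboot we can reload the file with sysctl; the same applies after any further tuning:

# sysctl -p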

Increase the limits for open files and running processes in /etc/security/limits.conf and uncomment the line with pam_limits.so in /etc/pam.d/su so that su uses these limits.

* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
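
On Debian the pam_limits line in /etc/pam.d/su is usually already present but commented out; once uncommented it should look like this:

session    required   pam_limits.so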

The installation guide also recommends setting the IO scheduler to deadline and a readahead size of 16384 sectors (8MB). Again this is very dependent on hardware (for example I'd be wanting to set the scheduler to noop on a good raid controller) so I find it odd they suggest such blanket values; best to do your own testing here. Anyway, they suggest the following in /etc/rc.local:

 
$ sudo blockdev --setra 16384 /dev/sda
$ echo deadline | sudo tee /sys/block/sda/queue/scheduler
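
Whichever values you settle on, it's easy to check they took effect (again assuming /dev/sda):

$ sudo blockdev --getra /dev/sda
$ cat /sys/block/sda/queue/scheduler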

In order to keep track of hosts Greenplum either needs static IPs or DNS records. My VM uses DHCP so I’ve quickly added a secondary IP in /etc/network/interfaces with a static address, but I could just as easily have changed the main IP to static:

auto eth0:1
iface eth0:1 inet static
address 192.168.0.201
netmask 255.255.255.0
gateway 192.168.0.1
Then restart networking to bring the new address up:

$ sudo invoke-rc.d networking restart

Now make sure the machine's hostname is resolvable to that IP; if it's not in DNS, add an entry to your /etc/hosts file (one for each server, on every server, if you're doing a multiple server setup):

192.168.0.201   greenplum_test.8kb.co.uk  greenplum_test
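
A quick way to confirm resolution is working before carrying on:

$ getent hosts greenplum_test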

So now we're almost ready to install Greenplum; however, if we try to run the installer at this point we'll get an error message stating "Installer will only install on RedHat/CentOS x86_64". It's easy enough to trick it though, by creating a redhat-release file (you can delete it when you're done with the installer):

$ echo 'Not really Red Hat Enterprise Linux but Debian GNU/Linux 7.0' | sudo tee /etc/redhat-release

Now we can run the installer; the default install directory is /usr/local/greenplum-db-4.2.2.4, and the installer creates a symbolic link /usr/local/greenplum-db pointing to it.

$ sudo /bin/bash greenplum-db-4.2.2.4-build-1-CE-RHEL5-x86_64.bin

Once the installer has run we proceed by setting up the segments that will make up our Greenplum cluster. To install a segment the gpseginstall command is run, and the hostfile_exkeys file needs to contain the hostnames of all segment hosts in the Greenplum cluster. For a single host we just need our local hostname in the hostfile_exkeys file:

# cd /usr/local/greenplum-db
# echo `hostname` > hostfile_exkeys

The gpseginstall command copies the relevant files over to each segment host and sets up users and directory ownership. For a standalone server this doesn't actually have to be run and could be done manually, but for simplicity and completeness this is how it goes:

# . /usr/local/greenplum-db/greenplum_path.sh
# gpseginstall -f hostfile_exkeys -u gpadmin -p mypassword

When I ran this I got one error when it tried to run gpssh-exkeys as the gpadmin user; essentially the gpadmin user didn't have the $GPHOME environment variable set, so I ran it again myself with an absolute path:

# su - gpadmin
$ chsh         # changed my shell to /bin/bash
$ . /usr/local/greenplum-db/greenplum_path.sh
$ /usr/local/greenplum-db-4.2.2.4/bin/gpssh-exkeys -f hostfile_exkeys

Now make sure the environment variables are set up for your gpadmin user by adding “. /usr/local/greenplum-db/greenplum_path.sh” (the dot is important – it runs the script in the current environment) in /home/gpadmin/.profile.

We can test whether the ssh keys have been exchanged properly at this point too; if this doesn't authenticate us then we need to repeat the above to find where we went wrong:

# su - gpadmin
$ gpssh -f hostfile_exkeys -e ls -l $GPHOME

Next we need to create our data storage area; I'm setting up my data to be mirrored on the same disk. However, if we were doing this for real we'd put the mirrors on separate arrays, or ideally, with multiple hosts, pass the -S option to gpinitsystem so it spreads the mirrors across alternate hosts.

# mkdir /greenplum
# mkdir /greenplum/master
# mkdir /greenplum/data1
# mkdir /greenplum/data2
# mkdir /greenplum/data1/primary
# mkdir /greenplum/data1/mirror
# mkdir /greenplum/data2/primary
# mkdir /greenplum/data2/mirror
# chown -R gpadmin:gpadmin /greenplum

Before we run gpinitsystem we can check that everything is set up correctly by running the gpcheck utility; again, some of the warnings (like it whinging that you're not using the deadline IO scheduler) can be ignored:

# cp /usr/local/greenplum-db/hostfile_exkeys /usr/local/greenplum-db/hostfile_gpcheck
# su - gpadmin
$ gpcheck -f /usr/local/greenplum-db/hostfile_gpcheck -m greenplum_test

Now we are ready to run gpinitsystem, so we copy over a hostfile and the sample gpinitsystem_config:

# cp /usr/local/greenplum-db/hostfile_exkeys /greenplum/hostfile_gpinitsystem
# cp /usr/local/greenplum-db/docs/cli_help/gpconfigs/gpinitsystem_config /greenplum/gpinitsystem_config

For our setup with 2 segments and 2 mirror segments we edit the /greenplum/gpinitsystem_config file as follows:

ARRAY_NAME="EMC Greenplum DW"
SEG_PREFIX=gpseg
PORT_BASE=40000
MIRROR_PORT_BASE=50000
declare -a DATA_DIRECTORY=(/greenplum/data1/primary /greenplum/data2/primary)
declare -a MIRROR_DATA_DIRECTORY=(/greenplum/data1/mirror /greenplum/data2/mirror)
MASTER_HOSTNAME=greenplum_test
MASTER_DIRECTORY=/greenplum/master
MASTER_PORT=5432
REPLICATION_PORT_BASE=41000
MIRROR_REPLICATION_PORT_BASE=51000
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE

Now for the exciting part, we run gpinitsystem to setup the cluster (set the locale to what suits you best):

# su - gpadmin
$ gpinitsystem -c /greenplum/gpinitsystem_config -h /greenplum/hostfile_gpinitsystem --locale=en_GB.utf8

Once this has run, we just need to set up a few bits and pieces before we can start the server. We need to make sure the MASTER_DATA_DIRECTORY environment variable is set for our gpadmin user by adding the following in our /home/gpadmin/.profile (or if you prefer /home/gpadmin/.bashrc):

export MASTER_DATA_DIRECTORY=/greenplum/master/gpseg-1

If we wanted we could now start the server using gpstart and stop it with gpstop. However, I want to create an init script; if you want one too, put the following in /etc/init.d/greenplum:

#!/bin/sh

### BEGIN INIT INFO
# Provides:          greenplum
# Required-Start:    $network $local_fs $remote_fs $syslog
# Required-Stop:     $network $local_fs $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start greenplum daemons at boot time
# Description:       Enable greenplum service.
### END INIT INFO

# Source the greenplum path script
. /usr/local/greenplum-db/greenplum_path.sh

# Who to run the Greenplum utilities as, usually "gpadmin" (NOT "root")
GPUSER=gpadmin
GPLOG="$GPHOME/serverlog.log"
export MASTER_DATA_DIRECTORY=/greenplum/master/gpseg-1

set -e


# Parse command line parameters.
case $1 in
  start)
	echo -n "Starting Greenplum RDBMS: "
	su -l gpadmin -c "gpstart -a" >>$GPLOG 2>&1
	echo "ok"
	;;
  stop)
	echo -n "Stopping Greenplum RDBMS: "
	su -l gpadmin -c "gpstop -a -M fast" >>$GPLOG 2>&1
	echo "ok"
	;;
  restart)
	echo -n "Restarting Greenplum RDBMS: "
	su -l gpadmin -c "gpstop -a -M fast -r" >>$GPLOG 2>&1
	echo "ok"
	;;
  reload)
	echo -n "Reloading Greenplum RDBMS: "
	su -l gpadmin -c "gpstop -u" >>$GPLOG 2>&1
	echo "ok"
	;;
  status)
	su -l gpadmin -c "gpstate"
	;;
  *)
	# Print help
	echo "Usage: $0 {start|stop|restart|reload|status}" 1>&2
	exit 1
	;;
esac

exit 0

And set it to run at boot:

# chmod 755 /etc/init.d/greenplum
# update-rc.d greenplum defaults

We can now finally start up the server and, as the gpadmin user, execute some SQL:

# invoke-rc.d greenplum start
# su - gpadmin
$ psql -U gpadmin -d postgres

Let's create a bog-standard table and see what we get:

psql (8.2.15)
Type "help" for help.

postgres=# CREATE DATABASE test;
postgres=# \c test
test=# CREATE TABLE test_table (id bigserial PRIMARY KEY NOT NULL, string text);
test=# INSERT INTO test_table (string) SELECT generate_series(1,10000,2) || '_test_data' AS string;
test=# EXPLAIN ANALYZE SELECT * FROM test_table;
                                   QUERY PLAN
------------------------------------------------------------------------------------
 Gather Motion 2:1  (slice1; segments: 2)  (cost=0.00..58.00 rows=2500 width=22)
   Rows out:  5000 rows at destination with 24 ms to first row, 109 ms to end, start offset by 66 ms.
   ->  Seq Scan on test_table  (cost=0.00..58.00 rows=2500 width=22)
       Rows out:  Avg 2500.0 rows x 2 workers.  Max 2500 rows (seg0) with 3.572 ms to first row, 29 ms to end, start offset by 80 ms.
 Slice statistics:
   (slice0)    Executor memory: 247K bytes.
   (slice1)    Executor memory: 181K bytes avg x 2 workers, 181K bytes max (seg0).
 Statement statistics:
   Memory used: 128000K bytes
 Total runtime: 202.259 ms
(10 rows)

Looks a little different to the usual PostgreSQL Seq Scan; we've got two workers scanning the two underlying segments in parallel. Let's try with some more Greenplum-specific options:

test=# CREATE TABLE test_table2 (id bigserial NOT NULL, string text) WITH (appendonly=true, orientation=column) DISTRIBUTED BY (id);
test=# CREATE INDEX test_table2_id ON test_table2 USING btree (id);
test=# INSERT INTO test_table2 (id, string) SELECT generate_series, generate_series || '_test_data' AS string FROM generate_series(1,10000,2);
test=# EXPLAIN ANALYZE SELECT * FROM test_table2;
                                   QUERY PLAN
------------------------------------------------------------------------------------
 Gather Motion 2:1  (slice1; segments: 2)  (cost=0.00..54.00 rows=2500 width=22)
   Rows out:  5000 rows at destination with 28 ms to first row, 114 ms to end, start offset by 66 ms.
   ->  Append-only Columnar Scan on test_table2  (cost=0.00..54.00 rows=2500 width=22)
       Rows out:  Avg 2500.0 rows x 2 workers.  Max 2560 rows (seg0) with 9.701 ms to first row, 28 ms to end, start offset by 81 ms.
 Slice statistics:
   (slice0)    Executor memory: 183K bytes.
   (slice1)    Executor memory: 363K bytes avg x 2 workers, 363K bytes max (seg0).
 Statement statistics:
   Memory used: 128000K bytes
 Total runtime: 207.515 ms

This time we created the table with column-based storage as opposed to the more conventional row-based storage we're used to, and distributed it over the segments by "id". Also note that we state the table is append-only; AFAIK this is a requirement when using columnar storage with this version of Greenplum.
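
Out of curiosity we can also check how the rows ended up spread over the segments; Greenplum exposes a gp_segment_id system column on each table, so something like this should show the row count per segment:

test=# SELECT gp_segment_id, count(*) FROM test_table2 GROUP BY 1;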

We may want to change some GUC parameters in postgresql.conf or, more likely, add some hosts to pg_hba.conf. To do this we edit the settings for the master host; in this setup those files are in my /greenplum/master/gpseg-1/ directory.
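
For example, to let my test subnet connect as gpadmin I could add a line like this (example values) to /greenplum/master/gpseg-1/pg_hba.conf:

host    all    gpadmin    192.168.0.0/24    md5

and then, as the gpadmin user, reload the configuration with gpstop -u (which is what the reload target of the init script above calls):

$ gpstop -u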

That's it for now; hopefully I'll get time to revisit and have a more in-depth play with Greenplum's features.

Installing Ingres 10 Community Edition on Debian Squeeze (and Wheezy)

Further to my little exercise yesterday installing Oracle 11g on Debian, I thought I'd have a go at installing Ingres. Led by Michael Stonebraker at Berkeley, Ingres was the predecessor to the post-Ingres research project that later evolved into PostgreSQL.

Just as per my previous post we start out with a vanilla install of Debian Squeeze 6.0 from the netinst CD image. Without thinking I went ahead and installed a few of the packages I normally like to have on my servers; most of them are probably not required:

# apt-get install module-assistant build-essential zip unzip bzip2 apt-show-versions samba rsync ntp ntpdate screen sysstat dstat sudo iotop linux-headers-$(uname -r) cvs git-core

Next we fetch the installer:

# mkdir /usr/ingres_installer
# cd /usr/ingres_installer
# wget http://esd.actian.com/product/Community_Projects/Ingres_Database/Linux_X86_64-bit/Ingres_10.1_Build_125/ingres-10.1.0-125-gpl-linux-ingbuild-x86_64.tgz
# tar xzf ingres-10.1.0-125-gpl-linux-ingbuild-x86_64.tgz 
# rm ingres-10.1.0-125-gpl-linux-ingbuild-x86_64.tgz

And create a user for Ingres to run as:

# adduser --uid 1001 --ingroup daemon --disabled-login ingres    # no password, full name "Ingres Daemon"
# groupadd ingres --gid 1001
# usermod -a -G ingres ingres

Then run the installer; the default install directory is /opt/Ingres/:

# cd ingres-10.1.0-125-gpl-linux-ingbuild-x86_64
# ./ingres_express_install.sh

The installer sets everything up; all that's left to do is create any init scripts we want and set up our environment.

Next we add the following to the .bashrc of our ingres user, or to the global profile in /etc/profile, to set up the environment:

export II_SYSTEM=/opt/Ingres/IngresII
export PATH=$PATH:$II_SYSTEM/ingres/bin:$II_SYSTEM/ingres/utility
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$II_SYSTEM/ingres/lib
export TERM_INGRES=konsolel

To start and stop Ingres we can use the "ingstart" and "ingstop" commands respectively; I created the following init script in /etc/init.d/ingres:

#! /bin/sh

### BEGIN INIT INFO
# Provides:          ingres
# Required-Start:    $network $local_fs $remote_fs $syslog
# Required-Stop:     $network $local_fs $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start ingres daemon at boot time
# Description:       Enable ingres service.
### END INIT INFO

# chkconfig: 2345 98 02
# description: Ingres RDBMS

# Place this file at /etc/init.d/ingres (or
# /etc/rc.d/init.d/ingres) and make symlinks to
#   /etc/rc.d/rc0.d/K02ingres
#   /etc/rc.d/rc1.d/K02ingres
#   /etc/rc.d/rc2.d/K02ingres
#   /etc/rc.d/rc3.d/S98ingres
#   /etc/rc.d/rc4.d/S98ingres
#   /etc/rc.d/rc5.d/S98ingres
# Or, if you have chkconfig, simply:
# chkconfig --add ingres
#

# Who to run ingstart as, usually "ingres".  (NOT "root")
IGUSER=ingres

# The prefix of the ingres install
II_SYSTEM="/opt/Ingres/IngresII"

# Start script logfile
IGLOG="$II_SYSTEM/ingres/serverlog"

set -e

# Parse command line parameters.
case $1 in
  start)
	echo -n "Starting Ingres: "
	su - $IGUSER -c "date" >>$IGLOG 2>&1
	su - $IGUSER -c "ingstart" >>$IGLOG 2>&1
	echo "ok"
	;;
  stop)
	echo -n "Stopping Ingres: "
	su - $IGUSER -c "date" >>$IGLOG 2>&1
	su - $IGUSER -c "ingstop" >>$IGLOG 2>&1
	echo "ok"
	;;
  restart)
	echo -n "Restarting Ingres: "
	su - $IGUSER -c "date" >>$IGLOG 2>&1		
	su - $IGUSER -c "ingstop" >>$IGLOG 2>&1
	su - $IGUSER -c "ingstart" >>$IGLOG 2>&1
	echo "ok"
	;;
  status)
	su - $IGUSER -c "ingstatus"
	;;
  *)
	# Print help
	echo "Usage: $0 {start|stop|restart|status}" 1>&2
	exit 1
	;;
esac

exit 0

And set it to run at startup:

# chmod 755 /etc/init.d/ingres
# chown root:root /etc/init.d/ingres
# update-rc.d ingres start 24 2 3 4 5 . stop 25 0 1 6

And that's it: "invoke-rc.d ingres start" and we're up and running.
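
We can confirm everything came up via the status target of the init script above, which just calls ingstatus:

# invoke-rc.d ingres status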

Create a test database:

# createdb test

This creates a database named test in $II_SYSTEM/ingres/data/default/ by default.

Connect to the database and give it a whirl:

# sql test
* CREATE TABLE test (id integer, desc varchar(50))\g
* INSERT INTO test VALUES (1, 'test record 1'),(2, 'test record 2')\g
* SELECT * FROM test\g

Update 16/03/2013 – I've upgraded this machine from Squeeze to Wheezy without any issues; I assume the Ingres setup is pretty self-contained, so the install process should be the same.

Installing Oracle 11g on Debian Squeeze (and Wheezy)

Over the years I've had exposure to various popular DBMSes, mainly PostgreSQL, MySQL, Microsoft SQL Server and a little bit of PervasiveSQL/Btrieve. One DBMS that I've never had to touch is Oracle, and perhaps that's a good thing, but it can't harm to get a little bit of insight into a technology that is used so widely.

Being well versed with PostgreSQL should be a good start. Despite the various grammatical, functional and architectural differences there are a lot of similarities, and the PostgreSQL wiki has an excellent guide intended for Oracle DBAs which works just as well in reverse for PostgreSQL DBAs.

One thing Oracle doesn't have in common, however, is the wide choice of "supported" host operating systems that PostgreSQL does (check out the PostgreSQL build farm); and my personal preference of Debian GNU/Linux isn't one of them! (At least according to the Oracle FAQs.)

Despite Debian not being supported, it shouldn't be too difficult to get it to work if it works on other Linux flavours, and this post summarises my attempt. A lot of the process comes from a great guide I found on Debian Administration, so to a great degree I'm regurgitating that, adding a little along the way.

To begin with we start out with a vanilla install of Debian Squeeze 6.0 from the netinst CD image; on top of that there are a few libraries Oracle depends on that we need to install:

# apt-get install gcc make binutils libmotif4 lesstif2 rpm libaio1 libdb4.6 unzip libstdc++5 libstdc++6 g++

And we want /bin/sh to point to bash rather than dash (answer "No" when asked whether dash should be the default system shell):

# dpkg-reconfigure dash
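
A quick sanity check that /bin/sh now points at bash:

# ls -l /bin/sh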

Next we need to make some parts of the directory structure look more like Red Hat, we can do that with a few soft links as follows:

# ln -s /usr/bin/awk /bin/awk
# ln -s /usr/bin/rpm /bin/rpm
# ln -s /usr/bin/basename /bin/basename
# ln -s /etc /etc/rc.d

Next we set up a group and user for the Oracle install:

# groupadd oinstall
# groupadd dba
# useradd -m -g oinstall -G dba -p passwd -s /bin/bash -d /home/oracle oracle
# passwd oracle

I’ve read that it’s typical to use a prefix of /u01 for the Oracle binaries, and optionally put the data directories on their own mount points following the same pattern /u0n etc. If this was a production system those data directory mount points would be on well tested and reliable raid arrays, but for this test I’m going to keep everything in /u01 which just symlinks into the home directory:

# mkdir /home/u01
# ln -s /home/u01 /
# chown -R oracle.oinstall /home/u01 /u01
# chmod -R 775 /home/u01 /u01

Create a directory for the installer, place the files in there, and make sure the oracle user has privileges on it:

# mkdir /usr/oracle_installer
# chown -R oracle.oinstall /usr/oracle_installer

Next we set up some of Oracle's preferred kernel parameters; if you're used to other Linux databases this will be familiar: shared memory, semaphores, max open files, huge pages etc. My test VM has 4GB of memory, so I've set the maximum shared memory segment size to a conservative 248MB, and put in the group id of the Oracle dba group to use hugepages. Place these in /etc/sysctl.conf and reload with sysctl -p:

kernel.sem = 250 32000 100 128
kernel.shmmax = 260513792
net.ipv4.ip_local_port_range = 9000 65000
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
fs.file-max = 6815744
vm/hugetlb_shm_group = 1002	

Increase the system-wide user limits for open files and running processes in /etc/security/limits.conf as follows:

*               soft    nproc   2047
*               hard    nproc   16384
*               soft    nofile  1024
*               hard    nofile  65536

You can check these by running ulimit with any of the -Hu/-Su/-Hn/-Sn flags as the oracle user.
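
For example, to check the hard open-file and process limits we set above:

# su - oracle -c 'ulimit -Hn'
# su - oracle -c 'ulimit -Hu'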

We also need to make sure the above limits are used, so make sure pam_limits.so is not commented out in the following files:

/etc/pam.d/su
/etc/pam.d/login
/etc/pam.d/sshd

Now edit /etc/profile to set the process and file limits for the oracle user; I'm sure this could also be done in the "oracle" user's .profile.

umask 022
if [ $USER = "oracle" ]; then
      if [ $SHELL = "/bin/ksh" ]; then
            ulimit -p 16384
            ulimit -n 65536
      else
            ulimit -u 16384 -n 65536
      fi
fi

Edit /home/oracle/.profile and set the environment variables required for the installer:

ORACLE_BASE=/u01/app/oracle
ORACLE_SID=instancename
export ORACLE_BASE ORACLE_SID
unset ORACLE_HOME
unset TNS_ADMIN

The Oracle installer is a graphical one, so unless we've already generated a "response" file to do a non-interactive installation we need X. To set up a dummy X server and use VNC:

# apt-get install xserver-xorg-video-dummy vnc4server x11-xserver-utils xterm wm2

Run vnc4server as the oracle user and then kill it. This should make sure that /home/oracle/.vnc/xstartup exists; edit it and add the following two lines to the bottom, which make sure a window manager and terminal will be started correctly:

# su - oracle
$ vnc4server
$ vi /home/oracle/.vnc/xstartup
	x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" & 
	wm2 &
$ exit

Now run vnc4server again as the oracle user:

$ vnc4server

Now we connect to the server with a VNC viewer and run the installer from a terminal in the X session:

$ cd /usr/oracle_installer
$ ./runInstaller

Once the installation is complete, we need to change /home/oracle/.profile again ready for running Oracle rather than the installer:

ORACLE_BASE=/u01/app/oracle
ORACLE_SID=orcl				#(instancename)
ORATAB=/etc/oratab
export PATH=$PATH:/u01/app/oracle/product/11.2.0/dbhome_1/bin/
export ORACLE_BASE ORACLE_SID ORATAB
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
unset TNS_ADMIN

Here’s the init script for starting the server:

#!/bin/bash
#
# Run-level Startup script for the Oracle Instance and Listener
#
# chkconfig: 345 91 19
# description: Startup/Shutdown Oracle listener and instance

### BEGIN INIT INFO
# Provides:          oracle
# Required-Start:    $remote_fs $syslog $all
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Oracle database server
# Description:       Oracle database server
### END INIT INFO

ORA_HOME="/u01/app/oracle/product/11.2.0/dbhome_1"
ORA_OWNR="oracle"

# if the executables do not exist -- display error

if [ ! -f $ORA_HOME/bin/dbstart -o ! -d $ORA_HOME ]
then
        echo "Oracle startup: cannot start"
        exit 1
fi

# depending on parameter -- startup, shutdown, restart
# of the instance and listener or usage display

case "$1" in
    start)
        # Oracle listener and instance startup
        echo -n "Starting Oracle: "
        ## I think dbstart automatically starts the listener if it's not running
        ## previously the listener was started first.
        su - $ORA_OWNR -c "$ORA_HOME/bin/dbstart $ORA_HOME"
        su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl start"
        su - $ORA_OWNR -c "$ORA_HOME/bin/emctl start dbconsole"
        touch /var/lock/oracle
        echo "OK"
        ;;
    stop)
        # Oracle listener and instance shutdown
        echo -n "Shutdown Oracle: "
        su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl stop"
        su - $ORA_OWNR -c "$ORA_HOME/bin/dbshut $ORA_HOME"
        su - $ORA_OWNR -c "$ORA_HOME/bin/emctl stop dbconsole"
        rm -f /var/lock/oracle
        echo "OK"
        ;;
    reload|restart)
        $0 stop
        $0 start
        ;;
    *)  
        echo "Usage: $0 start|stop|restart|reload"
        exit 1
esac
exit 0

Put this in /etc/init.d/oracle, and set it to run at boot:

# chmod 755 /etc/init.d/oracle 
# update-rc.d oracle defaults

For the default database instance to start automatically it needs to be flagged in /etc/oratab; for the above setup this just means changing:

orcl:/u01/app/oracle/product/11.2.0/dbhome_1:N

To:

orcl:/u01/app/oracle/product/11.2.0/dbhome_1:Y

Lastly, check that the listener address is set to the current host IP/domain in /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora, and amend it if it's wrong.

We should now be able to start Oracle with:

# invoke-rc.d oracle start

If we get a message stating “listener supports no services” on startup, we just need to register the server with the listener, and we can do that from sqlplus with:

# su - oracle
oracle$ sqlplus SYSTEM

SQL> ALTER SYSTEM REGISTER;

So that's it: the Oracle instance should now be set up and running, the Enterprise Manager should be available at https://your-ip-or-domain:1158/em/console/logon/logon, and we can get to the database command line as superuser by running "sqlplus SYSTEM" as the oracle user.

Update 16/03/2013 – Upgrade from Squeeze to Wheezy gives us the error: “ORA-00845: MEMORY_TARGET not supported on this system”

This is because /dev/shm has now moved to /run/shm; Debian creates a symlink from /dev/shm to /run/shm, but this is not enough for Oracle and its Automatic Memory Management (AMM).

The easiest solution is to create a startup script that does the rebinding with mount -B (reference: https://forums.oracle.com/forums/thread.jspa?threadID=2301639).

Create an init script /etc/init.d/oracle-shm as follows:

#! /bin/sh

### BEGIN INIT INFO
# Provides:          oracle-shm
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6 
# Short-Description: Bind /run/shm to /dev/shm at system startup.
# Description:       Fix to allow Oracle 11g use AMM.
### END INIT INFO

# /etc/init.d/oracle-shm
#
#
case $1 in
  start)
    echo "Starting script /etc/init.d/oracle-shm"
    # Run only once at system startup
    if [ -e /dev/shm/.oracle-shm ]; then
      echo "/dev/shm is already mounted, nothing to do"
    else
      rm -f /dev/shm
      mkdir /dev/shm
      mount -B /run/shm /dev/shm
      touch /dev/shm/.oracle-shm
    fi
    ;;
  stop)
    echo "Stopping script /etc/init.d/oracle-shm"
    echo "Nothing to do"
    ;;
  *)
    echo "Usage: /etc/init.d/oracle-shm {start|stop} passed $1"
    exit 1
    ;;
esac

Make the script executable and run at startup before oracle starts:

# chmod 755 /etc/init.d/oracle-shm
# update-rc.d oracle-shm defaults 01 99