How to Set Up and Run a Review Board Instance on OpenShift

As of last week, I am officially an intern for Red Hat, Inc., tasked with extending the ReviewBoard code review system. Currently, I am working on adding API tokens and OpenID login support to ReviewBoard, primarily to improve its usability in the Fedora and Red Hat infrastructures.

So far I have mostly been busy reading up on the code and getting a demonstration instance up, so people can follow my progress as I go. Because of my previous experiences with OpenShift, I decided to use it to host this development instance.

Since other people might want to do this as well, here is a short explanation of how to accomplish it.

Development instance on OpenShift

There is an existing example of how to get ReviewBoard running on OpenShift, but that example uses the PyPI package, and thus the production release, of ReviewBoard.

This was unsuitable for me, as I want to use the instance to demonstrate my current progress to my manager, and thus need the latest development versions of djblets (a library written by the same authors with some reusable Django extensions) and of ReviewBoard itself.

For this setup, there is an explanation from the ReviewBoard maintainers on how to set up such an instance, but of course I did not want to use OpenShift as a normal host, and wanted to use the official git repositories to host everything.

For starters, as I did not want to use the normal WSGI server with the centrally installed pip packages, I used the Do It Yourself (DIY) cartridge.
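For reference, creating such an application with the rhc client looks roughly like the following (the application name reviewboard-dev here is just an example, not the one I actually used):

# Create a new OpenShift application based on the DIY cartridge
rhc app create reviewboard-dev diy-0.1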

As I wanted to be able to control which versions of djblets, rbtools, and reviewboard it uses, I decided to add those repositories as git submodules, which gave me the following (sanitized) directory structure in the OpenShift git repository:

.
|-- djblets
|-- .openshift
|-- rbtools
|-- reviewboard
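Adding the submodules themselves is just a few git commands; something along these lines should work (the upstream URLs are the ones I believe are current, so verify them before copying):

# Add the upstream repositories as submodules of the OpenShift application repository
git submodule add https://github.com/djblets/djblets.git djblets
git submodule add https://github.com/reviewboard/rbtools.git rbtools
git submodule add https://github.com/reviewboard/reviewboard.git reviewboard
git commit -m "Add djblets, rbtools and reviewboard as submodules"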

Note: with this layout in place, you could just put my deploy/start/stop scripts in place and be done. If you want to do that, feel free to skip my explanations and head to the Download section.

Deploy script

After this, I had to write the deploy/start/stop scripts for OpenShift, so that the instance sets everything up and starts/stops on every push.
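If I recall the DIY cartridge conventions correctly, these scripts live as executable files under .openshift/action_hooks in the application repository, so setting them up looks roughly like this:

# OpenShift runs these hooks on every push; they must be executable
mkdir -p .openshift/action_hooks
touch .openshift/action_hooks/{deploy,start,stop}
chmod +x .openshift/action_hooks/{deploy,start,stop}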

In the deploy script, I start by creating my own virtual environment manually, because the DIY cartridge does not know which programming language you are going to use:

if [ ! -d "$OPENSHIFT_DATA_DIR/venv" ];
then
        mkdir $OPENSHIFT_DATA_DIR/venv
        virtualenv $OPENSHIFT_DATA_DIR/venv
        source $OPENSHIFT_DATA_DIR/venv/bin/activate
fi

Because the default Python egg cache lives in a folder under the home directory that I do not have write permissions on, I point it at the data directory and create it manually as well:

export PYTHON_EGG_CACHE="$OPENSHIFT_DATA_DIR/eggs"
if [ ! -d "$PYTHON_EGG_CACHE" ];
then
        mkdir $PYTHON_EGG_CACHE
fi

After this, I can finally build all of the components as specified in the upstream documentation:

(
        source $OPENSHIFT_DATA_DIR/venv/bin/activate
        cd djblets
        python setup.py develop
)
(
        source $OPENSHIFT_DATA_DIR/venv/bin/activate
        cd rbtools
        python setup.py develop
)

For ReviewBoard, this was a bit harder, as I also have to initialize the database. I decided to do this on every deploy, and to create the database in the repository directory. This makes OpenShift delete the database on every push, which ensures that I start with a completely empty database every time.

(
        source $OPENSHIFT_DATA_DIR/venv/bin/activate
        cd reviewboard
        python setup.py develop
        python ./contrib/internal/prepare-dev.py <../dbinput
)

The dbinput file mentioned on the last line contains the details for the default user, as requested by the superuser creation script.

The contents of this file are in the following format:

yes
(username)
(email)
(password)
(password)

Make sure to end the file with an empty line, as otherwise the database creation script will get an EOF before it can read the password verification.
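Creating the file could look something like this; the values are placeholders, and note the blank line before EOF, which provides the empty line mentioned above:

# Write the answers for the superuser creation script to dbinput
cat > dbinput <<EOF
yes
admin
admin@example.com
secretpassword
secretpassword

EOF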

Start script

In the start script, I first initialize some environment variables and activate the virtual environment created during deploy. I also set HOME to a path in the TMP directory: Python stores some additional information under $HOME, and putting it there means it is cleared on every deploy, so every push gives me a clean deployment.

export PYTHON_EGG_CACHE="$OPENSHIFT_DATA_DIR/eggs"
source $OPENSHIFT_DATA_DIR/venv/bin/activate
export HOME=$OPENSHIFT_TMP_DIR/data
rm -rf $HOME
mkdir $HOME

Now the only thing remaining to do is... yeah, actually starting it!

nohup python ./reviewboard/manage.py runserver $OPENSHIFT_INTERNAL_IP:$OPENSHIFT_INTERNAL_PORT > $OPENSHIFT_HOMEDIR/diy-0.1/logs/server.log 2>&1 &

That was easy, was it not?

Stop script

In the stop script, I just stop every process that has "runserver" in its command line. I do not have to worry about stopping processes started by other users, thanks to the magic that is SELinux containers.

kill `ps -ef | grep runserver | grep -v grep | awk '{ print $2 }'` > /dev/null 2>&1
exit 0

If anyone knows a better way to do this, feel free to email me or put it in the comments.
