Re: [Hampshire] High availability database

Author: Chris Simmonds
Date:  
To: adrian
CC: Hampshire LUG Discussion List
Subject: Re: [Hampshire] High availability database

Adrian Bridgett wrote:
> On Tue, Sep 15, 2009 at 17:15:18 +0100, Chris Simmonds wrote:
>> One option I have considered is using, say, MySQL with one master node
>> replicating to all the others and some mechanism to elect a new master
>> if the original went down. But, that sounds messy. There must be a
>> neater solution?
>
> MySQL + LinuxHA + DRBD is the typical solution. You could use MySQL
> with replication and failover (or even multi-master if you are brave).
>
> A proper clustered filesystem (e.g. OCFS2 on top of DRBD8) may be better.
>
> However, be aware that none of these are simple configs and you
> really want to do thorough testing beforehand - I've seen some setups
> that were incredibly brittle - so any theoretical improvement in
> uptime may in practice become a major drop.
>
> For example when I was looking at GFS last year:
> http://www.smop.co.uk/blog/index.php/2008/02/11/gfs-goodgrief-wheres-the-documentation-file-system/
>
> Adrian


Hi and thanks to everyone who replied. I'm busy researching some
possibilities at the moment. However, just to clarify: the issue is
high availability among the 50 or so nodes, so that any node can go
down and come back up again without impacting any of the others. There
are no servers as such; each node is potentially both a server and a
client, so with a traditional database there would need to be some
mechanism to elect a new master if the current master went down. The
trickiest case is when the network gets disrupted so that you end up
with two separate segments for a while - each with its own master -
which then get joined together again (the classic split-brain problem).
On the other hand, the amount of data is quite small - maybe a thousand
rows in SQL terms. It doesn't have to be SQL at all, actually; I'm just
using that as an example.
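
To give a feel for what I mean, below is a very rough sketch of
quorum-based election - purely hypothetical Python with made-up names,
not anything I'm actually running. The idea is that a node only claims
mastership if it can see a strict majority of the cluster, which is
what stops two segments each electing their own master during a
partition:

    # Sketch of quorum-based master election (hypothetical).
    # A node may only become master if it can reach a strict majority
    # of all nodes, so two network segments can never both hold a
    # quorum at the same time.

    CLUSTER = set(range(50))            # node IDs 0..49 (all 50 nodes)

    def elect_master(reachable):
        """Return the elected master's ID, or None if no quorum.

        reachable -- set of node IDs this node can currently see,
                     including itself
        """
        quorum = len(CLUSTER) // 2 + 1  # strict majority: 26 of 50
        if len(reachable) < quorum:
            return None                 # minority segment: no master
        return min(reachable)           # deterministic: lowest ID wins

    # Example: a 30/20 network split. Only the larger segment gets a
    # master; the smaller one waits for the network to heal.
    big, small = set(range(30)), set(range(30, 50))
    print(elect_master(big))            # -> 0    (30 >= 26)
    print(elect_master(small))          # -> None (20 < 26)

The cost is that a minority segment has no master at all until the
network heals, but rejoining is then trivial - there was never a
second master whose writes would need reconciling.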

Anyhow, thanks again. I'll post again when I have had time to mull over
the various suggestions and try a few things out.

Bye for now,
Chris.