Friday, May 15, 2009

DRBD

from: http://www.drbd.org/ and http://oss.linbit.com/drbd/

DRBD® refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network-based RAID-1.
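
The "network RAID-1" idea can be sketched in a few lines of Python. This is only a toy illustration, not DRBD's actual protocol or code; the peer host name, port and file path are made-up placeholders. Every write is applied to the local backing store and also shipped to the peer, while reads are served purely locally.

    # Toy sketch of network RAID-1 style mirroring (not DRBD's real protocol).
    # PEER and BACKING_FILE are hypothetical placeholders.
    import socket
    import struct

    PEER = ("peer.example.com", 7788)        # hypothetical standby node
    BACKING_FILE = "/tmp/local_backing_store"

    def replicate_write(offset, data):
        """Write locally, then mirror the same block to the peer."""
        # 1. write to the local 'lower level block device' (here just a file)
        with open(BACKING_FILE, "r+b") as dev:
            dev.seek(offset)
            dev.write(data)
        # 2. ship (offset, length, payload) to the standby node,
        #    which would write the same bytes at the same offset
        with socket.create_connection(PEER) as s:
            s.sendall(struct.pack("!QI", offset, len(data)) + data)

    def local_read(offset, length):
        """Reads never touch the network."""
        with open(BACKING_FILE, "rb") as dev:
            dev.seek(offset)
            return dev.read(length)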

In the illustration above, the two orange boxes represent two servers that form an HA cluster. The boxes contain the usual components of a Linux™ kernel: file system, buffer cache, disk scheduler, disk drivers, TCP/IP stack and network interface card (NIC) driver. The black arrows illustrate the flow of data between these components.

The orange arrows show the flow of data as DRBD mirrors the data of a high availability service from the active node of the HA cluster to the standby node of the HA cluster.




DRBD is a block device which is designed to build high availability clusters. This is done by mirroring a whole block device via a (dedicated) network. You could see it as a network RAID-1.

DRBD takes over the data, writes it to the local disk and sends it to the other host. On the other host, it writes the data to the disk there.

The other components needed are a cluster membership service, typically Heartbeat, and some kind of application that works on top of a block device.

Each device (DRBD provides more than one of these devices) has a state, which can be 'primary' or 'secondary'. The application is supposed to run on the node with the primary device and to access that device (/dev/drbdX; it used to be /dev/nbX). Every write is sent to the local 'lower level block device' and to the node whose device is in 'secondary' state. The secondary device simply writes the data to its lower level block device. Reads are always carried out locally.

If the primary node fails, Heartbeat switches the secondary device into primary state and starts the application there. (If you are using it with a non-journaling filesystem this involves running fsck.)

If the failed node comes up again, it is a new secondary node and has to synchronise its content from the primary. This, of course, happens in the background, without interruption of service.

And, of course, only those parts of the device that have actually been changed are resynchronised. DRBD has always done intelligent resynchronisation when possible. Starting with the DRBD 0.7 series, you can define an "active set" of a certain size. This makes it possible to have a total resync time of 1–3 minutes, regardless of device size (currently up to 4 TB), even after a hard crash of an active node.

The ChangeLogs can be found here:
http://git.drbd.org/?p=drbd-8.0.git;a=blob;f=ChangeLog;hb=HEAD
http://git.drbd.org/?p=drbd-8.2.git;a=blob;f=ChangeLog;hb=HEAD
http://git.drbd.org/

The DRBD homepage is http://www.drbd.org/.
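
As a rough illustration of why the resync time is bounded, here is a toy sketch of the active-set idea in Python. This is my own simplification, not DRBD's activity log implementation; the extent size and set size are invented numbers, and the real active set is persisted to disk. The primary tracks a fixed number of recently written extents, and after a hard crash only those extents have to be resynchronised.

    # Toy sketch of an "active set" of recently written extents.
    # After a hard crash, only extents still in this set can be out of
    # sync, so the resync work is bounded by the set size, not the
    # device size. Sizes below are illustrative, not DRBD defaults.
    from collections import OrderedDict

    EXTENT_SIZE = 4 * 1024 * 1024        # 4 MiB extents (illustrative)
    ACTIVE_SET_SIZE = 127                # number of extents kept "hot"

    active_set = OrderedDict()           # extent number -> True, LRU ordered

    def note_write(offset):
        """Record that an extent is being written; evict the coldest one."""
        extent = offset // EXTENT_SIZE
        if extent in active_set:
            active_set.move_to_end(extent)        # already hot, refresh it
        else:
            active_set[extent] = True             # would be persisted on disk
            if len(active_set) > ACTIVE_SET_SIZE:
                active_set.popitem(last=False)    # cold extent is clean again

    def extents_to_resync_after_crash():
        """Only the (bounded) active set needs to be copied to the peer."""
        return sorted(active_set)

Because the set has a fixed size, the worst-case amount of data to copy after a crash is independent of how large the mirrored device is.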
