
The architecture of the DW mud server environment

Once upon a time, in a reality far far away, MUDs had a pretty simple architecture. You had the mud driver, which ran the mudlib, and all the files sat on a shell account on a Unix box somewhere. You ran the driver, told it where the mudlib was and opened it up to the world.

Well, times have changed a bit. As things have got more and more complex, various support services have been added through external programs and the like. Additionally, our demands for CPU and IO access mean that we need to run on a dedicated host rather than sit in a shell account. However, at its core the mud is still a combination of driver and mudlib, and that places certain restrictions on everything.

People often ask what it takes to run the mud in this day and age, so the information below should be read in conjunction with the diagrams at http://discworld.starturtle.net/lpc/about/DWArch-Public.pdf (you may need to use CTRL-SHIFT-+ on loading to rotate to the correct view), and in the knowledge that not everything is documented yet and this is still a work in progress.

At the core of Discworld MUD, as with any LPMud, you have the driver, which in our case is the excellent FluffOS maintained by our very own Wodan. FluffOS provides the core services that allow the DW mudlib to run on top of it, providing the game interface you all know and love (or hate, as the case may be). The story doesn't end there, however! The mudlib provides a number of services beyond just the telnet interface we log in to; the most visible of these is the webserver. The DWlib webserver is written entirely in LPC, the interpreted language provided by the driver, and because of this, and because it runs inside the mud environment, it can access all that funky information stored inside the MUD's data structures, like character information and message board contents. The other most visible service is the FTP server, again written in LPC, which allows creators to upload and download projects they are working on.
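The real webserver is LPC inside the mudlib, but the idea is easy to sketch in Python: because the HTTP handler lives in the same process as the game, it can answer requests straight out of process memory. Everything below (the game-state dictionary, the `/who` path) is made up for the demo, not taken from the actual mudlib.

```python
# Sketch of an in-process webserver: the HTTP handler runs inside the
# "game" process, so it can serve live game state with no external
# database or IPC round trip. All names here are hypothetical.
import http.server
import json
import threading
import urllib.request

# Hypothetical in-memory game state, standing in for the MUD's data
# structures (character information, board contents, etc.).
GAME_STATE = {"players_online": 3, "boards": ["frog", "cwc"]}

class InProcessHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Read the live state directly from process memory.
        body = json.dumps(GAME_STATE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), InProcessHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(
        f"http://127.0.0.1:{server.server_port}/who") as resp:
    data = json.loads(resp.read())
server.shutdown()
```

The LPC version works on the same principle, just with the driver's own socket efuns instead of `http.server`.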

The mudlib also depends on a range of external services provided by the hosting environment. One of the most used of these is probably RCS, the Revision Control System, which allows creators to track the revisions they make to projects and to easily roll back to previous versions in the event of serious problems. We also have links to programs like grep and zcat for performance reasons, as well as some custom programs, such as the ones we use for handling some of the path routing for NPCs. There are also the Wiki pages, which are handled by the widely used MediaWiki. Finally there are the two big beasts: MySQL and Apache. MySQL is used to store an awful lot of information for the mud, things such as bug reports and Osric votes, as well as NPC routing information, lookmap information and all the Wiki data. Apache is obviously used for serving up the web pages that MediaWiki generates, but also the web pages the mud generates.
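The "links to programs like grep" pattern is worth a quick illustration: rather than scanning a large file line by line in interpreted code, you hand the whole job to an external tool and read back the result. This Python sketch shows the shape of it; the log file and search pattern are invented for the demo, not anything from the real mudlib.

```python
# Sketch of delegating heavy text searching to an external program
# (grep) instead of scanning the file in interpreted code.
import os
import subprocess
import tempfile

# Create a stand-in log file to search.
tmp = tempfile.NamedTemporaryFile("w", suffix=".log", delete=False)
tmp.write("Pishe heals you.\nHat takes your hat.\nPishe smiles.\n")
tmp.close()

# grep -c prints the number of matching lines; the external process
# does the IO-heavy work at native speed.
result = subprocess.run(
    ["grep", "-c", "Pishe", tmp.name],
    capture_output=True, text=True, check=True,
)
matches = int(result.stdout.strip())
os.unlink(tmp.name)
```

The mudlib's equivalent goes through the driver's external-command support, but the trade-off is the same: a process spawn in exchange for native-speed IO.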

But wait a second - didn't I say that the mudlib provides its own webserver?

It does, but we can only run a single website on port 80 for any given hostname. Enter mod_proxy, which allows us to internally proxy the DW mudlib webserver through the Apache webserver. As a result, whenever you see a webpage from the mud that has "lpc/" as the very first directory in the URL, it's coming from the mud's internal webserver; if it's not there, it's coming directly from Apache (which usually means a wiki page). There are also some custom Apache modules that allow us to authenticate webpages served by Apache, like the Wiki, directly against your MUD credentials.
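In Apache terms, that split looks something like the fragment below. This is an illustrative sketch, not the live config: the internal port (8080 here) and the exact paths are assumptions, but ProxyPass and ProxyPassReverse are the standard mod_proxy directives for this job.

```apache
# Illustrative only -- port and paths are assumptions, not the real
# Discworld config. Anything under /lpc/ is forwarded to the mud's
# in-process webserver; everything else is served by Apache itself
# (which usually means MediaWiki).
<VirtualHost *:80>
    ServerName discworld.starturtle.net

    ProxyPass        "/lpc/" "http://127.0.0.1:8080/lpc/"
    ProxyPassReverse "/lpc/" "http://127.0.0.1:8080/lpc/"
</VirtualHost>
```

ProxyPassReverse matters here: it rewrites Location headers in redirects coming back from the internal server, so clients never see the internal address.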

Moving on to the hardware: DW has come a long way from simply running from a shell account on a server somewhere. This has been driven by a variety of factors, including the cost of hardware, demands for speed on CPU and disk IO, what we can actually get our hands on and what is in fact practical!

Currently DW runs on virtualised hardware. The server farm consists of two hosts running VMware ESXi 4.1, which run a variety of tasks other than Discworld. Each host has a connection to the VLAN with the publicly routable internet address space, plus an additional 1Gbps connection into the internal network for management and access to the iSCSI storage. Storage is provided by a Thecus SAN, which has a 0.6TB RAID5 array of low spindle count discs and is connected to the network via 2 x 1Gbps connections bonded into a single 2Gbps link. The internet connection is provided by Zen Internet and is surprisingly low speed at 512K down/256K up. There is a second internet connection from Virgin Media which is primarily for Sojan's personal net access, but which also provides the connection point for the IPv6 tunnel from tunnel broker SIXXS, as well as a point of access for remote administration and maintenance. The firewalling and internet routing is handled by a Cisco Systems router.

The specific VM that Discworld runs within is configured with 4GB of memory and up to 2.7GHz of CPU time, although typically it uses less than 2GHz, which is where its reservation is set. The host itself runs SLAMD64, a 64-bit version of Slackware; this has now been discontinued, as Slackware has an official 64-bit version, but we have not migrated to it yet. There are several virtual discs to store data and boot from, and the specific volume the mudlib resides on is an LVM volume, which allows us to seamlessly snapshot the disc and perform backups on static data; the backups can then run very slowly, minimising the impact on the mud. The host also provides a ramdisk, which the mud uses to cache often used data, allowing much faster access and reducing the potential for IO bottlenecks.

So that's the basic structure - it's likely to grow even further over time, but for now just getting it all tied down, stable and as redundant as is practical is the main target.