Here is a subject that does not get enough attention. With RHCS/CMAN, a node's heartbeat uses broadcast by default to let the other nodes know that it is alive, well, and a member of the cluster. Since broadcast traffic does not cross subnet boundaries, that default restricts us to a single network for our heartbeat, and in most cases this is fine… But what if we want to cluster across multiple networks? Say I have node01 on 10.1.1.0/24 and node02 on 10.1.2.0/24, and I want them to be able to communicate with each other? Well, this is not as trivial a feat as you might assume… In fact, I had to employ some of the finest network technicians (well, not really — but they are pretty smart!) in the south-east to get this to work… Continue reading »
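To give a taste of what is involved: a minimal sketch, assuming a RHEL 5-era cman/openais stack and routers willing to forward multicast between the two subnets. The 239.192.10.1 address is a placeholder, not the exact config from the post.

# In /etc/cluster/cluster.conf, inside the <cman> element, pin the
# heartbeat to a routable multicast address instead of the default:
#
#   <cman>
#     <multicast addr="239.192.10.1"/>
#   </cman>
#
# Bump config_version, then push the new config out to every node:
ccs_tool update /etc/cluster/cluster.conf

# Verify the address cman is actually using:
cman_tool status | grep -i multicast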

I manage a site that uses Google Apps for hosting their domain services (email, calendar, etc…), and they had a requirement to be able to email images to an inbox and have those images automatically inserted into a MySQL database, where a public-facing web application would later display the upload date/time, a comment about the image, and the image itself. The process needed to be fairly simple, but it also needed to use Gmail's SSL ports for pulling down the mail (is there even another way to do it nowadays?)… Continue reading »
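The shape of the pipeline, glued together from standard tools, looks roughly like the hypothetical sketch below; the account, database, table, and column names are mine, not the site's.

#!/bin/bash
# Pull mail from Gmail over POP3/SSL, extract image attachments,
# and insert them into MySQL. All names here are placeholders.
SPOOL=/var/spool/imageuploads
mkdir -p "$SPOOL" && cd "$SPOOL" || exit 1

# Fetch waiting messages over SSL (port 995); the password comes from
# ~/.fetchmailrc. Each message lands in its own spool file:
fetchmail --ssl --protocol POP3 --username uploads@example.com \
    --mda "/bin/sh -c 'cat > $SPOOL/msg.\$\$'" pop.gmail.com

# Unpack MIME attachments (munpack ships in the mpack package):
for msg in msg.*; do
    [ -e "$msg" ] && munpack -q "$msg" && rm -f "$msg"
done

# Insert each image as a hex literal so no binary escaping is needed:
for img in *.jpg *.png; do
    [ -e "$img" ] || continue
    hex=$(xxd -p "$img" | tr -d '\n')
    mysql -u webapp -p"$DB_PASS" uploads -e \
        "INSERT INTO images (uploaded_at, comment, data) VALUES (NOW(), '$img', x'$hex');"
    rm -f "$img"
done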

… What the? Ok, so the basic idea here is that we're going to create a cluster, share a block device from one server to the other nodes as a Global Network Block Device (GNBD), put it into an LVM configuration, and format it with Global File System (GFS) for file locking between nodes… And we're gonna do all of this natively with Red Hat Cluster Suite. This is a good, low-rent implementation of block device sharing in a cluster where iSCSI or FCP is not available to the hosts. It's better than NFS because we get real locking mechanisms, and we get our own fencing mechanism for the cluster (which, unfortunately, I won't be covering in this post). I've recently had the opportunity to do this as a proof of concept, and this is really cool stuff… Continue reading »
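The broad strokes, compressed into the commands involved: a sketch assuming RHEL 5 with the cluster suite packages installed and cman already running; the device paths and the cluster/filesystem names are placeholders.

# On the server exporting the block device:
gnbd_serv                                  # start the GNBD server daemon
gnbd_export -d /dev/sdb1 -e shared_disk

# On each client node, import everything that server exports:
gnbd_import -i gnbdserver.example.com      # appears as /dev/gnbd/shared_disk

# Layer clustered LVM on top (locking_type = 3 in lvm.conf, clvmd running):
pvcreate /dev/gnbd/shared_disk
vgcreate -c y vg_shared /dev/gnbd/shared_disk
lvcreate -l 100%FREE -n lv_gfs vg_shared

# Make a GFS filesystem with one journal per node (two nodes here):
gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 2 /dev/vg_shared/lv_gfs
mount -t gfs /dev/vg_shared/lv_gfs /mnt/shared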

The situation goes like this, if you can visualize it: you have a dedicated server hosted at an off-site location (disaster recovery, branch office, etc…), you have your primary location, and then you have a third-party location somewhere in the middle — perhaps cloud hosting… From your primary location to your secondary site, your transfer speeds are terrible, and if you're using this for a DR site, it quickly becomes agonizing when off-loading backups or other important data. From your tertiary location, however, you get great routing and great speeds to your secondary location, and from your primary location to your tertiary location it's as fast as you can go. At your DR site, you have an FTP server connected to a NAS, a SAN, or some other mass storage unit, and you need to be able to quickly and securely ship backups and other sensitive data over the line. With the routing problems from your primary site, you find yourself spending way too much time trying to figure out the best way to get the data from point A to point B rapidly. Somehow, you ponder, there must be a way to use the tertiary host as a hop in the trip from your primary site to your secondary site… Continue reading »
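To foreshadow the answer: OpenSSH alone can make the middle host a transparent hop. A sketch, with placeholder hostnames (relay.example.com is the tertiary box, ftp.dr.example.com is the DR site):

# Option 1: a transparent hop via ProxyCommand in ~/.ssh/config:
#
#   Host drsite
#       HostName ftp.dr.example.com
#       ProxyCommand ssh relay.example.com nc %h %p
#
# A plain copy then rides the two fast legs automatically:
scp backups.tar.gz drsite:/backups/

# Option 2: a local port-forward through the relay, handy when you
# want rsync or sftp pointed at a fixed local port:
ssh -f -N -L 2222:ftp.dr.example.com:22 relay.example.com
rsync -avz -e 'ssh -p 2222' /data/backups/ localhost:/backups/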

As Linux administrators, we generally host a range of multi-purpose scripts designed to duct-tape the infrastructure together. Nobody else in the organization knows how, when, or where 95% of these scripts run, or on what interval — it's a great tactic for job security. Inevitably, we're going to get a request at some point (and most of us already have, that's for sure) to have a script watch a folder on a server for a file to appear, and then move it somewhere else (usually with some data or filename massaging in the process). So, all of us have our own variation of this "move-file" script… Continue reading »
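Mine, boiled down to its skeleton, looks something like the sketch below; the paths and the filename massaging are placeholders, and inotifywait comes from the inotify-tools package.

#!/bin/bash
# Watch a drop folder and relocate each file as it finishes writing.
WATCH=/srv/incoming
DEST=/srv/processed

inotifywait -m -e close_write --format '%f' "$WATCH" |
while read -r f; do
    # The obligatory massaging: lowercase the name, stamp the date.
    out="$(date +%Y%m%d)-$(echo "$f" | tr 'A-Z' 'a-z')"
    mv "$WATCH/$f" "$DEST/$out" && logger "move-file: $f -> $out"
done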

On to something more interesting…

We use VMware ESX 3.5 for our virtualization solution, and I was tasked with finding a way to automate monitoring of CPU and memory usage for ESX guests. We use VirtualCenter to manage and maintain all of our VMs, and we use a network monitoring solution to watch all of the devices in our infrastructure. Relying on the VMware guest itself for performance data has proven unreliable in the past, but the performance data that VirtualCenter provides is solid, and that is what we wanted to monitor from our centralized solution. Continue reading »

Just this past week, I was given a programming task to take a Microsoft Word template, which had been saved as an XML file (the WordprocessingML format), and auto-populate all of the bookmarks in the document with dynamic data from a database. The idea was to take the name of a bookmark (for example, "FirstNameLastName") and populate only that field with the database data, leaving everything else untouched and untransformed. This allows an end user to manage the static content and formatting of the document without programming intervention. We can therefore have one person from each department in charge of modifying the legal wording, or the wording to customers, without programming having to create a new document template for every change. Continue reading »
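I'll save the real implementation for the post, but here is the gist as a deliberately crude sketch, not the approach from the post itself. It assumes each bookmark's insertion point is a single <w:t> run sitting just after its Word.Bookmark.Start milestone on the same line, that the values are plain text, and the table and column names are invented for illustration.

#!/bin/bash
TEMPLATE=letter-template.xml
OUT=letter-filled.xml
cp "$TEMPLATE" "$OUT"

# Pull bookmark-name/value pairs from the database and splice each
# value into the text run following its bookmark start marker:
mysql -N -u app -p"$DB_PASS" crm \
    -e "SELECT bookmark, value FROM letter_fields WHERE letter_id = 42" |
while IFS=$'\t' read -r bookmark value; do
    sed -i "s|\(w:name=\"$bookmark\"[^>]*/>.\{0,80\}<w:t>\)[^<]*|\1$value|" "$OUT"
done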

This was amazingly hard to find an answer to when I went looking for it.

The setup: a running JVM, with some thread doing something stupid like eating 100% CPU. For the purposes of my example, I was using Jetty to serve a homegrown web application, with an Apache front-end passing requests to Jetty over the AJP13 protocol via mod_jk. Continue reading »
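Since it's so hard to find, here is the punch line up front as a sketch; it assumes a Sun JDK with jstack on the PATH, and the PID and thread ID are placeholders.

JVM_PID=4242

# 1. Find the hot thread: -H makes top show individual threads.
top -b -n 1 -H -p "$JVM_PID" | head -20

# 2. Convert the offending thread's PID (say 28015) to hex, because
#    jstack reports native thread IDs as nid=0x...:
printf 'nid=0x%x\n' 28015        # -> nid=0x6d6f

# 3. Pull a thread dump and locate that nid to get the Java stack:
jstack "$JVM_PID" | grep -A 20 'nid=0x6d6f'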

I think that the new authentication configuration packages that come with RHEL6 will make this a little bit easier, but I haven't played around with them yet, so here's what I've got for making this stuff work.

Assumptions: I'm a domain administrator w/ the administrative tools pack installed (to get ADUC, Active Directory Users and Computers).

The technology: RHEL 5.x (I've done this on 5.1 through 5.5), the Samba server and client packages installed, and Windows Server 2003/Active Directory. Continue reading »
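The core of it, heavily abbreviated: EXAMPLE.COM and dc01 are placeholders, and smb.conf still needs security = ads, the realm, and idmap ranges set before any of this works.

# Wire nsswitch and PAM up to winbind in one shot:
authconfig --enablewinbind --enablewinbindauth \
    --smbsecurity=ads --smbrealm=EXAMPLE.COM \
    --smbservers=dc01.example.com --enablemkhomedir --update

# Get a Kerberos ticket as a domain admin, then join the machine:
kinit Administrator@EXAMPLE.COM
net ads join -U Administrator

# Sanity checks: the trust secret, then user resolution:
wbinfo -t
wbinfo -u | head
getent passwd 'EXAMPLE\someuser'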

I was recently presented with a situation where we needed a standard ext3 partition greater than 2.0TB on a basic server installation of RHEL5. This was for a backup disk that would host dumps and exports from an Oracle instance, on a redundant SAN LUN presented as a thick-provisioned 2.5TB volume. Continue reading »
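The spoiler, for the impatient: an msdos disk label tops out at 2TB, so the LUN needs a GPT label before ext3 ever enters the picture. A sketch with a placeholder device name (and yes, this wipes the disk):

parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary ext3 0 2500GB
mkfs.ext3 -L oraback /dev/sdb1
mount /dev/sdb1 /backups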
