Thursday, June 3, 2010

Hard Disk Sentinel

A few months back I came across this great program called 'Hard Disk Sentinel' for my Extreme Media Server (HD Sentinel's Site).

For the extreme number of hard drives that we have to manage, a program that monitors the health of your drives is essential. This program not only makes that extremely simple, but it also supports the port multipliers and Silicon Image controllers we use.

Here are a couple of screen shots from my EMS:

The first one is a summary screen that can hover in the corner showing the current status of each drive.


This one shows what you get if you enable toolbar reporting - notice all the drives are listed with their current temps, and the boxes will turn yellow and red as issues warrant.
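
If you're curious what this kind of monitoring boils down to under the hood, here's a rough sketch of the same idea using smartmontools' smartctl instead of HD Sentinel. To be clear, this isn't HD Sentinel's code or API - just a minimal illustration, and the device list and temperature thresholds are made up for the example (device names are Linux-style here):

import subprocess

WARN_C, CRIT_C = 45, 55                        # made-up example thresholds
DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # example device list

def drive_temp(dev):
    # Ask smartctl for the SMART attribute table and pull out Temperature_Celsius.
    out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return int(line.split()[9])        # RAW_VALUE column
    return None

for dev in DRIVES:
    t = drive_temp(dev)
    if t is None:
        status = "no temp data"
    elif t < WARN_C:
        status = "OK"
    elif t < CRIT_C:
        status = "WARNING"
    else:
        status = "CRITICAL"
    print(f"{dev}: {t} C [{status}]")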


And here is a screen shot of the main program window:

Monday, December 14, 2009

EMS "Alpha Rack"

One of the reasons I decided to build a Backblaze-inspired Extreme Media Server was that I wanted to replace 4 of my custom-built "mini-EMSs" (see pictures below).

With my new full-sized Backblaze-inspired EMS, I'll be able to replace all 4 of the mini-EMSs you see below.

Please don't give me too much grief on the messy nature of this particular install - as I'm sure some can appreciate, this is definitely a constant work in progress. More an R&D-type lab than a finished install :).

By the way, these mini-EMSs were actually built using the same basic idea as the Backblaze boxes. The main difference was that I used straight SATA backplanes, which were connected to a Port Multiplier "External" Card/Board that then finally went into the Silicon Image Controllers. While this definitely created a cabling nightmare, the solution worked and worked well.

One of the things I do like about the old solution is having removable drive bays...








For those curious, the orange cables are fiber optic cables feeding the HDMI runs/extensions to various TVs in the house. The laptop above the mini-EMSs is used as a poor man's lighting controller for various items in the home. The rest is pretty much just a fire hazard!

Don


Wednesday, December 2, 2009

EMS FAQs

I recently got an e-mail with general questions on the EMS, and I thought it would be best to answer them on the blog, as I'm sure others have had similar questions. If anyone has any other input, please don't hesitate to add it.

As an aside, it has been really exciting to get all the e-mails from people working on their own EMS. While most are being used as business storage servers (practical but a bit boring!), there are a few insane folks doing it for their media server at home, which to me is the best use of all!!!

Now to the questions:

1. Streaming with Server 2008 R2. How is it done? Everything I’ve read tells me to use WHS but there is a drive limit with WHS and I want a 64-bit based system. I know nothing about Server 2008 (R2 or not) so I know I’m going to have to do some reading. Do you have any suggestions? I will be ripping my DVD and Blu-ray collection and need the ability to stream multiple discrete 1080p outputs.

I don't have a lot of experience with the WHS product – sorry! I briefly looked at it and saw that the only item it offered to ME at least was server-side support for iSCSI hosting, and that was easy enough to add to the standard Windows Server line with a 3rd party product.

This is obviously just my opinion, but I'm not a huge fan of the specialized, marketing-department-created versions of Windows Server (i.e. Small Business Server, WHS, etc.). They just always seem like an afterthought, and I like the flexibility I get with the full versions. Obviously, there are other considerations (licensing costs, extra "features", etc.), but for me, I really like Server 2008 R2 x64. I would also consider Ubuntu or similar if I didn't have access to the Microsoft stuff. No matter what though, you definitely want to stick with a 64-bit OS for the server.

Take a look at the answer to #3 for more details on streaming, etc.

2. Redundancy. Are you just mirroring your drives like a RAID 1 setup? Are you using server quality drives?

Right now I'm using just desktop grade SATA drives. Mostly Western Digital 1TB and 1.5TB units.

I decided a while ago that I wasn't going to go the RAID route, as it was important for me to be able to remove any hard drive and access it in another workstation without a lot of fuss.

I've also been bitten more times than I care to remember by RAID arrays, and while they offer added safety, they also create additional failure points and risks. In the end, I also don't have any data that really matters. If I lose a drive of media, so be it. All my important data I just script and back up to a spare drive instead of RAIDing – that way I also get a bit of an archive in case I modify or delete an important file.
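
For anyone wondering what "script and back up to a spare drive" looks like in practice, here's a minimal sketch of the idea in Python. The paths below are just placeholders and this isn't my exact script - it simply copies changed files to the spare drive and tucks the old version into a dated archive folder first, which is where the "bit of an archive" comes from:

import os, shutil, filecmp, datetime

SRC = r"D:\ImportantData"                     # placeholder: data that matters
DST = r"E:\Backup\ImportantData"              # placeholder: spare drive
ARCHIVE = os.path.join(r"E:\Backup\Archive", datetime.date.today().isoformat())

for root, dirs, files in os.walk(SRC):
    rel = os.path.relpath(root, SRC)
    dst_dir = os.path.join(DST, rel)
    os.makedirs(dst_dir, exist_ok=True)
    for name in files:
        src_file = os.path.join(root, name)
        dst_file = os.path.join(dst_dir, name)
        if os.path.exists(dst_file) and filecmp.cmp(src_file, dst_file, shallow=True):
            continue                          # unchanged - nothing to do
        if os.path.exists(dst_file):
            # keep the previous copy so an accidental edit/delete isn't fatal
            arc_dir = os.path.join(ARCHIVE, rel)
            os.makedirs(arc_dir, exist_ok=True)
            shutil.move(dst_file, os.path.join(arc_dir, name))
        shutil.copy2(src_file, dst_file)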

For me, the only reason I would RAID with the EMS would be for performance. However, since a single drive is already capable of maxing out the gigabit network connection, RAID wouldn't buy me any real-world performance for my type of usage.

3. What type of network backbone are you using to stream to your front ends? I’m thinking that I need 6E or fiber to handle the multiple 1080p feeds. Could a 10Gig controller be installed on the system? Or should I use a board with two Gig controllers and use them in combination? (I really know nothing about this stuff.)

In my case, I’m technically not streaming the content. I am more accurately storing the media files on the EMS and then playing them back via a dedicated Windows 7 Media PC. This Media PC just accesses the media files via a share on the EMS to play back the media on demand.

I’m currently using an ATI 46xx Series Card on the Media PC that supports HDMI output as well. You could go with a higher end card, but this was a good balance for me.

I typically use VLC (http://www.videolan.org/) for the player on the Media PC as it supports the greatest range of codecs/media files with the least amount of fuss.
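
As a trivial example of what the Media PC side looks like, this is roughly all it takes to kick off playback straight from the EMS share. The share and file names below are made up - the point is simply that VLC plays the file over the network, with nothing copied locally first:

import subprocess

VLC = r"C:\Program Files\VideoLAN\VLC\vlc.exe"   # default VLC install path
MOVIE = r"\\EMS\Media\Movies\SomeMovie.mkv"      # hypothetical UNC path on the server

subprocess.Popen([VLC, MOVIE])                   # VLC plays it straight off the share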

I have been able to play up to 24 movies of mixed DVD/BR quality without any issues using a single gigabit network adapter in the server and a gigabit network adapter in the Media PC. In a home environment, I don't think you'll see the benefit of using 10Gb or dual 1Gb adapters. Possibly, if you move a lot of data to and from your main workstation while playing media concurrently, having a second 1Gb adapter on the server might come in handy?
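
The back-of-the-envelope math backs that experience up. Here's a rough bandwidth budget - the bitrates are just typical ballpark figures for DVD and Blu-ray material, not measurements from my setup:

# Rough budget for concurrent playback over a single gigabit link.
GIGABIT_USABLE_MBPS = 900            # ~1000 Mb/s wire speed, minus protocol overhead
DVD_MBPS, BLURAY_MBPS = 8, 35        # typical average bitrates (ballpark)

mixed = 12 * DVD_MBPS + 12 * BLURAY_MBPS      # 24 mixed streams
print(f"24 mixed streams: ~{mixed} Mb/s of ~{GIGABIT_USABLE_MBPS} Mb/s usable")
# -> roughly 516 Mb/s, comfortably inside one gigabit connection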

As an aside, I also use the HDMI Matrix 4x4 product from Gefen (http://www.gefen.com/kvm/dproduct.jsp?prod_id=5307) to distribute my media content in the home to our 4 TVs.

In my setup, I have 3 HD-DVRs from DirecTV and 1 Media PC. All are set up to output 1080i or 1080p over HDMI and are plugged directly into the Gefen 4x4 Matrix, which in turn allows you to view any HDMI source on any of the 4 TVs (think of a basic switcher device), or view any of the 4 sources on multiple TVs simultaneously (i.e. an HDMI distribution device).

Because I have my equipment centralized and I want digital end-to-end, I take the HDMI output of the 4x4 Matrix and go to an HDMI Fiber Extender for the "last mile" (http://www.gefen.com/kvm/dproduct.jsp?prod_id=7986). These are expensive, but they just work. I tried the CAT5e HDMI extenders and had nothing but problems - they are very sensitive to grounding noise and IMHO just not worth the effort. You'll pay a lot more for the fiber extensions, but they work and work well.

4. Future proofing. Would you suggest waiting for a SATA 3.0 server grade motherboard?

I wouldn't - the only item your motherboard's integrated SATA controller will be connected to is the boot drive for your server, and there currently just aren't any single drives that can max out the "old" SATA limit of 3Gb/sec. You'd need a single drive that can move more than ~300MB/sec (3Gb/sec raw, less the encoding overhead) to see any benefit, and they don't exist (yet!), although a good Solid State Drive plays around the 200MB/sec mark…
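
For the curious, here's the math behind that ceiling. The 8b/10b encoding overhead is part of the SATA spec; the drive numbers are just ballpark 2009-era figures:

# SATA II "3Gb/sec" usable throughput vs. what a single drive can actually push.
line_rate_mbps = 3000
usable_mb_s = line_rate_mbps / 10       # 8b/10b encoding: 10 bits on the wire per data byte
print(f"SATA II ceiling: ~{usable_mb_s:.0f} MB/s")      # ~300 MB/s

hdd_mb_s = 110     # fast 7200rpm desktop drive, sustained (ballpark)
ssd_mb_s = 200     # good 2009-era SSD, sequential read (ballpark)
print(f"Headroom left vs. HDD: ~{usable_mb_s - hdd_mb_s:.0f} MB/s, vs. SSD: ~{usable_mb_s - ssd_mb_s:.0f} MB/s")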

I also think the spec and the products are a bit too new at this point. When possible, I like to wait for revision #2 of a product before I put it in a server that, for the most part, I just want to work.

5. Where can I buy the BackBlaze Custom 4U Server Pod Case?

I ordered my case from:

Protocase Inc.
Phone: 1-866-849-3911 (Toll Free)
Fax: 1-902-567-3336
http://www.protocase.com/

I ended up with what was possibly a slightly older version of the case (REV7?). They offered to upgrade it after noticing the mistake – something about a new design that had flames instead of the standard fan-grill-type pattern at the back edge. I looked at it and it wasn't a big deal for me.

By the way, these guys were great to work with!

6. Would it make sense to separately RAID 1 the O/S for redundancy? I’m thinking that it’s only important in WHS as it will maintain the integrity of the shares.

I would definitely consider it. Mounting the second drive would be a pain, but it's definitely possible. I also almost always do a simple Windows mirror (software RAID 1) for my boot drives – I can't tell you how many times that setup has saved me a ton of time. You also get better read performance, which might come in handy if you're doing other stuff with the server.

With that said, I currently don’t have it in my setup, but I might add it. My only worry might be the additional power draw. In a perfect world, maybe 2 SSDs?

7. Would it be insane to consider SSD drives? At least for the O/S?

I'm currently using a single Intel SSD for the boot drive. I just really like the speed of the better units and have gotten spoiled. I also really like the lack of heat production, which is always nice. I personally only use Intel SSDs right now and, when the budget allows, their Extreme Edition drives in a stripe array (RAID 0) can't be beat for raw read and write performance. However, their mainstream version of the drive would be perfect for the EMS, and if you wanted a performance boost and extra protection, just do a simple RAID mirror with two of them.

You could also use some SSDs in the PM bays as well if you wanted, but just be careful to spread them out, as 2 really good SSDs would max out a 5-port PM backplane's bandwidth, which is limited to ~300MB/sec of usable throughput (one 3Gb/sec SATA link shared by all 5 bays).

Tuesday, December 1, 2009

Ryan Banham's Storage Server

Ryan Banham was kind enough to send me pictures of his mostly completed storage server. He is currently setting up the hardware RAIDs and doing stress/design testing.












Also, here are some tips/notes from Ryan that might be helpful...

1 - I realized just now that mine doesn't really like to be moved. It acts up when moved, and the RAID cards have to be reseated. This may have to do with the bottoms of the cards sticking out of the case and the general flexibility of a fully loaded case. It may be a good idea to build the pod in place instead of building it and then moving it.

2 – With the reference fan setup, cooling is more than adequate. Surveying the drive temperature status via SMART shows they stay within their ideal thermal range with an intake temperature of 72F. The same goes for the motherboard environment, which under heavy load runs about 15 degrees below its maximum temperature. The CPU, even under full load for days, never gets within 25-30 degrees of its throttling temperature. Because of this, I went with a quieter, lower-power intake fan set. Under load the drives stay around 25C, and everything else is well under its thermal limits too.

3 – With only one power supply, there is a gaping hole in the back of the case. I have built a plexi cover that I will install soon to keep large objects like a hand or a tool out of the case. I have also thought about putting an exhaust fan there - say an 80 or 90mm PWM fan powered from the board.

Wednesday, November 18, 2009

Streaming Media - Extreme Style!

I thought it would be fun to see how many movies I could stream/play on my PC using my Big Red Server as the source. I stopped at 16, which works out to 8 per screen on my dual-screen setup.



Even with all 16 movies going, though, there was still plenty of bandwidth and resources left on the server.

It was interesting hearing all 16 movies at once!

I'm sure you can do this in other OSes, but the 'Show Windows Side by Side' option in Windows 7 is nice...

Also, here's an updated picture of the EMS as of today:




Don

Sunday, October 18, 2009

Wiring up the PMs for Power

Let's delve into wiring the power for the Port Multipliers. This is probably one of the most important areas to get right; otherwise you'll be chasing intermittent issues and ripping apart the server, which is time consuming.

In my case, I went with just a single 1200W Thermaltake power supply. I did this mainly because I hate cramped cases, and 2 power supplies would make the case cramped and make it difficult to access components on the motherboard. The second, and probably more important, reason is that I already had a 1200W PS lying around!

If you do go the two power supply route instead, I'd recommend a 750W or larger power supply to power the drive/PM section. On initial spin-up, these drives pull a lot of power.
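
To put some rough numbers on that - the per-drive figures below are typical 3.5" desktop-drive datasheet values, not measurements of my exact drives:

# Rough spin-up vs. running power budget for a 45-drive drive/PM section.
DRIVES = 45
SPINUP_12V_A, SPINUP_5V_A = 2.0, 0.7       # peak current per drive at spin-up (assumed)
RUN_12V_A,   RUN_5V_A    = 0.6, 0.7        # steady-state current per drive (assumed)

spinup_w = DRIVES * (SPINUP_12V_A * 12 + SPINUP_5V_A * 5)
run_w    = DRIVES * (RUN_12V_A * 12 + RUN_5V_A * 5)
print(f"Spin-up: ~{spinup_w:.0f} W   Running: ~{run_w:.0f} W")
# -> roughly 1240 W at spin-up vs. ~480 W running, which is why the startup
#    surge (and a quality PSU with enough headroom) matters so much here.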

Also, just make sure you get a good quality power supply – this is the one area where you want to get the best quality possible.

So in my design, the 1200W Thermaltake Power Supply has 4 main power rails, 3 of which we can tap into for power. The first power rail (black modular connectors on the power supply) is your standard peripheral power rail, which consists of 4 Modular Connectors that support standard 5V/12V devices. We'll be using this power rail to provide 5V power to the port multipliers (3 PMs per Power Supply Modular Connector), and the 12V lines will provide power for the case fans as needed.

The second power rail, which we don't really have access to, is dedicated to the motherboard.

The 3rd and 4th power rails (red modular connectors on the power supply) are designed to support high-end graphics cards or power-hungry PCI-E cards, and they only supply 12V (no 5V taps available). However, we have a total of 6 modular connectors between them, which gives us over 20 dedicated yellow 12V wires – more than enough for a dedicated 12V run to each PM.




One of the nice things about the PCI-E Power Connectors is that they can support up to 72A at 12V (36A per RAIL). These will be perfect for providing the 12V power to the port multipliers. Here’s a chart of the max power available on each rail.



For the actual wiring on the PM side, I took a standard 90-degree Molex Y-connector cable (http://www.svc.com/fc444-28.html) and cut it. Please note that you will need the 90-degree Molex connectors; the standard Molex power connectors are too large, and the port multipliers won't fit into the case if you use them.

I then took some red, yellow and black 16AWG stranded core wire and started making my cable assemblies, being careful to bundle the 5V Ground Line with the 5V/Red Cable and the 12V Ground Line with the 12V/Yellow Cable. I’m not sure if it is okay to cross the ground lines across the rails, but better safe than sorry.

When you wire up the actual PM, what you DO NOT want to do is something like this:

16AWG PS Line -> 20AWG PWR CONN on PM -> ... -> 20AWG PWR CONN

Instead you want something like this:

16AWG PS Line -> 20AWG PWR CONN ON PM
              -> 20AWG PWR CONN ON PM






The reason the second design works better is that most of the Y cables you'll purchase use 20AWG or even thinner wire. By having a nice thick 16AWG wire feed from the power supply, we let the PMs tap into a "16AWG power bus" instead of a "20AWG power bus". The difference in wire thickness may not seem like much, but in my experience it makes the difference between having 5 drives on a PM backplane that power up and 5 drives that don't.
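
A quick voltage-drop estimate shows why the gauge matters more than it looks. The wire resistances are standard copper figures; the current and run length are just illustrative assumptions for one loaded 5-drive backplane:

# Voltage drop on a 5V feed to one 5-drive PM backplane, 16AWG vs. 20AWG.
R_PER_FT = {"16AWG": 0.0040, "20AWG": 0.0102}   # ohms per foot of copper wire
LENGTH_FT = 2.5                                  # ~30" feed run
CURRENT_A = 4.0                                  # ~0.8A at 5V per drive x 5 drives (assumed)

for gauge, r in R_PER_FT.items():
    drop = CURRENT_A * r * LENGTH_FT * 2         # x2 for the out-and-back (ground) path
    print(f"{gauge}: ~{drop*1000:.0f} mV drop ({drop/5*100:.1f}% of the 5V rail)")
# 16AWG: ~80 mV vs. 20AWG: ~204 mV - and at 12V spin-up currents the gap gets even wider.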

When hooking up to the actual power supply, here's the basic layout of connections:

Rail 1 – Black PS Plug 1 -> 5V -> PMs 1 to 3
Rail 1 – Black PS Plug 2 -> 5V -> PMs 4 to 6
Rail 1 – Black PS Plug 3 -> 5V -> PMs 7 to 9
Rail 1 – Black PS Plug x -> 12V -> Fans in System


Rail 3 – Red PS Plug 1 -> 12V Wire 1 -> PM1
Rail 3 – Red PS Plug 1 -> 12V Wire 2 -> PM2
Rail 3 – Red PS Plug 1 -> 12V Wire 3 -> PM3


Rail 3 – Red PS Plug 2 -> 12V Wire 1 -> PM4
Rail 3 – Red PS Plug 2 -> 12V Wire 2 -> PM5


Rail 4 – Red PS Plug 5 -> 12V Wire 1 -> PM6
Rail 4 – Red PS Plug 5 -> 12V Wire 2 -> PM7
Rail 4 – Red PS Plug 5 -> 12V Wire 3 -> PM8


Rail 4 – Red PS Plug 6 -> 12V Wire 1 -> PM9





As for the actual wiring of the PMs, I crimped 4 lengths of ~30" 16AWG wire (1 yellow, 1 red, and 2 black) to the PM "Y-connectors". I then cut the modular ends off of the PCI-E/PS connectors that would typically go to the video cards and peripherals, and crimped these wires to them per the layout above. Any unused wires were just taped off.

For all the actual crimps, I just use standard AMP-style crimp connectors. Done with the right crimp tool and technique, I'd take these any day over soldered wire.




If you haven’t done it already, time to get your hands dirty!!

Wednesday, October 14, 2009

Pictures - Alpha Build

Here are some pictures of the Alpha Build. I'm getting a lot of questions in e-mail so I thought this might help those that just can't wait - and you know who you are :)!


EMS Alpha Build - Full Case




EMS Alpha Build - Just Drives



Server Side Drive Listing





Drive List - Client Side via Share





Power Usage on Initial Startup
(Startup/Surge Mode)




Power Usage - Standard "Run Mode"
(Post-Surge)




Power Usage with Drive Power Management Enabled





And finally, this is totally unrelated to the EMS, but here's a picture of the "hard drive wall", which is just kind of cool! If you get really close you can see the 1s and 0s!!!