March 05, 2010

Building one impressive CUDA Cracking Server

Written by David Kennedy
Penetration Testing · Security Testing & Analysis
Well, I decided to embark on a mission to create a fun and exciting CUDA cracking server. This isn't the first time I've built one of these; however, I wanted to go bigger and better this time around. After a bad hard drive, a bad motherboard, a bad power supply, and a completely smashed and destroyed DVD burner that arrived via UPS, everything is up and running, but not without some additional hurdles.

First, I want to send a very special thanks to Pure_Hate (Martin) for pretty much helping me along on each step of this; without him it would have taken me forever to put this thing together. Pure_Hate is probably one of the most knowledgeable people I know in the CUDA field, and he pretty much built the entire spec for this machine. Thanks buddy :-) Be sure to check out his article on getting CUDA working on Back|Track 4. Also, thanks to Josh Kelley (winfang) for all of the help and effort, plus his insane dremel skeelz.

Alright, let's first start off with the purpose of this machine and why you'd build something like this. If you haven't heard about the ability to utilize GPU processing power to crunch mathematical equations, you're missing out. The current setup utilizes four BFG Nvidia GTX 295s to crunch about 120,000 PMKs per second against WPA/WPA2 captures; on a normal processor you will do around 100 PMKs per second. Additionally, using just one GTX 295 and multiforcer, the machine was cracking at around 2,300 million attempts per second (it took 15 seconds for a 7-character password). Talking with BitWeasil, he is looking to add multi-GPU support shortly, so think 2,300 million * 4 = 9,200 million attempts per second.

Next, let's take a peek at the hardware specs used in this system. Overall I believe the total cost was around $6K; this can easily be scaled back quite a bit by not purchasing the highest-end quad-core i7, less disk space, less RAM, and fewer GTX 295s (but you should get 4, hehe).
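To put those numbers in perspective, here's a quick back-of-the-envelope calculation using only the figures quoted above:

```shell
# Throughput figures quoted above; PMK rates are for WPA/WPA2 under pyrit
cpu_rate=100        # PMKs/sec on a normal processor
gpu_rate=120000     # PMKs/sec across all four GTX 295s
echo "GPU rig vs. CPU speedup: $((gpu_rate / cpu_rate))x"       # -> 1200x

# Multiforcer: one card today, projected four-card rate once multi-GPU lands
one_card=2300       # million attempts/sec on a single GTX 295
echo "Projected 4-card rate: $((one_card * 4)) million attempts/sec"  # -> 9200
```

In other words, the GPU rig is roughly three orders of magnitude faster than a single CPU for PMK computation.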
Talking through the hardware: the purchase was mostly a quad-core Core i7-975 Extreme Edition processor (the highest end on the market), 12 GB of DDR3 Corsair RAM, and four 500GB laptop hard drives. One thing to mention is that two power supplies were needed: a 1250W power supply and a 750W power supply. What I did was power the motherboard, fans, hard drives, and two GTX 295s off of the 1250W power supply, and put the other two GTX 295s on the 750W power supply. You may want to consider a different 750W power supply that is more modular; one thing I didn't like about the Corsair is that all of the power plugs are already attached and can't be removed. If you've got the power and the money, the 1250W power supply was super nice. I ended up purchasing an additional 900W Black Widow that was modular because the 750W ended up being bad. Lastly, the Nvidia cards are four BFG Nvidia GTX 295s, which are top-notch video cards. Supposedly there are new Fermi cards coming out later that may be much faster; however, you can always upgrade, since they will be PCIe 2.0 compliant.

The motherboard is an interesting one: it's an ASRock, and one of the only boards specifically designed to handle the double-width PCIe slots that the GTX 295s take. Out of the box it holds four of them, which is VERY nice. Check out the screenshot below:

ASRock motherboard

Moving on, building the case was interesting with all of the hardware failures we had, but let's skip that turmoil. Certain modifications needed to be made to the 4U server case in order to fit all four GTX 295s. If you look at the back of the case, the vent for the very first GTX 295 (closest to the power supply) gets cut off halfway. What we ended up doing is using a dremel to saw through the metal and expose the full space.
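Stepping back to the power supplies for a moment, a rough budget check shows why the cards were split across two units. The ~289W TDP per GTX 295 used here is the published spec, not a measurement from this build:

```shell
# Rough power budget for the PSU split described above.
# 289W is the published GTX 295 TDP (an assumption, not measured here).
card_tdp=289
echo "Two cards on the 750W supply: $((2 * card_tdp))W"         # -> 578W
echo "Headroom left on the 750W supply: $((750 - 2 * card_tdp))W"  # -> 172W
```

With all four cards pulling over 1100W at peak on their own, a single consumer PSU simply wasn't going to cut it.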
We also had to remove the motherboard backplate and move it very snug, right up against the 1250W power supply, to ensure all four GTX 295s fit properly. This means the PS/2 inputs in the back are no longer exposed, which isn't a big deal; just use a USB mouse and keyboard. We ended up using double-sided tape from Home Depot (really sticky stuff) to mount the motherboard backplate to the 4U server rack metal. You can easily drill your own holes to fit it if you need to. We mounted the second power supply right above the 1250W near the front of the box, and used double-sided tape there as well.

Talking with pure_hate, we decided the airflow would be sucked in from the front of the server, pushed through the case, and the hot air spit out the back. We also have two fans on the top of the case pushing cool air directly onto the GTX 295 intakes. Overall the cards and CPU seem to be running relatively cool and are well within normal ranges.

We used the standard mounts that came with the server for the hard drives towards the front of the case, mounting them sideways to keep them in place. We decided not to put in the DVD burner since there wasn't a ton of space left, plus you can just boot from USB anyway. Cables were partially tucked underneath the motherboard backplate (there is enough room to tuck a lot of stuff). Here are pictures of the finished build (more to come soon):

GPU1

GPU2

Now that we have the hardware configured, I ran into a lot of other issues getting the software playing nicely with this setup. The first thing you will want to do is sort out your Nvidia drivers right off the bat. When I powered on the machine, the display would start on one of the GTX 295s and, as soon as X started, would move to a different card. I ended up doing the majority of my work through a shell anyway.
The platform I decided on was Ubuntu 9.10 x64. I'm a big fan of Back|Track 4, but unfortunately it's not 64-bit, so it wouldn't recognize all of my RAM and would have page file size limitations. When you first power on your system, there are a few things you need to download: the 64-bit Linux Nvidia drivers, the CUDA toolkit, and the CUDA SDK samples.

Kill X and make sure you're running in a normal shell (/etc/init.d/gdm stop, or Ctrl+Alt+Backspace), then chmod +x the driver, the CUDA toolkit, and the CUDA SDK installers. Run each one (./) and follow the instructions. When the driver installer asks if you want to auto-generate a configuration file for xorg, say NO!

Once you're out of there, you will need to run a special command (an undocumented feature that took me 5 1/2 hours to find). At the shell, running as root, type:

nvidia-xconfig --enable-all-gpus

What this does is create the xorg.conf configuration file with the necessary BusID information to enable all of the devices; for me it auto-defaulted the display to the first GTX 295 card in pre-boot, boot, and when X is started. I can't tell you how long it took me just to find the "--enable-all-gpus" flag in nvidia-xconfig. One positive note is that I can write my own xorg.conf from scratch now.

Now that you have all of that running, you can install the SDK (the default path is /usr/local/cuda) and the sample tools. When installing the sample programs, make sure to point them at your SDK path, for example /usr/local/cuda/NVIDIA_CUDA_SDK.

That's it! Hope you enjoyed; it was a blast building this. Be careful not to plug this unit into a UPS: it ended up blowing the circuit breaker on the UPS when pyrit started calibrating the GTX 295s :-)
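PS: for reference, the xorg.conf that nvidia-xconfig --enable-all-gpus generates contains one Device section per GPU, each with its PCI BusID filled in. A trimmed sketch of what two of those sections look like is below; the BusID values here are made-up examples and will differ on your board (check yours with lspci), and since each GTX 295 is a dual-GPU card you may see two entries per physical card:

```
Section "Device"
    Identifier  "Device0"
    Driver      "nvidia"
    BusID       "PCI:3:0:0"
EndSection

Section "Device"
    Identifier  "Device1"
    Driver      "nvidia"
    BusID       "PCI:4:0:0"
EndSection
```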