I just have to plug two of my favorite products here. They are absolutely great at what they do and make my life a LOT easier.
Confluence: This is an enterprise-grade wiki for storing, categorizing, and searching documentation, as well as collaborating on it. It is EASY to deploy, manage, and upgrade. Take it for a test drive.
Jira: This is a workflow tool that lets you enter tasks & issues, track progress, and log time spent on them. Our development team uses it for issue tracking and other things, and it is absolutely GREAT, just like Confluence.
Wednesday, October 27, 2010
Google Enterprise Services
FYI: Do not buy ANY Google Enterprise Services. When you call for support, you get punted to their support forums, and the call handler will drop you after stating there is nothing they can do for you.
I was simply trying to ask how to go about purchasing MORE seats for Postini, but with that attitude toward support and case ownership, I believe I will spend my company's money elsewhere.
Sunday, October 10, 2010
Arista Networks Host-Link Aggregation
Set up your server side (I use the Broadcom Advanced Control Suite) as you normally would.
On your new shiny Arista 7120T, this is what you need to do:
7120t enable
config t
int et1-4
channel-group 1 mode on
Then run: show int po1
Port-Channel1 is up, line protocol is up
Hardware is Port-Channel, address is xxxx.xxxx.xxxx
MTU is 9212 bytes, BW 40000000 Kbit
Full-Duplex, 40Gb/s
Active Members in this channel: 4
...Ethernet1 , Full-duplex, 10Gb/s
...Ethernet2 , Full-duplex, 10Gb/s
...Ethernet3 , Full-duplex, 10Gb/s
...Ethernet4 , Full-duplex, 10Gb/s
5 minutes input rate 0 bits/sec (0%), 0 packets/sec
5 minutes output rate 0 bits/sec (0%), 0 packets/sec
0 packets input, 0 bytes
Received 0 broadcasts, 0 multicast
0 input errors
0 packets output, 0 bytes
Sent 0 broadcasts
Now you're cooking with gas :)
For switch to switch LACP:
7124-SWB(config)#int et1-4
7124-SWB(config-if-Et1-4)#channel-group 1 mode passive
7148-SWA(config)#int et1-4
7148-SWA(config-if-Et1-4)#channel-group 1 mode active
Once the above sequence is entered and synchronization is complete, Ethernet interfaces 1-4 on each switch are bound to the port-channel.
If you're using another switch with your Arista, the config might be a bit different and you may need to try a few different configs to get it to work correctly.
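One note on modes: "channel-group 1 mode on" builds a static port-channel with no LACP negotiation (which is what you want paired with static teaming on the server side), while "active" and "passive" negotiate LACP as in the switch-to-switch example. Either way, a couple of show commands will confirm the bundle is healthy (a sketch; exact output varies by EOS release):
7120t#show port-channel summary
7120t#show lacp neighbor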
Friday, October 8, 2010
Interesting, OWC REALLY enters the fray now.
I can't wait to see Larry release THIS upon the world. My Oracle servers are already drooling over this as they watch me type over my shoulder. Bring them on!
Monday, October 4, 2010
OWC SQLIO Testing
I just finished the SQLIO testing of the OWC Mercury Extreme Pro RE SSD's.
Platform Details: Dell PowerEdge 6950, 4x Dual-Core AMD Opteron 2.6ghz CPU's, 32GB RAM, Dell Perc 5/i RAID Adapter. RAID details: RAID10, 1MB Stripe Size, Adaptive Read Ahead, Writeback Cache, Windows Server 2008 R2 Standard. I used 4x of the 200GB version of the drive in RAID10 and installed WS2008R2 on that array.
The results were about what I was expecting. See below.
The full details are available here.
Thursday, September 30, 2010
OWC SSD Preliminary Testing
Just got in our large order of the OWC Mercury Extreme Raid Edition SSD's (200gb size). While I am waiting on the R910's they are destined for, I threw a few into an older Dell PowerEdge 6950 with quad Dual-Core AMD Opteron 2.6ghz CPU's. I set up a single boot drive and a 4-disk RAID10 group. The RAID settings were 1MB stripe size, adaptive read-ahead, and writeback cache. Then I went through a normal Windows 2008 R2 Standard installation to run my IOMeter and (soon) SQLIO tests.
The results were pretty good.
Using 4MB IO sizes, they REALLY shined, bursting over 2800 MB/s of throughput.
The details of the test were: 4MB IO size, 50% read, 50% write, 50% random and 50% sequential. The sustained average was a more-sane 1100 MB/s with the same settings.
Using 4k IO's, I saw over 600 MB/s, which is about what I expected, and the IOPs were good at >50,000.
Not bad for ~$2400 of storage, with a full 5-year replacement warranty.
Stay tuned for further, more detailed testing including SQLIO. I know you SQL Server guys like to see those. I'm running Oracle 11g R2 Enterprise on these 4 right now as a sandbox testing environment. Using these SSD's with Oracle parallel execution has made a GIANT impact on our application performance so far. Keep in mind, we're not a production shop, and no important data was at risk. This is just a "testing the waters" scenario before we move further.
Wednesday, September 15, 2010
10GBase-T is here
You read that right, 10-Gigabit Ethernet over CAT5E or CAT6A, with distance limitations of course, and good cabling.
This is the Arista Networks 7120T-4S 10GBase-T switch. I am anxious to get it installed and tested with the new servers coming in, which have multiple Broadcom 57710 NICs. Hyper-V and ESX should run nicely with 4x 10GBase-T in LACP per physical host.
Now for the pics:
Sunday, September 5, 2010
OWC SSD in Macbook Pros
I received the two 13" MacBook Pro's and their OWC SSD's yesterday:
Installation was a snap, taking a couple of minutes each:
After installing Mac OS X 10.6, I saw just how fast these things are. From the startup "dong" to logged in was only a matter of seconds, maybe 10 seconds from pressing the power button.
I set up Boot Camp with Windows 7 x64 Ultimate because the two end users requested it. After installing the appropriate Boot Camp drivers, I ran updates and rebooted.
I tried IOMeter and saw good results, right at what was advertised. Being my curious self, I bumped the IO size to 4MB and was surprised to see the performance even higher than advertised, at 339MB/s:
Now I am looking forward to seeing what 12 of those in RAID10 and RAID0 will do. Can't beat that for the price.
Tuesday, August 31, 2010
Oracle Bigfile Tablespaces
After many years of ignoring Bigfile Tablespaces, I finally devoted some time to investigating them. I've been missing out. I've always hated building a "shell" database instance, then running my script to build my normal smallfile tablespaces and add 10+ 8GB datafiles to each of my six main tablespaces. This is a LONG-running script because the disk IO in my existing SAN storage is not very fast. That will not be an issue much longer, thanks to the folks at OWC. Still, managing 20-40 datafiles per tablespace is just plain aggravating. Now I have one datafile per tablespace that grows by 1GB as needed, up to the limit of the filesystem I am using, which is 32TB. I do not plan on outgrowing that anytime in the near future. Sure, it makes RMAN recovery slower when recovering a single 300+GB datafile, but again, not on SSD, and I don't use RMAN to begin with. Data Pump via bash shell scripts in crontab is my backup of choice for my Linux/Oracle servers. Bigfile tablespaces for my main tablespaces will be going in during my upcoming server/storage refresh.
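For illustration, the whole thing now boils down to one statement per tablespace, run from sqlplus (a sketch; the tablespace name and datafile path are hypothetical):
sqlplus / as sysdba <<'EOF'
-- one bigfile datafile per tablespace: starts at 8GB, grows 1GB at a time,
-- capped at the 32TB filesystem limit
CREATE BIGFILE TABLESPACE app_data
  DATAFILE '/u02/oradata/PROD/app_data01.dbf'
  SIZE 8G AUTOEXTEND ON NEXT 1G MAXSIZE 32T;
EOF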
OWC Mercury Extreme Pro SSD's
The verdict is in. We're going with the OWC RE Pro SSD's in our Hyper-V and Oracle DB servers. Why, you ask? Well, the Raid Edition comes with a full 5-year warranty and 28% over-provisioning. My testing of 3x Corsair F240's convinced me the Sandforce-based SSD's are Enterprise-class, and they have the performance I want. Corsair doesn't have the 5-year warranty of the OWC product. OWC put it in writing that if I happen to "write-out" the drives before the warranty is up, they will replace them under warranty. That's standing by your product. I only plan to use them for 3 years because that's our leasing cycle. They'll be replaced in 3 years, so I'm not worrying about the write-cycle lifetime in RAID10. We're also putting them in all developer workstations and notebooks from here on out.
Wednesday, August 11, 2010
Direct I/O vs. Cached I/O
The simple difference between Direct I/O and Cached I/O:
Direct I/O: reads and writes go straight to disk, bypassing the operating system's file cache, so you see the true speed of the underlying storage every time.
Cached I/O: reads and writes pass through the OS file cache, so data already in RAM comes back at memory speed, and writes can be acknowledged before they actually hit the disk.
Keep in mind, all of your data won't be cached all of the time. Only a small amount will be, so you won't get that high performance 100% of the time. If you want that, go with SSD.
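You can see the difference for yourself with GNU dd (a sketch; the target path is just an example). The first run is absorbed by the page cache and reports an inflated speed; the second bypasses the cache and shows what the disk can actually do:
dd if=/dev/zero of=/data/ddtest bs=1M count=4096
dd if=/dev/zero of=/data/ddtest bs=1M count=4096 oflag=direct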
Tuesday, August 10, 2010
Corsair F240 SSD
I received them yesterday. I installed one in my Mac Pro and copied my Win7 VM to it.
Just finished running ATTO and this was the result:
To make formatting easier, here are the SQLIO results as a linked image:
[Linked images: MB/s and IOPS charts]
Remember, this is a SINGLE drive in my Mac Pro, running a Windows 7 Ultimate x64 VMware Fusion VM. The VM has 4GB of RAM and 4 CPU cores allocated to it, and these tests were run from within that VM. I know these weren't "optimal" conditions, but the results look promising given the scenario. Not bad for a single $600 drive.
This is 3 of the F240's in RAID0:
Now that makes a bit more sense. Not bad at all.
Of course, TRIM doesn't work with RAID, so the life of the drives may be shortened somewhat until support for TRIM with RAID surfaces.
Friday, August 6, 2010
Oracle ASM
What a gigantic HEADACHE. I can see how it can be useful from an administrative standpoint and how it is tailored for RAC, but I am skeptical of the supposed "performance" increase. If it slightly increases performance, is it worth the headache it causes from a configuration and maintenance standpoint? To me, absolutely NOT. It takes a DAY or more to set it up and configure it correctly. This is BEFORE the DB software is installed and configured and the database is built. Using regular SAN LUNs, I can partition, mount, and be ready to build the database in ~30 minutes. ASM is not worth the headache to me.
Why would I want a SECOND Oracle instance just to manage my DB storage disks? That's more CPU and memory being eaten up just for storage purposes.
Thursday, August 5, 2010
New SSD's ordered
Ordered 3 of the Corsair 240GB Sandforce-based SSD's this morning for some testing. I'll be posting the results when I get them and finish the testing.
Wednesday, August 4, 2010
OCZ Z-drive R2 Failure
It went out and took an Oracle DB with it. Luckily, it was a non-important copy of a schema we test with.
Now that I really think about it, it's really STUPID to sell a storage product labeled "enterprise ready" that only supports RAID0. Why not choose a RAID controller that supports RAID10, double the NAND cards onboard, build an 8-way RAID10 array on the card, and jack the price up accordingly? That's essentially how it works now, only in RAID0. There are 8 "banks" of 2 NAND modules each, and each of those 8 banks appears as a physical disk to the LSI RAID controller. One whole bank died and took the entire DB with it, seeing how it was RAID0. Strike one. I poked around in the RAID BIOS and saw it only supports RAID0 and RAID1, and that physical disk 6 was missing. Strike two. There will be no chance for a strike three. The great rep I have @Dell understood completely and set up the RMA for a full refund.
Now it's time to investigate the OWC/Corsair Sandforce-based SSD disks. Maybe 12 of 'em in RAID10 would give the IOPS and raw performance at the lower cost that we crave. I like the Fusion IO product, but I am skittish of placing multiple DB's on one PCI-E card now. Maybe 12 disks in RAID10 would let me sleep easier and not hurt the budget as much as multiple Fusion IO cards per server. I've already figured that I can put 12x ~200GB Sandforce-based SSD's in 4 servers, 5 each in 3 other servers, and 4 each in 2 servers for less money than 3 Fusion IO IODrive Duo cards in the forthcoming 1.2TB size. Don't get me wrong, the Fusion IO product is GREAT and ideal, but my small-to-medium-sized business can't justify the extra cost. I can minimize the single point of failure by using 12 physical SSD's in RAID10. I have never had a single SAS/SATA controller failure in the past ~12 years. I know it happens, but not to me, yet. That part will be covered under a 24x7 4-hour hardware replacement agreement with Dell, so I'm not worrying about it too much.
I plan to get four of them to test in a RAID10 array soon. Not sure on which yet, Corsair or OWC. I might try both. I'll be sure to do a detailed writeup and benchmark post if it happens. Four disks should give a good indication of the performance scalability of the drives.
On another note, I'm planning on putting 2 of the Corsair 120GB Sandforce-based SSD's in my personal 27" iMac in RAID0 in the near future. I want 200gb+ and the performance of 2 disks. It costs about the same as one 240GB drive, but with higher performance. I know, 2 in an iMac? How? Well, by replacing the stock 1TB disk, pulling the internal SuperDrive out, and using this to install the 2nd SSD in the SuperDrive bay. Then the SuperDrive goes in an external USB/Firewire enclosure.
Tuesday, August 3, 2010
Epic Failure: TheLadders resume service
I just had to write about this. It was absolutely hilarious. TheLadders have been pimping their resume-writing service and "evaluated" mine for free. I was skeptical, but thought "it's free, let's see what they have to say".
First, they warn they are blunt, some people get mad about their bluntness, etc. I'm already laughing at this point. They also said "I am concerned that your resume doesn’t effectively communicate your value as a senior-level candidate - the resume lacks the ability to capture an employer and make him/her say, “I must have this person in for an interview”."
Wow. What a load. After reading my resume, anyone with a BRAIN can see that I am a senior-level candidate. I am really laughing at this point.
Continuing:
" By relying on worn out resume clichés like "Solutions-oriented," “Hands-on experience” and "Outstanding leadership abilities," you're lumping yourself in with scores of other candidates. It’s not that these are bad attributes, but the fact of the matter is that's the kind of language 9 out of 10 people will use. That's also why only 1 out of 10 will get the interview."
Even funnier. WTH are you supposed to say in place of those statements without "candy-coating"? 9 out of 10 don't get an interview because the hiring company is either 1) fishing or 2) posting the job as a "formality" because the job will be filled by their cousin or brother-in-law next week. What a joke. That one person may get an interview if the company feels they may get them at a large discount.
Also:
"you have some very impressive Technical skills. I would recommend that you move this section to the back of your resume. Remember, first person who sees your resume will be a junior reviewer - not your future bosses so don’t expect them to understand the significant of these skills."
This is hilarious. Put all of my TECHNICAL skills at the back of the resume? That's a good way to get your resume tossed into the shredder. No good recruiter will sit there and wade through a page or two of "FLUFF" to get to the meat & potatoes. I always believe you should show your chops first and foremost. Save them the time & frustration.
Overall, this whole thing is a JOKE in an attempt to swindle me out of $695.
Saturday, July 31, 2010
PCI-E SSD? Um, maybe, maybe not.
After further testing, the OCZ Z-Drive seems to choke when pushed to 8 & 16 threads. My future Oracle servers will be pushing 32 threads, and having my storage choke is not an option. I know the RAID controller in the R910 can feed the CPU's easily. Now I am pondering 14x OWC SSD's in RAID10, with the 2 remaining disk slots housing 146GB 15k SAS disks in RAID1 for the Linux OS and Oracle_Home. I'm wondering what kind of IOPS and performance 14 of those in RAID10 will attain. Maybe it's time to "test" that theory out. The OWC disks have a full 5-year warranty and use the new Sandforce controller. Hmmm..... Maybe handing those 14 disks to ASM would be best. Yes, it's time for more testing. OCZ is out of the picture now. It's a prosumer product purporting to be an enterprise product, and pushing past 2 simultaneous threads revealed that as truth in my eyes. A wise man once said "Truth is treason in the empire of lies."
Thursday, July 29, 2010
Interesting concept: Blocking entire continents.
Now this is a very interesting concept. If you do business only in the US, this could be useful in further securing your network. If you run your own Exchange server, this could even tremendously lower your incoming spam volume at your network point of access (firewall). If you host on your own webserver, this could easily further secure it. Face it, the vast majority of security breaches and spam originate from outside of the US, and if you don't communicate with people or do business with people outside of the US, why allow any connections from those countries?
I give you the link: http://www.countryipblocks.net/continents/
I may investigate this further for our own use.
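If I do, the simplest first pass on a Linux firewall would be a loop over one of their CIDR lists (a sketch; the filename is hypothetical, and for lists this size you'd really want ipset rather than thousands of individual rules):
# drop every source range in the downloaded list
while read -r cidr; do
    iptables -A INPUT -s "$cidr" -j DROP
done < blocked-continents.txt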
Error: 0xC03A001D "The sector size of the physical disk on which the virtual disk resides is not supported."
Hyper-V, why art thou so persnickety? My storage performs best with 4k sector sizes, yet YOU demand 512-byte sectors, upchuck this nastily huge error message, and fail to boot my VM. For reference, this is only a ~300GB LUN.
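If you hit this one, check the sector size of the volume holding the VHD before anything else (field names vary slightly by Windows version); the VHD format wants 512 bytes per sector:
fsutil fsinfo ntfsinfo D:
Look for the "Bytes Per Sector" line in the output.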
Tuesday, July 27, 2010
SSD and Spindle Disks
Traditional spindle-based SAN will be going out to pasture in the HPC/Oracle/SQL/Virtualization market segments within the next 5 years. It only takes two $400 SSD's to totally decimate 15x SATA disks costing ~$20k. The big storage vendors would be well advised to "get with the program," and soon. Guys like Texas Memory Systems, Violin, Whiptail, and Fusion IO are exploding in those markets today and will even more so as NAND prices drop. Buy 'em before someone else does or they get too big to be acquired is what I would do if I were EMC or Netapp, or even Dell/HP/IBM. EMC's Enterprise Flash Disk (EFD) is a running joke to those "in the know" in the storage game.
To put this into perspective, today I tested a svelte little Sony VAIO VPC-Z1190X extremely portable laptop with an i7 quad-core CPU, 8GB of RAM, and 2x 250gb factory SSD's in RAID0. Right out of the box, it was brutally fast. I set up a VMware Player VM running XP Pro SP3 with 2 processors, 4GB of RAM, and 2 virtual disks of 30GB and 60GB. The 30GB disk housed the OS and the Oracle 10g R2 installation with our applications. The second disk housed the actual database files. I imported one of our QA schemas into the local instance, which was allocated 2GB of physical RAM. I set up our applications as usual on the same primary disk.
I ran a few processes in our apps utilizing the local Oracle instance and it ran circles around our QA database server. I shook my head in amazement. A VM running Oracle outperformed our PRIMARY QA database server by a LONG shot.
Our primary QA database server's specs:
Dell PowerEdge 1955 Blade, 2x 3.2GHZ Xeon CPU's, 8GB of RAM, 2x local 300GB 10k SCSI disks.
The first disk houses the OS Installation (Windows Server 2003 R2 STD, 32bit) and the pagefile.
The second disk contains the Oracle installation (ORACLE_HOME to those of you who know what it means). Oracle only uses 2GB of RAM because it is a 32-bit OS and application.
There is a 2-port Qlogic 2Gb Fibre Channel HBA connecting to an EMC Clariion with 12x 500GB 7200rpm SATA disks in RAID10. Networking is handled via 2x Intel 1Gb/s ethernet links using NIC teaming and link aggregation on the switch side.
Yes, this "piddly" little laptop costing ~$4k totally running a 32-bit VM smokes this blade server with fibre channel SAN attached. It is amazing how far technology has come in the past few years, and I can't help but smile knowing it will only get better over the next few years.
Sunday, July 25, 2010
Fusion IO vs. OCZ Z-Drive R2
I've finally finished testing the FusionIO IODrive in 320GB and the OCZ Z-Drive P88 R2 in 1TB.
The spreadsheets with the performance metrics for both SSD's are here: http://public.me.com/ndlucas
They are both good performers in IOPS, with the Fusion IO edging out the OCZ on the smaller IO sizes.
Raw performance showed the OCZ was the clear winner.
With that out of the way, the question is now, which one makes more sense for your application and budget?
The 320GB FusionIO is priced in the $7000 range.
The 1TB OCZ Z-Drive R2 is in the $4600 range.
I have a feeling that FusionIO has enterprise support standing by, whereas OCZ does not. Maybe I am wrong, but that is the impression I get. For my purposes, I will get an extra Z-Drive as a cold spare to cover OCZ's replacement turnaround time in the event of a failure.
-Noel
Some useful stuff for disk & SAN benchmarking
After toying with SQLIO, I found deciphering its abundance of output data a bit aggravating.
I found a useful Powershell script by Jonathan Kehayias that takes your output test file from SQLIO, and creates an Excel spreadsheet with charts to visualize the results. VERY cool utility.
After running your SQLIO tests, just run the powershell script and input the filename, and magically, Excel opens and starts pulling the raw data from the textfile into it and graphing it. THIS is how IT is supposed to be!
After installing SQLIO, the first thing I did was set up the param.txt file for SQLIO (in the install directory).
I removed all contents and added this:
o:\testfile.dat 8 0x0 32768
This means all tests will create a test file on the O:\ disk named testfile.dat, use 8 threads (designated by the "8"), use 0x0 as the CPU affinity mask (all CPUs), and size the file at 32GB; the value at the end is in MB.
Then I created a script in the same directory called "santest.bat"
I added the following entries:
sqlio -kW -t8 -s300 -dO -o8 -frandom -b8 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -frandom -b16 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -frandom -b32 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -frandom -b64 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -frandom -b128 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -frandom -b256 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -frandom -b512 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -frandom -b1024 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -frandom -b2048 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -frandom -b4096 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -frandom -b8192 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b8 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b16 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b32 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b64 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b128 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b256 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b512 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b1024 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b2048 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b4096 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -frandom -b8192 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b8 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b16 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b32 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b64 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b128 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b256 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b512 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b1024 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b2048 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b4096 -BH -LS -Fparam.txt
sqlio -kW -t8 -s300 -dO -o8 -fsequential -b8192 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b8 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b16 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b32 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b64 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b128 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b256 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b512 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b1024 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b2048 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b4096 -BH -LS -Fparam.txt
sqlio -kR -t8 -s300 -dO -o8 -fsequential -b8192 -BH -LS -Fparam.txt
---------------------------------------------------------------------------------------------------
-kW denotes a WRITE operation, while -kR denotes a READ operation
-t8 denotes 8 threads (with -F, the thread count in your param.txt is used and this setting is ignored)
-s300 denotes the duration of this specific test (in this case 300 seconds = 5 minutes)
-dO designates the O:\ disk as the target of the test
-o8 means 8 outstanding requests
-frandom means random access, -fsequential means sequential access
-bX designates the IO size (in KB) to be tested
-BH means to use the disk cache if you have one (-BN disables the cache and forces reads/writes directly to disk)
I used IO sizes from 8KB to 8192KB to test as many sizes as possible in the script above.
Then open a command prompt and navigate to the SQLIO installation directory.
Run the batch file, redirecting output to a logfile:
santest.bat > results.txt
This will take some time to finish: the example script above runs 44 tests at 300 seconds each (13,200 seconds total), so just a bit over 3.5 hours.
Now that you have your disk benchmark results, it is time to set your machine to be able to run unsigned powershell scripts. Using an elevated Powershell session, run:
Set-ExecutionPolicy Unrestricted
Then accept (Y) the warning
Finally, use the powershell script from this blog post.
I would save your result text file on your desktop with the powershell script.
When it asks for the input filename, enter it and away it goes magically opening Excel and importing the result data from your text file. Very cool stuff.
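One housekeeping note: if you only loosened the execution policy for this exercise, you may want to set it back when you're done:
Set-ExecutionPolicy Restricted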
Saturday, July 24, 2010
SQLIO testing underway.
I've been running some SQLIO tests on the Fusion IO and OCZ PCI-E SSD's today. I'm still running them now, and will be "making sense" of the results for a post in the near future. There are so many options in SQLIO that you can spend a LOT of time with it. The good thing is, it shows latency and IO data as well as raw disk performance. Really cool utility. You can check it out here. Yes, it runs on Windows Server 2008 R2 just fine. I highly recommend checking out Brent Ozar's write-up/tutorial on it at SQLServerPedia. One thing I didn't see mentioned in his tutorial: be sure to read the readme and param.txt files in the installation directory. The param.txt will let you specify the filesize for the test file(s) so you don't run out of space while testing.
The next week will be a fun one for sure.
-Noel
Friday, July 23, 2010
More SSD testing
Always good to see CPU's pegged:
Almost ZERO I/O waits while importing a large schema (normally I see a LOT of blue here):
SSD Testing
I've been doing a lot of testing on SSD lately. I've experimented with RAM disks (partitioning physical RAM as a storage disk) and SSD's from Fusion IO and OCZ. It has been interesting because I am considering SSD as a storage medium for my company's Hyper-V VM storage and Oracle database storage. It is already compelling enough that when our desktop/laptop refresh begins in approximately 2 years, we will be using SSD in those machines, because by then the price will be competitive with traditional spindle storage. The performance benefits are compelling in that the productivity boost will more than pay for the slight cost premium of the SSD. Developers, QA testers, and business folks will be far more productive. This will likely be consumer-grade SSD such as the Intel X25-M. I plan to have the OS and applications installed on the SSD. QA testers will have their VM's stored on 15k SAS disks like we currently do, and developers will have their local Oracle instances running on large 7200RPM SATA disks as we currently do as well.
Server-Class SSD
I heard about a company called Fusion IO when they were a small start-up several years ago. I saw their benchmark-testing/marketing video showing them copy a DVD from one SSD to another in a matter of seconds. Very compelling. They were small then, and only had a few engineering samples that couldn't be shipped out to potential customers. I was interested, but it was still too new at the time, so I put it in the back of my mind.
Several months ago, I realized that our servers and storage would be coming to their end-of-lease over the next 12 months, so I started planning for their replacement. I noticed that SSD prices had dropped drastically and began investigating. We've already made the decision to no longer use fibre-channel SAN or iSCSI storage because of the disk contention issues we have experienced between our Oracle servers under the shared-storage model. It simply didn't work for us, and architecting a SAN for our company that would not present contention issues would be too costly. I priced an EMC Clariion CX4 8gbps fibre-channel SAN with 15,000rpm SAS disks, Enterprise Flash Drives (EFD), and 7200RPM SATA disks. The plan was to make it our only storage platform for simplicity: DEV/QA databases on the EFD, client databases on the 15k SAS disks, and disk-to-disk backup of servers on the SATA disks for long-term archiving.
This scenario would work, but again, we did not want the contention potential of the SAN storage. The price was also extremely high at ~$280,000. ~$157,000 of it was the EFD modules. That prices the 400GB EMC EFD's at approximately $26,000 each. Insane? You bet. I knew there HAD to be a better way to get the benefits of SSD at a lower cost.
A few months ago, my interest was piqued by the OCZ Z-Drive. I wanted to test it alongside the Fusion IO product since they were both PCI-Express "cards" that could deliver the storage isolation we require with the performance we desire. The Fusion IO product is the most expensive PCI-E SSD alongside the Texas Memory Systems PCI-E SSD. The Fusion IO 320GB iodrive goes for approximately $7000, and the Texas Memory Systems RAMSAN-20 is priced at approximately $18,000 for 450gb. Yes, the performance is compelling, but the price is too much in my opinion. Lots of companies already use the TMS and Fusion IO products, and it works great for them because they can afford it. My company can afford it too, but I will not spend our money foolishly without investigating all options.
I reached out to the folks at Fusion IO, and was put in contact with a nice guy named Nathan who is a field sales engineer in my neck of the woods. He promptly shipped out a demo model of the IODrive in 320gb configuration. It arrived in a nice box that reminded me of Apple packaging.
Ironically enough, the Chief Scientist at Fusion IO is "The Woz," of Apple fame. I installed it into my testbed server, a Dell PowerEdge 6950 with quad Dual-Core AMD Opteron 2.6ghz processors, 32GB of RAM, and 4x 300GB 15k SAS drives in RAID10 on a Perc 5/i controller. The Fusion IO SSD advertised 700MB/s read and 500MB/s write performance with ~120,000 IOPS. Great performance indeed. It lived up to those specs and even surpassed them a bit, as shown below.
I ordered the 1TB OCZ Z-Drive R2 P88 from my Dell rep, and it was delivered yesterday. It was in decent packaging, but not nearly as snazzy as the Fusion IO product. See it below.
I installed it alongside the Fusion IO SSD in the same server and started it up. The only difference between the two is that the OCZ requires a PCI-E x8 slot and the Fusion IO requires an x4 slot. To my surprise, my server saw it during POST right away and identified it as an LSI RAID card, shown below. This tells me that OCZ uses LSI controllers in the construction of their SSD's. I have had good results and performance with LSI RAID controllers over the years, in contrast to some guys in my line of work who hate LSI products and worship Adaptec controllers.
After booting up and logging in, Windows Server 2008 R2 (Standard, 64-bit) saw it and immediately installed the correct built-in Windows driver for it. I formatted it and assigned it a drive letter, then ran ATTO again to get some raw disk performance metrics. OCZ advertises 1.4GB/s read & write burst performance, and ~950MB/s sustained. See my results below:
Yes, clearly the OCZ just obliterates the Fusion IO SSD. Why? My guess is the internal 8-way RAID0 configuration of the OCZ card.
That's >3x the storage of the Fusion IO IODrive, almost triple the write performance, almost double the read performance, at approximately 35% less cost. The 1TB OCZ is only ~$4600.
I have heard that a 1.2TB Fusion IO model is on the way at a cost of approximately $22,000. I can't speculate on its performance, though; it is unknown at the present time.
My only concern at this point is support. OCZ states a 15-day turnaround for replacement hardware. In the case of using only one, I'd buy two and keep one on the shelf as a cold-spare to use while waiting on the replacement. Since I will probably implement 6 more over the next 6-8 months, I will likely buy two extras to have on-hand in the event of a failure.
Adding large datafiles to an Oracle 10g instance is a slow & painful process at times. Not anymore:
See that? 1271MB/s. I have a script that adds 10 datafiles to each tablespace (5 total), sized at 8GB each with 1GB auto-extend. This normally takes HOURS to run; just a few minutes ago, it finished in less than 10 minutes.
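The script is nothing fancy, just a pile of statements like this one fed through sqlplus (a sketch; tablespace and path names are hypothetical):
sqlplus / as sysdba <<'EOF'
-- repeated for datafiles 01-10 on each of the five tablespaces
ALTER TABLESPACE app_data
  ADD DATAFILE '/u02/oradata/PROD/app_data02.dbf'
  SIZE 8G AUTOEXTEND ON NEXT 1G;
EOF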
Update: See THIS POST for details of my experience with total failure of the OCZ product. I wouldn't even consider their product at this point. If you're reading this thinking of buying the OCZ product for BUSINESS use, listen to me: RUN, do not walk, RUN to the good folks over at Fusion-io. You can thank me later.
Server-Class SSD
I heard about a company called Fusion IO when they were a small start-up several years ago. I saw their benchmark testing/marketing collateral video showing them copy a DVD from one SSD to another in a matter of a few seconds. Very compelling. They were small then, and only had a few engineering samples that were unable to be shipped out to potential customers. I was interested in it, but it was still too new at the time, so I put it in the back of my mind.
Several months ago, I realized that our servers and storage would be coming to their end-of-lease over the next 12 months, so I started planning for their replacement. I noticed that SSD prices had dropped drastically and began investigating. We've already made the decision to no longer utilize fibre-channel SAN or iSCSI storage because of disk contention issues that we have experienced between our Oracle servers utilizing the shared-storage model. It simply didn't work for us. To architect a SAN correctly for our company that would not present contention issues would be too costly. I priced an EMC Clariion CX4 8gbps fibre-channel SAN with 15000rpm SaS disks, Enterprise Flash Drives (EFD), and 7200RPM SATA disks. The plan was to make it our only storage platform for simplicity. The plan was to put DEV/QA databases on the EFD, client databases on the 15k SAS disks, and disk-to-disk backup of servers on the SATA disks for long-term archiving.
This scenario would work, but again, we did not want the contention potential of the SAN storage. The price was also extremely high at ~$280,000. ~$157,000 of it was the EFD modules. That prices the 400GB EMC EFD's at approximately $26,000 each. Insane? You bet. I knew there HAD to be a better way to get the benefits of SSD at a lower cost.
A few months ago, my interest was piqued at the OCZ Z-Drive. I wanted to test it alongside the Fusion IO product since they were both PCI-Express "cards" that could deliver the storage isolation we require with the performance we desired. The Fusion IO product is the most expensive PCI-E SSD alongside the Texas Memory Systems PCI-E SSD. The Fusion IO 320GB iodrive goes for approximately $7000, and the Texas Memory Systems RAMSAN-20 is priced at approximately $18,000 for 450gb. Yes, the performance is compelling, but the price is too much in my opinion. Lots of companies already use the TMS and Fusion IO products, and it works great for them because they can afford it. My company, however, can afford it, but I will not spend our money foolishly without investigating all options.
I reached out to the folks at Fusion IO, and was put in contact with a nice guy named Nathan who is a field sales engineer in my neck of the woods. He promptly shipped out a demo model of the IODrive in 320gb configuration. It arrived in a nice box that reminded me of Apple packaging.
Ironically enough, the Chief Scientist at Fusion IO is "The Woz", of Apple fame. I installed it into my testbed server which is a Dell PowerEdge 6950 with quad Dual-Core AMD Opteron 2.6ghz processors, 32GB of RAM, and 4x 300GB 15k SaS drives in RAID10 on a Perc5i/R controller. The Fusion IO SSD advertised 700MB/s read and 500MB/s write performance with ~120000 IOPS. Great performance indeed. It lived up to those specs and even surpassed them a bit as shown below.
I ordered the 1TB OCZ Z-Drive R2 P88 from my Dell rep, and it was delivered yesterday. It was in decent packaging, but not nearly as snazzy as the Fusion IO product. See it below.
I installed it alongside the Fusion IO SSD in the same server and started it up. The only difference in the two is that OCZ requires a PCI-E X8 slot and the Fusion IO requires a X4 slot. To my surprise, my server saw it during POST right away and identified it as an LSI RAID card shown below. This tells me that OCZ uses LSI controllers in the construction of their SSD's. I have had good results and performance with LSI RAID controllers over the years in contrast to some guys in my line of work who hate LSI products and worship Adaptec controllers.
After booting up and logging in, Windows Server 2008 R2 (standard, 64-bit) saw it and immediately installed the correct built-in Windows driver for it. I formatted it and assigned it a drive letter. ATTO was ran again to get some raw disk performance metrics. OCZ advertises 1.4GB/s read & write burst performance, and ~950MB/s sustained. See my results below:
Yes, clearly the OCZ just obliterates the Fusion IO SSD. Why? My guess is the internal 8-way RAID0 configuration of the OCZ card.
That's >3x the storage of the Fusion IO IODrive, almost triple the write performance, almost double the read performance, at approximately 35% less cost. The 1TB OCZ is only ~$4600.
I have heard that a 1.2TB Fusion IO model is on the way at a cost of approximately $22,000. I can't speculate on it's performance though. It is unknown at the present time.
My only concern at this point is support. OCZ states a 15-day turnaround for replacement hardware. In the case of using only one, I'd buy two and keep one on the shelf as a cold-spare to use while waiting on the replacement. Since I will probably implement 6 more over the next 6-8 months, I will likely buy two extras to have on-hand in the event of a failure.
Adding large datafiles to an Oracle 10g instance is a slow & painful process at times. Not anymore:
See that? 1271MB/s. I have a script that adds 10 datafiles to each tablespace (5 tablespaces in total), each file sized at 8GB with 1GB auto-extend. This normally takes HOURS to run; I ran it just a few minutes ago and it finished in less than 10 minutes.
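For those curious, the script itself is nothing fancy, just a batch of ALTER TABLESPACE statements along these lines. The tablespace name and file paths below are made up for illustration and are not my actual layout:
ALTER TABLESPACE data01 ADD DATAFILE 'E:\ORADATA\TEST\DATA01_06.DBF' SIZE 8G AUTOEXTEND ON NEXT 1G;
ALTER TABLESPACE data01 ADD DATAFILE 'E:\ORADATA\TEST\DATA01_07.DBF' SIZE 8G AUTOEXTEND ON NEXT 1G;
-- ...and so on, 10 files for each of the 5 tablespaces.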
Update: See THIS POST for details of my experience with total failure of the OCZ product. I wouldn't even consider their product at this point. If you're reading this thinking of buying the OCZ product for BUSINESS use, listen to me: RUN, do not walk, RUN to the good folks over at Fusion-io. You can thank me later.
Sunday, July 18, 2010
Mount a Windows share on a Linux machine as a local drive
This comes in handy at times. Create a folder on your Linux machine:
mkdir /foldername
On the Windows server, share your folder, giving your domain account full permissions.
On your Linux server:
mount -t cifs //windowsservername.domain.com/sharename -o username=yourusername,password=yourpassword,domain=yournetbiosdomain /foldername
This mounts the remote share at /foldername, the folder you created earlier. Note: the folder must exist on your Linux machine before you can mount the remote share to it.
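If you want the mount to persist across reboots (and keep the password out of your shell history), a credentials file plus an /etc/fstab entry works well. A quick sketch, using the same placeholder names as above. Put this in /root/.smbcredentials and chmod it to 600:
username=yourusername
password=yourpassword
domain=yournetbiosdomain
Then add this single line to /etc/fstab:
//windowsservername.domain.com/sharename /foldername cifs credentials=/root/.smbcredentials 0 0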
I use this quite a bit on a Linux-based Oracle DB server when I have a huge schema export to import but lack the local disk space to copy it to and import from. I just mount the DB export share on the Linux DB server and import as usual.
This is even better than using a USB 2.0 external hard drive, because Gigabit Ethernet provides much more bandwidth than USB 2.0 can. Plus, an optimized network utilizing jumbo frames will help even more.
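For reference, enabling jumbo frames on the Linux side is a one-liner; eth0 is just an example interface name, and every NIC and switch port in the path must be set to the same MTU:
ifconfig eth0 mtu 9000
To make it stick across reboots on a RHEL-style distro, add MTU=9000 to /etc/sysconfig/network-scripts/ifcfg-eth0.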
Yes, I can't wait for 10GBase-T to drop in price. The CAT6A NICs are running ~$700 each, but the 24-port switches (the Summit X650 and Arista are the only players for now) are pricey: the 24-port Summit X650 runs ~$22k, and the Arista runs ~$12k. I do like the Arista a bit better personally; it is a very nice datacenter/top-of-rack switch that is priced right. Now, Juniper Networks needs to pull their head out of the sand and realize that 10GBase-T DOES have a following and a lot of customers WANT IT. I, for one, do not want to waste money on expensive optics nor buy from a vendor I am unfamiliar with. My entire network is already Juniper-powered and I want to keep it that way. I want to use my existing CAT5E for 10G speeds. Yes, 10G is possible on CAT5E for short runs of less than 55 meters, if I am not mistaken. CAT6A is ideal, and doesn't cost a whole lot more than good quality CAT5E.
Be multi-faceted
I always try to maintain an open mind. Since I primarily work with the Oracle RDBMS, I keep myself familiar with competing products such as MySQL and PostgreSQL. Yes, I am better at some platforms than others, but if you lock yourself into one platform, you can very possibly become obsolete over time. I used to be a Windows-only guy. Over time, I experimented with Linux as an Oracle DBMS platform, and now that I am very comfortable managing Oracle on Linux, I find it is my preferred platform by a large margin over Windows or Solaris. No, I don't run it on RHEL like a lot of shops. I run on CentOS for one simple reason: free updates and no cost involved. Yes, Oracle doesn't support it, but it DOES run on it. CentOS is so close to a binary equivalent of RHEL that the Oracle installer identifies it as such, installs just fine, and runs the same. I don't have to pay for maintenance & support or updates, and my Oracle platforms are just as rock-solid as they would be on RHEL. A few people in the office are wary of this, but ultimately it comes down to me, and I am the one supporting the environments. We have not had ANY issue that required an Oracle support request in the past 10+ years. If we did, I would set up a separate server with a trial of RHEL, install Oracle, and try to replicate the issue in that test environment. It has never come to that, though. So, we get the rock-solid platform we need at ZERO cost.
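For anyone wanting to try this: if a particular installer version does balk at the distribution check, the stock workarounds are to skip the prerequisite checks or to make CentOS identify itself as RHEL. The release string below is only an example; match it to whatever RHEL release your installer expects:
./runInstaller -ignoreSysPrereqs
Or, before running the installer:
echo "Red Hat Enterprise Linux Server release 5 (Tikanga)" > /etc/redhat-release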
To clarify before the flames begin: we are not a typical "production" Oracle shop. We are an ISV whose marketed applications run on an Oracle back-end. We develop our applications to utilize the Oracle RDBMS and, in turn, we are a member of the Oracle Partner Network (OPN). We pay a few thousand a year for our OPN membership to use the Oracle RDBMS in a development environment, and we pay extra for TAR support access as well, though we rarely use it. If we had a live website using an Oracle back-end for order processing, etc., you can BET I'd be running on RHEL with RAC and full production licenses. But again, that's not in our solar system today.
The same goes for our virtualization platforms. I primarily use Hyper-V R2, but also use VMware ESXi for server virtualization. Hyper-V costs almost nothing, and ESXi costs absolutely nothing. On the desktop virtualization front, we use VMware Player for our QA and support teams. That costs nothing as well.
It's a win-win in my eyes, as well as in the eyes of my boss, who happens to be the CFO.
Saturday, July 17, 2010
The Beginning....
Well, this is the beginning of something I have been thinking about doing for a long time: an attempt to publish my thoughts on managing the IT operations of a small business. Small business IT is very different from big business IT. We don't have the large budgets, large staff, and corporate politics that are prevalent in a large organization's IT department. Many small business IT departments are a "one-man show," or even one person who handles IT as a secondary or tertiary role. That works for a lot of businesses, but when the business grows or changes, it is time to bring in outside help. Normally, that's where local IT solutions providers come into play. They work great for the most part, if you select a good one without ulterior motives to "sell" you things you don't really need in order to "churn up" business and sales. That's where a lot of those IT solutions providers go wrong, and when someone who "knows" what they are doing steps in, it often gets ugly. Sometimes they aren't as responsive as you need them to be, either. That's how I got started at the company I presently work for, over ten years ago.
My employer had been in business for 15 years as a different type of business serving a different market than we presently do. It was an accounting firm in the beginning that moved into the 401k record-keeping software business in the early '90s. That product ran on the IBM AS/400 platform. In the late '90s, the company transformed itself again, into a financial-services software company whose applications ran on the Windows NT Server and Workstation platform with an Oracle back-end. Yes, some of the reason for the choice of Oracle as "our" DBMS was Oracle's reputation at the time. The large companies we were "wooing" already had the Oracle DBMS in place, along with the expertise to manage it. That made the choice somewhat easier. At the time, SQL Server wasn't even in the same solar system as Oracle.
Years ago, I thought Oracle wasn't the best choice because of its high cost. What I see now is that our clients' databases have grown exponentially in size. Yes, it is a fact in my experience that Oracle scales far above the capabilities of SQL Server. It did then and still does now; the gap has narrowed, but it's still there. No, I am not biased towards Oracle. I am open to all vendors and solutions, and I believe each solution fits a particular niche or need that they can't all meet equally. Yes, our applications probably could run on SQL Server, but when you have tables measured in the hundred-gigabyte-plus range along with indexes of fifty gigabytes or larger, Oracle outshines SQL Server. Go ahead and post your flaming comments, but they won't sway my opinion whatsoever. I will just delete your comment and move on. I'm not here to "make friends"; I am here writing *MY* thoughts and ideas. I will say, SQL Server bests Oracle in certain situations, as do MySQL and PostgreSQL. Believe me, they all have their place. I say the same thing about the age-old argument of Windows vs. Mac OS vs. Linux: I use all three for different tasks, and I don't favor one over the others.
I prefer Windows Server 2008 R2 for user management, file sharing, print sharing, and a few other things. When it comes to corporate email, nothing rivals Exchange Server 2010 in my mind. Small businesses that can't afford Exchange or don't want the complexity will likely LOVE Google Apps for its simplicity and cost savings. I would LOVE to move our business over to Google Apps Premier. My reason for not doing it is that their support plain SUCKS; the trial I did showed me that. The only support available was via e-mail, and that was an exercise in futility: canned, scripted email responses that basically said "RTFM." I did, but it didn't answer my questions, so I decided to just flush the project entirely. I'm not going to pay thousands of dollars for something only to be told "RTFM." At least with Exchange, if I get stuck, at worst I get Prakash in Bangalore, who DOES know what he is doing.
Yes, a long-winded post, but I think it will give you a bit better insight into what I do.
I don't want to "write a book" for my introductory post, so, I will leave it at that for now.
Stay tuned for the next post. I have a LOT of ideas to post about in the future.
-Me