WEBVTT

00:00:00.160 --> 00:00:08.080
So I promised to do a video update when I had a chance to run the eight OCZ Onyx

00:00:05.200 --> 00:00:13.440
Series SSDs in RAID 0. So you can see I've got these hooked up to an LSI MegaRAID

00:00:11.080 --> 00:00:19.000
FastPath SSD-optimized RAID controller, and

00:00:17.279 --> 00:00:23.560
what FastPath does is it actually allows SSDs to perform a little bit

00:00:21.039 --> 00:00:27.960
better. So FastPath is just the little key that I have on there;

00:00:25.920 --> 00:00:35.000
it also enables some advanced encryption features, as well as the ability to use

00:00:30.920 --> 00:00:37.239
SSDs as a cache for a hard drive RAID

00:00:35.000 --> 00:00:42.200
array. So this is a pretty cool card; this is the

00:00:39.120 --> 00:00:44.719
9260, and it's actually a SAS card. I

00:00:42.200 --> 00:00:51.120
did an unboxing of it. It's a SAS 6 Gb per second card, but as you can see it's

00:00:46.800 --> 00:00:52.800
obviously compatible with SATA II SSDs.

00:00:51.120 --> 00:00:58.239
I shouldn't even call it SATA II; what I mean to say is SATA 3 Gb per second. So

00:00:57.000 --> 00:01:02.239
here, I'll show you... actually, why don't we take this video as an

00:01:00.000 --> 00:01:07.560
opportunity to show you how easy it is to set up a RAID controller card in a

00:01:05.479 --> 00:01:10.960
RAID 0 configuration. So first of all, I want to show you some benchmarks here.

00:01:09.240 --> 00:01:17.240
You can see that with this array we were able to achieve almost 700 megabytes

00:01:14.040 --> 00:01:19.360
per second read, which honestly shouldn't

00:01:17.240 --> 00:01:25.720
be that impressive. This card is capable of well over 1 GB per second read with

00:01:21.560 --> 00:01:27.560
the right SSDs, so that I was actually a

00:01:25.720 --> 00:01:31.960
little bit disappointed in. But the write speeds are very impressive; the writes

00:01:29.600 --> 00:01:37.720
scaled almost linearly on these SSDs, so we were able to achieve up to

00:01:34.000 --> 00:01:39.280
450 MB per second on writes. So I'll show you

00:01:37.720 --> 00:01:44.799
what I mean by all of this in just a moment here.

00:01:41.799 --> 00:01:47.880
This is kind of interesting: I wasn't

00:01:44.799 --> 00:01:49.479
having any challenges with

00:01:47.880 --> 00:01:54.600
the RAID configuration before. Or... you know what? I think it

00:01:51.439 --> 00:01:58.039
is... well, looks like you get to

00:01:54.600 --> 00:02:00.240
see some real-time troubleshooting

00:01:58.039 --> 00:02:06.560
here. I think we've just got the wrong IP address, because I was using this at home, so

00:02:02.520 --> 00:02:10.280
I have the system IP wrong: 192.168.

00:02:06.560 --> 00:02:11.920
3.185. So let's just do

00:02:10.280 --> 00:02:17.319
that... and there we go. So let me just log in

00:02:18.000 --> 00:02:24.840
here. All right, and now we're going to go into the

00:02:22.239 --> 00:02:28.480
MegaRAID configuration. So the reason I'm a little bit disappointed

00:02:26.560 --> 00:02:32.920
with the overall read speeds is that, as you were

00:02:30.959 --> 00:02:37.879
able to see with a two-drive array (this is using all the

00:02:34.959 --> 00:02:42.879
same RAID configuration), we were able to achieve 307 megabytes per second

00:02:40.440 --> 00:02:47.159
already. Now with SSDs, typically, with a good-quality RAID controller like we

00:02:44.519 --> 00:02:51.480
have and a good-quality SSD, you should see almost completely linear

00:02:49.200 --> 00:02:57.519
scaling. So while these drives are rated for 125 MB per second reads

00:02:55.400 --> 00:03:01.440
maximum, you can see with two drives we were actually seeing that these drives

00:02:59.360 --> 00:03:05.920
perform a little bit better than spec, and they're scaling extremely well,

00:03:03.239 --> 00:03:11.480
because only two drives yield such strong read results.

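As a quick check on that claim, here is a minimal Python sketch using only the figures quoted in this video (125 MB/s rated per drive, 307 MB/s measured with two drives, roughly 700 MB/s with eight) to work out the effective per-drive throughput and the scaling efficiency:

# Scaling check using the sequential-read figures quoted in the video.
RATED_READ_MBPS = 125          # OCZ Onyx rated maximum sequential read

runs = {2: 307, 8: 700}        # drives -> measured sequential read, MB/s

for drives, measured in runs.items():
    ideal = drives * RATED_READ_MBPS    # perfectly linear scaling
    per_drive = measured / drives       # effective contribution of each drive
    print(f"{drives} drives: {per_drive:.0f} MB/s per drive, "
          f"{measured / ideal:.0%} of ideal")

# 2 drives: ~154 MB/s per drive, 123% of ideal (the drives beat their spec)
# 8 drives: ~88 MB/s per drive, 70% of ideal (why the reads disappoint)
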
00:03:08.599 --> 00:03:16.239
Now, down here is the random performance. So you can see, even with eight

00:03:13.959 --> 00:03:20.519
drives, random performance doesn't change very much. So we've only got about

00:03:18.000 --> 00:03:24.920
6,000 IOPS in 4K performance, which is actually still very good compared to any

00:03:22.760 --> 00:03:29.840
hard drive; I mean, you've got to sort of keep that in perspective. But

00:03:27.400 --> 00:03:33.799
performance didn't really scale much from two drives all the way up to eight

00:03:31.959 --> 00:03:38.120
drives. So you can see, with two drives we were seeing about 20 megabytes per second, and

00:03:35.640 --> 00:03:43.080
then up to eight drives we see about 25 MB per second.

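For context, the IOPS figure and the MB/s figure are roughly two views of the same measurement; at a 4 KiB block size the conversion is just IOPS times block size, as this quick sketch shows:

# Convert 4K random IOPS into throughput: throughput = IOPS * block size.
def iops_to_mib_s(iops: float, block_kib: int = 4) -> float:
    return iops * block_kib / 1024       # KiB/s -> MiB/s

print(f"{iops_to_mib_s(6000):.1f} MiB/s")  # ~23.4, consistent with "about 25"
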
00:03:40.360 --> 00:03:47.480
Now, with a deeper queue depth and a bigger array, you do see better performance, and what that means

00:03:44.879 --> 00:03:51.640
is that if you're multitasking a lot on a huge array like this you're going to

00:03:49.400 --> 00:03:56.400
see better performance and it will continue to scale versus if you're not

00:03:54.120 --> 00:04:01.959
multitasking and you're just reading and writing small 4K files.

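The reason a deeper queue helps a big array is that with only one outstanding request, seven of the eight drives sit idle at any instant, while a deep queue lets the controller keep every member busy. Here is a rough illustration via Little's Law (IOPS is about outstanding I/Os divided by per-I/O service time); the 0.15 ms latency below is an assumed, illustrative figure, not something measured in this video:

# Little's Law sketch: IOPS ~= busy drives / per-I/O service time.
SERVICE_TIME_S = 0.15e-3   # ASSUMED per-I/O latency of one SSD (illustrative)
DRIVES = 8

for queue_depth in (1, 4, 32):
    # A drive only works on I/Os actually queued to it; at queue depth 1,
    # the other seven drives are idle at any given moment.
    busy = min(queue_depth, DRIVES)
    print(f"QD {queue_depth:2}: ~{busy / SERVICE_TIME_S:,.0f} IOPS "
          f"({busy} of {DRIVES} drives busy)")
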
00:03:59.959 --> 00:04:08.040
Now, another thing that did scale really, really well with adding more drives was the writes.

00:04:04.680 --> 00:04:10.120
So you can see write IOPS are almost 3x

00:04:08.040 --> 00:04:14.400
when we add four times as many drives, so that's still reasonably good scaling.

00:04:12.959 --> 00:04:18.239
So I just want to show you the numbers as we go through here. Remember, this

00:04:16.199 --> 00:04:23.880
is eight drives; this is the run I just did like two seconds ago. And then

00:04:20.919 --> 00:04:29.120
this one here is with two drives, and this is with four drives. So you can see, from

00:04:26.360 --> 00:04:35.280
two to four did not scale nearly as well. So, back up: two to four did not

00:04:31.919 --> 00:04:37.960
scale nearly as well in reads as from one

00:04:35.280 --> 00:04:43.680
to two, because from one to two we got almost linear

00:04:40.720 --> 00:04:49.039
scaling. But you can see that especially writes scaled incredibly well, so our

00:04:46.639 --> 00:04:54.320
sequential writes doubled... actually, a little bit more than doubled, and then

00:04:50.680 --> 00:04:56.520
our 4K writes more than doubled. So very,

00:04:54.320 --> 00:05:02.919
very efficient there. And so then from four drives I went to six drives, so once

00:04:59.320 --> 00:05:05.759
again you see quite limited

00:05:02.919 --> 00:05:09.120
scaling in some areas, but again excellent scaling on the sequential

00:05:07.360 --> 00:05:13.600
writes. So that's where we're seeing just huge improvements. And then finally,

00:05:11.479 --> 00:05:17.560
this is an eight-drive run that I did before; I was using slightly better,

00:05:15.479 --> 00:05:21.880
tweaked RAID settings, so that's probably actually the one I should be showing you

00:05:19.039 --> 00:05:24.520
more than any other one. But yeah, that's how I was able to

00:05:23.120 --> 00:05:29.120
squeeze just a little bit more sequential read out of the whole thing,

00:05:26.520 --> 00:05:35.240
and actually substantially better 4K random writes. So anyway, that was

00:05:32.160 --> 00:05:37.080
pretty much my video. I think that

00:05:35.240 --> 00:05:43.720
what we've discovered here is that, with eight SSDs of very, very low performance (I

00:05:41.000 --> 00:05:48.160
mean, the Onyx is a value SSD; there's no two ways about it), I think

00:05:46.280 --> 00:05:52.840
what we'd be better off with is using something like four to six higher-

00:05:50.520 --> 00:05:58.520
performance SSDs, something like a Vertex 2. I mean, a Vertex 2 right out of

00:05:55.720 --> 00:06:03.800
the box is going to outperform probably anywhere from two to three of these

00:06:00.560 --> 00:06:05.360
Onyx SSDs. So what I discovered

00:06:03.800 --> 00:06:09.639
running this experiment is that you're probably better off with fewer SSDs, but

00:06:07.880 --> 00:06:13.160
higher-performance ones. So I'll just show you really quickly, when you have a

00:06:11.520 --> 00:06:17.759
premium RAID card like this, how easy and how quick it is to set

00:06:15.400 --> 00:06:21.840
up a RAID array. All you really do is go into...

00:06:20.759 --> 00:06:28.360
actually, you know what? No, I can't show you this, because I wanted to actually boot up to the SSD RAID array,

00:06:26.280 --> 00:06:32.039
so nope, I can't wipe out my array. But it's really fast; you have to just

00:06:30.039 --> 00:06:35.319
believe me: you click this button up here, you select the drives, you press RAID

00:06:33.599 --> 00:06:39.720
0. And then there are actually some options that you can configure.

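For anyone who would rather script that creation step than click through it, LSI's MegaCli utility can build the same RAID 0 virtual drive from the command line. A minimal Python wrapper sketch; the binary path and the enclosure:slot IDs are placeholders for your own hardware, and be warned that this command destroys any data on the listed drives:

import subprocess

# Sketch: create a RAID 0 virtual drive with MegaCli. The values below are
# placeholders; substitute your own binary path and enclosure:slot IDs.
# WARNING: this wipes the listed drives.
MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"     # placeholder path
DRIVES = ["252:0", "252:1", "252:2", "252:3"]   # placeholder enclosure:slot IDs

subprocess.run(
    [MEGACLI, "-CfgLdAdd",
     "-r0", f"[{','.join(DRIVES)}]",  # RAID level 0 across the listed drives
     "WB",       # write-back cache
     "RA",       # read-ahead
     "Cached",   # cached I/O policy (vs. Direct)
     "-a0"],     # adapter 0
    check=True,
)
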
00:06:37.759 --> 00:06:43.919
So if I go into my virtual drive here, I believe I can go to Virtual Drive and set

00:06:42.520 --> 00:06:47.680
all the virtual drive properties. So these are some of the things that

00:06:45.720 --> 00:06:52.680
you can tinker around with if you do set up an SSD RAID array. So I've set things

00:06:50.520 --> 00:06:56.440
up according to what LSI has recommended for the best FastPath

00:06:54.400 --> 00:07:01.960
performance, although some people do report better performance with no read

00:06:58.400 --> 00:07:04.360
ahead, and some people do get better

00:07:01.960 --> 00:07:09.240
performance with write-back cache enabled, as well as Cached I/O. It really varies

00:07:07.160 --> 00:07:12.599
depending on your SSD, so you have to tweak and find out what works well with your controller and your SSD.

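If you want to A/B test those policies methodically, MegaCli can also flip them on an existing virtual drive without rebuilding the array. A sketch along the same lines as above; the binary path is again a placeholder, and -L0/-a0 assume the first virtual drive on the first adapter:

import itertools
import subprocess

# Sketch: step through cache-policy combinations so each can be benchmarked.
MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"  # placeholder path

write_policies = ("WT", "WB")           # write-through vs. write-back
read_policies = ("NORA", "RA", "ADRA")  # no / normal / adaptive read-ahead
io_policies = ("Direct", "Cached")      # direct vs. cached I/O

for combo in itertools.product(write_policies, read_policies, io_policies):
    print("--- testing", " / ".join(combo), "---")
    for policy in combo:
        subprocess.run([MEGACLI, "-LDSetProp", policy, "-L0", "-a0"],
                       check=True)
    # ...run your benchmark of choice here before the next combination...
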
00:07:10.960 --> 00:07:18.160
So thanks for checking out our little RAID 0

00:07:15.960 --> 00:07:22.199
performance video. Don't forget to subscribe to Linus Tech Tips!
