WEBVTT

00:00:00.120 --> 00:00:05.879
if you've been following me on social media you've probably spent a fair bit

00:00:03.560 --> 00:00:10.679
of your time lately feeling bad for me about all of the SSDs that I had to

00:00:08.160 --> 00:00:14.960
mount in our new 24-drive solid-state storage server no all right well then

00:00:13.599 --> 00:00:18.920
you've probably at least been hoping that I'll make a video about it at some

00:00:16.640 --> 00:00:26.599
point and talk about the performance and that time is now this is the all-new

00:00:22.840 --> 00:00:27.610
Whonnock the fastest beast machine in our

00:00:26.599 --> 00:00:30.670
office

00:00:35.559 --> 00:00:42.719
the Corsair HX1200i power supply delivers 80 PLUS Platinum efficiency for

00:00:40.160 --> 00:00:47.280
quiet performance and Corsair Link Digital advanced monitoring and control

00:00:45.559 --> 00:00:53.960
click now to learn more so our current storage server Ruskin

00:00:50.760 --> 00:00:56.640
uses Seagate 3 terabyte consumer drives in a

00:00:53.960 --> 00:01:00.800
RAID 6 array to achieve respectable read and write performance and some fault

00:00:59.120 --> 00:01:04.920
tolerance the array can actually lose up to two drives before suffering

00:01:02.600 --> 00:01:10.560
catastrophic data loss assuming it's able to rebuild before more drives fail

00:01:07.720 --> 00:01:16.600
or an unrecoverable error occurs this is all fine and good but the main problem
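For anyone who wants the arithmetic behind that RAID 6 claim, here's a quick sketch of usable capacity versus fault tolerance (the 10-drive count for the old array is my assumption, taken from the "10 hard drive solution" mentioned later in this video, not a stated spec):

```python
def raid6_usable(drive_count, drive_tb):
    # RAID 6 spends two drives' worth of capacity on parity,
    # which is what lets any two drives fail without data loss.
    assert drive_count >= 4, "RAID 6 needs at least 4 drives"
    return (drive_count - 2) * drive_tb

# Ruskin-style array: 3 TB consumer drives in RAID 6
# (10-drive count is an assumption, see note above)
print(raid6_usable(10, 3))  # 24 TB usable, 6 TB spent on parity
```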

00:01:13.240 --> 00:01:19.600
with it is that Ruskin was built for one

00:01:16.600 --> 00:01:21.960
editor to work on 4K video files at max

00:01:19.600 --> 00:01:27.920
speed and we now have a whole room full of editors so while the Ruskin 10 gigabit

00:01:25.240 --> 00:01:33.799
network interface and sequential data speeds aren't really bottlenecked its

00:01:30.920 --> 00:01:38.360
mechanical drives are much more suitable for a single person workflow so I

00:01:36.880 --> 00:01:44.719
reached out to our good buddies at Kingston with a crazy idea what if we

00:01:42.200 --> 00:01:50.600
slipped free of the surly bonds of mechanical storage and danced the skies

00:01:47.560 --> 00:01:53.880
on SSD-silvered wings to which they kind

00:01:50.600 --> 00:01:57.960
of went um how much silver Linus I told

00:01:53.880 --> 00:02:01.360
them I wanted 24 1 terabyte class drives and

00:01:57.960 --> 00:02:03.079
doggone it they for some reason said yes I

00:02:01.360 --> 00:02:08.080
think the most incredible thing about that story is how much the landscape has

00:02:05.600 --> 00:02:12.720
changed in such a short amount of time two years ago I could have been the Pope

00:02:10.239 --> 00:02:19.440
in Rome and any SSD maker would have laughed at me for wanting 20 terabytes

00:02:16.080 --> 00:02:22.800
of redundant SSD storage in a single

00:02:19.440 --> 00:02:25.319
server but in 2015 Kingston's just like

00:02:22.800 --> 00:02:32.160
yeah we've got the enterprise-grade KC310 it's got an 8-channel Phison S10

00:02:28.200 --> 00:02:34.440
controller 960 gigabytes of capacity ECC

00:02:32.160 --> 00:02:37.720
flash protection for data integrity power loss protection TRIM support

00:02:36.400 --> 00:02:42.519
although we'll be relying on idle garbage collection in RAID anyway and

00:02:39.640 --> 00:02:46.080
it's under 60 cents per gig I mean Holy balls I'm actually wearing the right

00:02:43.680 --> 00:02:51.840
shirt for that so let's talk upgrade process then the first thing I needed

00:02:48.319 --> 00:02:54.480
was way better RAID cards yes cards not

00:02:51.840 --> 00:03:00.560
a single card there are 24-port controllers in fact the old server has

00:02:56.800 --> 00:03:03.400
one but since each individual SSD is

00:03:00.560 --> 00:03:08.560
capable of 500 plus megabytes per second read and write speeds if you hook 24 of

00:03:06.879 --> 00:03:12.799
them up to a single card with a theoretical total speed in the

00:03:10.080 --> 00:03:17.239
neighborhood of 12 gigabytes per second you're going to run into some pretty serious

00:03:14.319 --> 00:03:20.879
bottlenecks all over the place so after removing the placeholder mechanical
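Putting rough numbers on that bottleneck (a sketch only; the 500 MB/s per-drive figure comes from above, and the roughly 7.9 GB/s PCIe 3.0 x8 link rate is my approximation, not something stated here):

```python
drives = 24
per_drive_mb_s = 500                   # ~sequential speed per SSD

# Aggregate throughput if every drive ran flat out at once
aggregate_gb_s = drives * per_drive_mb_s / 1000
print(aggregate_gb_s)                  # 12.0 (GB/s)

# A single PCIe 3.0 x8 RAID card has roughly 7.9 GB/s of raw
# link bandwidth before protocol overhead, so one card can't
# service all 24 drives; splitting them across three cards can.
pcie3_x8_gb_s = 8 * 0.985              # ~0.985 GB/s per lane
print(aggregate_gb_s > pcie3_x8_gb_s)  # True
```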

00:03:18.720 --> 00:03:27.280
drives from the system laboriously mounting 24 SSDs on sleds and connecting

00:03:24.319 --> 00:03:32.959
the SFF-8087 connectors each of which handles four drives to their backplane

00:03:29.480 --> 00:03:34.560
in my Norco RPC-4224 chassis man I love

00:03:32.959 --> 00:03:42.319
these things on Kingston's recommendation I picked up three LSI

00:03:37.319 --> 00:03:47.239
9271-8i 8-port RAID cards each in a PCI

00:03:42.319 --> 00:03:49.319
Express 3.0 x8 slot this is where the X99

00:03:47.239 --> 00:03:54.000
platform really shows its value because you're going to need enough PCI Express

00:03:51.599 --> 00:03:59.200
Lanes to handle all that storage bandwidth something that consumer grade

00:03:56.000 --> 00:04:00.959
platforms simply cannot provide now

00:03:59.200 --> 00:04:04.760
something a lot of people commented on when I posted a picture of these

00:04:02.480 --> 00:04:09.599
cards on Instagram was that these cards run really hot and I had them installed

00:04:07.280 --> 00:04:14.319
right next to each other don't worry I'm using a 90mm fan mounted directly on top

00:04:12.239 --> 00:04:18.320
of them for auxiliary Cooling and I'll be bolting that in before I install this

00:04:16.199 --> 00:04:22.360
server in our fancy rack cabinet at the new office so with all the drives

00:04:20.799 --> 00:04:26.280
installed the next step was getting firmware updates and drivers taken care

00:04:24.280 --> 00:04:31.199
of for my controllers and configuring arrays naturally the first thing I did

00:04:29.000 --> 00:04:36.479
was throw the whole thing in RAID 0 for the lulz to see how fast it would go

00:04:34.199 --> 00:04:41.320
there's a bit of a special process for this in this case though you need to

00:04:38.320 --> 00:04:44.400
create a RAID 0 array of eight drives

00:04:41.320 --> 00:04:47.000
on each of the controller cards then use

00:04:44.400 --> 00:04:51.800
software RAID to put them all together so in my case that required the use of

00:04:49.440 --> 00:04:56.639
Disk Management in Windows to set each RAID 0 as a dynamic disk then stripe

00:04:55.039 --> 00:05:04.199
the whole thing together so it's kind of like RAID 0+0 or something like that
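Sketching the nested layout just described, the capacity math works out like this (assuming the 960 GB KC310s mentioned earlier):

```python
controllers = 3
drives_per_controller = 8
drive_gb = 960                      # Kingston KC310 capacity

# Step 1: each RAID card exposes its eight drives as one
# hardware RAID 0 virtual disk.
per_card_gb = drives_per_controller * drive_gb

# Step 2: Windows Disk Management stripes the three virtual
# disks into a single dynamic striped volume -- the "RAID 0+0".
total_gb = controllers * per_card_gb
print(total_gb)                     # 23040 GB, zero redundancy
```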

00:05:00.400 --> 00:05:06.400
the results were well if Shania were here

00:05:04.199 --> 00:05:13.120
I guess she'd say that don't impress me much read speeds were great even for

00:05:09.840 --> 00:05:16.240
512k transactions I'm looking at over 5

00:05:13.120 --> 00:05:18.039
gigabytes per second I mean remember

00:05:16.240 --> 00:05:21.960
this is for video editing so very little of what we deal with is going to be

00:05:19.680 --> 00:05:26.639
smaller than half a Meg with 4K transfers that's more than two full

00:05:24.280 --> 00:05:33.280
orders of magnitude faster than my old 10-hard-drive solution but those

00:05:30.280 --> 00:05:36.080
write speeds aren't enough to saturate

00:05:33.280 --> 00:05:40.160
the planned 2x 10 gigabit teamed network connection this server is packing if

00:05:38.240 --> 00:05:45.479
multiple users are writing large files to the array either way RAID 0 wasn't

00:05:43.560 --> 00:05:48.880
my final configuration since I wanted some fault tolerance so I figured if I'm

00:05:47.440 --> 00:05:53.319
going to troubleshoot this thing I might as well do it when it's set up properly

00:05:51.319 --> 00:05:58.440
so I threw my eight-drive arrays in RAID 5 that allows me to lose up to one

00:05:56.000 --> 00:06:02.680
drive per array and then I also have a spare drive on hand in the unlikely

00:06:00.440 --> 00:06:06.520
event of a failure which is lots for a server that'll be backed up nightly on

00:06:04.560 --> 00:06:10.000
the network then I striped those RAID 5s together in software for what is

00:06:08.400 --> 00:06:15.080
effectively RAID 50 a quick benchmark before the arrays
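The RAID 50 arithmetic works out like this (again assuming the 960 GB drives; the hot spare sits on hand outside the arrays):

```python
arrays = 3                       # one RAID 5 per controller card
drives_per_array = 8
drive_gb = 960

# RAID 5 spends one drive's worth of capacity on parity per
# array, so each eight-drive array survives a single failure.
per_array_usable_gb = (drives_per_array - 1) * drive_gb

# Software-striping the three RAID 5s together gives RAID 50.
total_usable_gb = arrays * per_array_usable_gb
print(total_usable_gb)           # 20160 GB, about 20 TB usable
```

That matches the "20 terabytes of redundant SSD storage" figure from earlier in the video.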

00:06:12.880 --> 00:06:19.000
were finished initializing revealed worse numbers than RAID 0 although

00:06:17.120 --> 00:06:23.319
that's pretty much a given since any parity RAID puts much more load on the

00:06:21.199 --> 00:06:28.520
controller card especially for writes than a striping RAID but I really hadn't

00:06:25.759 --> 00:06:32.199
expected them to be this bad so I waited for the arrays to finish initializing

00:06:30.759 --> 00:06:37.560
and they got worse so it was about that time that I

00:06:35.039 --> 00:06:41.440
realized maybe the write cache setting on solid state makes a bigger difference

00:06:39.360 --> 00:06:47.720
than on mechanical so even though I don't have battery backups for my cards

00:06:43.720 --> 00:06:53.160
or a UPS for my server yet I enabled

00:06:47.720 --> 00:06:55.919
write-back cache and there we go there is

00:06:53.160 --> 00:07:00.680
the drawback of an unexpected power loss causing potential data loss with write

00:06:57.919 --> 00:07:05.039
back caching enabled but we're just going to have to get those batteries and

00:07:02.199 --> 00:07:09.720
UPSes going because with that setting on we are able to saturate the bananas out

00:07:07.800 --> 00:07:15.280
of any connection we can make on the network to This Server when she's

00:07:12.319 --> 00:07:20.199
handling large streaming reads and writes this array can do in excess of 5

00:07:17.520 --> 00:07:25.000
gigabytes per second when she's handling extremely small transactions she can

00:07:22.360 --> 00:07:28.680
still do just under 100 times the performance of Ruskin and when she's

00:07:27.199 --> 00:07:32.479
able to queue up those small transactions from many clients hitting

00:07:30.720 --> 00:07:38.560
her at the same time she can do well over 500 megabytes per second I just

00:07:35.960 --> 00:07:42.919
need to drop another $600 on battery units for the raid cards and wait for

00:07:40.720 --> 00:07:49.000
the network cards for my clients to show up so that I can show you guys how the

00:07:45.159 --> 00:07:51.280
network is going to handle all of this

00:07:49.000 --> 00:07:56.479
man this server-grade stuff is expensive and very time-consuming but it floats

00:07:53.639 --> 00:08:03.120
my geeky boat to see numbers like this where a PCI Express-based Predator SSD

00:08:00.080 --> 00:08:05.479
is the bottleneck in a local file

00:08:03.120 --> 00:08:10.120
transfer speaking of stuff that floats my geeky boat iFixit you probably know

00:08:07.919 --> 00:08:14.879
iFixit from their teardowns of electronic devices and their fantastic

00:08:13.159 --> 00:08:20.560
repair guides on their site that can save you tens fifties even hundreds of

00:08:18.560 --> 00:08:25.360
dollars on repair costs I've used them a number of times on an iMac on a phone

00:08:23.280 --> 00:08:28.400
and I'm sure there's something else but I'm not thinking of it at the moment

00:08:26.680 --> 00:08:32.080
what you probably aren't aware of is that iFixit sells professional

00:08:29.840 --> 00:08:36.599
grade tools as well so they've got their uh their iFixit 54-bit driver kit

00:08:34.800 --> 00:08:41.000
they've got all these little prying tools they've got antistatic straps

00:08:38.919 --> 00:08:45.760
they've got their magnetic organizer that I actually might have handy

00:08:43.200 --> 00:08:49.040
yeah I was using this the other day that lets you write little labels draw little

00:08:47.440 --> 00:08:52.200
diagrams and keep all your screws somewhere safe when you're working on a

00:08:50.480 --> 00:08:56.000
project they've got all kinds of fantastic stuff whether you're trying to

00:08:53.560 --> 00:09:00.320
take apart a Nintendo DS with a tri-wing bit whether you're trying to take apart

00:08:57.480 --> 00:09:04.320
McDonald's toys with a triangle bit or you need to take apart something that

00:09:01.519 --> 00:09:08.200
uses security Torx all that stuff they've got it and what's cool is when

00:09:06.600 --> 00:09:12.480
you go on their guides they actually list all of the tools that you need for

00:09:10.440 --> 00:09:17.360
a particular guide the one to probably start with though is the kind of

00:09:14.519 --> 00:09:22.560
all-in-one Pro Tech Toolkit I use mine all the time it's 65 bucks and if

00:09:20.640 --> 00:09:28.200
you use ifixit.com/linus and then code Linus05 at the

00:09:25.640 --> 00:09:32.720
checkout you save $10 off that or any purchase of $50 or more so that's

00:09:30.040 --> 00:09:36.920
ifixit.com/linus check it out great tools great guides great

00:09:34.800 --> 00:09:39.440
stuff so that's pretty much it guys thanks for watching like the video if

00:09:38.200 --> 00:09:44.640
you liked it dislike it if you thought it sucked leave a comment preferably at the link below to our Forum if you want

00:09:42.640 --> 00:09:48.200
to discuss it also linked below you can buy a cool t-shirt like this one you can

00:09:46.640 --> 00:09:51.680
give us a monthly contribution if you think what we're doing is important you

00:09:50.000 --> 00:09:55.399
can change your Amazon bookmark to one with our affiliate codes so next time

00:09:53.040 --> 00:09:58.800
you buy 24 SSDs we'll get a kickback from that um and that's pretty much it

00:09:57.440 --> 00:10:03.519
don't forget to subscribe and follow and all that good stuff thanks again for

00:10:00.519 --> 00:10:03.519
watching
