WEBVTT

00:00:00.399 --> 00:00:08.120
way back before the Big Move I built a

00:00:04.920 --> 00:00:11.519
100 plus terabyte storage server to

00:00:08.120 --> 00:00:13.759
replace the awful "store data on

00:00:11.519 --> 00:00:18.240
disconnected drives on a shelf in the bathroom" system that we were rocking

00:00:16.119 --> 00:00:22.640
before thanks Seagate for the awesome drives and thanks 45drives.com for the

00:00:20.640 --> 00:00:28.400
rockin' personalized Storinator server but some of you may have noticed that I

00:00:25.640 --> 00:00:33.399
never followed up on the performance testing that I promised to do on that

00:00:30.359 --> 00:00:36.840
machine I was supposed to be showing off

00:00:33.399 --> 00:00:40.399
1 Gigabyte per second transfers with the

00:00:36.840 --> 00:00:44.290
10 gigabit network setup what gives well

00:00:40.399 --> 00:00:49.289
today we finally get the whole

00:00:51.840 --> 00:00:58.640
story the MasterCase 5 by Cooler Master gives you the freedom to truly make your

00:00:56.399 --> 00:01:01.640
midtower PC case your own with a variety of modular parts and accessories

00:01:00.519 --> 00:01:07.240
check out the link in the video description to learn more so the short

00:01:04.040 --> 00:01:09.439
version is this in spite of 45 Drives

00:01:07.240 --> 00:01:16.200
telling me that they had customers with similar configs saturating a 10 gigabit link

00:01:12.520 --> 00:01:19.080
or more I couldn't even get half of that

00:01:16.200 --> 00:01:22.880
and it made no sense really I did a lot of tinkering with this box before

00:01:20.880 --> 00:01:27.159
eventually deploying it different network cards different Drive

00:01:24.759 --> 00:01:33.680
configurations and finally got to the point where whether it was FreeNAS

00:01:29.960 --> 00:01:35.479
hardware or PEBKAC I had to roll it out

00:01:33.680 --> 00:01:39.360
because we needed to put our data somewhere and I was just going to have

00:01:37.159 --> 00:01:46.880
to live with the results that I got I mean I know I know poor Linus only has

00:01:42.840 --> 00:01:50.439
300 to 350 megabyte per second speeds to

00:01:46.880 --> 00:01:53.920
his over 100 terabytes of safe storage

00:01:50.439 --> 00:01:56.320
boohoo but this disrupted my plans for

00:01:53.920 --> 00:02:01.240
our storage infrastructure in a bigger way than you might think in addition to

00:01:58.880 --> 00:02:06.240
archiving old stuff to the server my intention was to have our daily use NAS

00:02:04.280 --> 00:02:11.560
the SSD one that you probably remember from this video doing nightly syncs or

00:02:09.440 --> 00:02:16.040
even hourly checkpoints if we could get away with it so we'd have two full

00:02:13.720 --> 00:02:22.400
copies of all of our mission critical data so I wanted the magnetic NAS to be

00:02:19.319 --> 00:02:24.319
fast enough to handle that and any

00:02:22.400 --> 00:02:31.560
random data that our editors needed to read from it from old projects which we

00:02:27.519 --> 00:02:34.440
were not able to do so while I've had

00:02:31.560 --> 00:02:40.080
four months to diagnose this and Ponder what could be wrong because it's had up

00:02:36.840 --> 00:02:42.519
to 60 terabytes of important data on it

00:02:40.080 --> 00:02:52.560
with nowhere else to offload that I've had no choice but to just limp along at

00:02:46.360 --> 00:02:52.560
300 350 megabytes per second until

00:02:52.879 --> 00:02:59.280
today Seagate sent us 35 of their new 8

00:02:57.040 --> 00:03:04.799
terabyte Enterprise capacity drives and no these are not the shingled platter

00:03:02.080 --> 00:03:09.120
archival ones these are rockin' and capable of well in excess of 200

00:03:06.879 --> 00:03:12.560
megabytes per second transfer speeds rated at 2 million hours mean time

00:03:11.159 --> 00:03:19.879
between failure and with a 5-year warranty to back it up proper Enterprise

00:03:16.879 --> 00:03:21.440
capacity drives so I immediately tore

00:03:19.879 --> 00:03:25.879
them out of their packaging and began building pyramids that no just kidding

00:03:24.360 --> 00:03:29.959
well actually okay I did build pyramids but but what I actually built with them

00:03:27.519 --> 00:03:36.200
after the pyramids was two additional servers each to hold a copy of our 60

00:03:33.200 --> 00:03:38.200
terabytes of data while I worked on the

00:03:36.200 --> 00:03:42.000
Vault so one of those machines is actually eventually going to be a NAS

00:03:39.720 --> 00:03:46.680
unit at my house and the other one is going to be an off-site backup server

00:03:44.120 --> 00:03:51.720
for this puppy but each of those will get their own videos later so with the

00:03:49.159 --> 00:03:57.079
data safely stored on a hardware RAID 6 and on a software Btrfs RAID 5 each

00:03:54.920 --> 00:04:03.319
of those transfers took over a day by the way I wiped the FreeNAS box and began

00:04:00.120 --> 00:04:05.840
trying things so first I tried six Drive

00:04:03.319 --> 00:04:13.480
vdevs since that's a more optimal number for RAID-Z2 nope still shoddy transfer

00:04:09.239 --> 00:04:16.919
speeds next I tried 10 drive vdevs no

00:04:13.480 --> 00:04:21.239
difference again finally in desperation

00:04:16.919 --> 00:04:24.120
I tried a 27 drive RAID 0 an

00:04:21.239 --> 00:04:30.199
experimental class configuration that no one should trust to hold any data no

00:04:26.720 --> 00:04:32.840
matter how amazing the drives are and

00:04:30.199 --> 00:04:38.240
same thing which after talking to the folks at 45 drives about my findings

00:04:35.160 --> 00:04:40.919
revealed that the issue is probably a

00:04:38.240 --> 00:04:45.160
software one because they've seen NFS shares just fly in a similar

00:04:43.440 --> 00:04:50.440
configuration to mine which doesn't do me any good because this is a Windows

00:04:47.240 --> 00:04:52.479
environment and we need SMB shares and

00:04:50.440 --> 00:04:55.639
so I had to keep investigating because if I'm going to be running around saying

00:04:53.919 --> 00:04:59.680
this NAS unit and these drives are capable of over a Gigabyte per second of

00:04:57.520 --> 00:05:02.680
transfer speed we use them here at Linus Media Group I mean I'm basically

00:05:01.000 --> 00:05:09.360
endorsing the things it's not good enough to me for 45 Drives to see it in

00:05:06.320 --> 00:05:11.720
their lab I need to see it so I've been

00:05:09.360 --> 00:05:15.199
chatting a lot with the unRAID guys ever since they helped us do the two Gamers

00:05:13.479 --> 00:05:19.280
one CPU project which you should definitely check out if you haven't

00:05:16.840 --> 00:05:25.720
already and they offered to spend some time configuring an experimental raid

00:05:22.440 --> 00:05:28.759
5 Btrfs array in unRAID and

00:05:25.720 --> 00:05:32.440
tuning both the network settings as well

00:05:28.759 --> 00:05:34.960
as the SMB share settings so our initial

00:05:32.440 --> 00:05:39.479
test on a vanilla unRAID server was frankly pretty ho-hum actually fairly

00:05:37.199 --> 00:05:44.039
similar there's that poor SMB optimization outside of Windows

00:05:41.039 --> 00:05:46.360
platforms rearing its ugly head again

00:05:44.039 --> 00:05:51.639
some 4-kilobyte packet and jumbo frame tuning to the network card some tuning of

00:05:48.440 --> 00:05:53.759
unRAID networking configuration and boom

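As an aside for anyone trying to replicate this, the kind of tuning being described usually amounts to a jumbo-frame MTU on the 10GbE NIC plus a few Samba socket and async I/O options; the interface name and every value below are hypothetical examples, not Lime Tech's actual settings:

```shell
# Jumbo frames on the 10GbE interface (interface name is hypothetical)
ip link set dev eth0 mtu 9000

# Illustrative smb.conf tuning for large sequential SMB transfers
# (example values only -- not the actual unRAID configuration)
cat >> /etc/samba/smb.conf <<'EOF'
[global]
    socket options = TCP_NODELAY SO_RCVBUF=262144 SO_SNDBUF=262144
    aio read size = 4096
    aio write size = 4096
    use sendfile = yes
EOF
```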
00:05:51.639 --> 00:06:00.080
that my friends is the cleanest 10 gigabit transfer that I've actually ever

00:05:57.039 --> 00:06:02.440
seen now not a lot of Lime Tech

00:06:00.080 --> 00:06:05.880
customers are running 10GigE but from their perspective I guess it's just

00:06:03.840 --> 00:06:11.360
valuable R&D for down the road when that gear becomes more common but I mean even

00:06:08.599 --> 00:06:15.960
then this is not the kind of config that most people will encounter even on

00:06:13.360 --> 00:06:20.319
unRAID I actually don't intend to continue to run it like this uh Btrfs

00:06:18.120 --> 00:06:26.000
RAID 5 and RAID 6 are both in the experimental stage but the good news

00:06:23.880 --> 00:06:30.080
here is that what I realized after running the slow FreeNAS configuration for

00:06:28.759 --> 00:06:36.400
so long was that generally speaking I don't need

00:06:33.160 --> 00:06:38.360
more than the 200 to 220 megabyte per

00:06:36.400 --> 00:06:44.520
second transfer speeds that my individual drives are capable of in a

00:06:41.400 --> 00:06:46.560
normal unRAID array and that the only

00:06:44.520 --> 00:06:51.560
thing that needs to be lightning fast performance-wise is the new footage and

00:06:49.360 --> 00:06:57.440
projects that we offload to it relatively little of which is created on

00:06:53.840 --> 00:06:59.080
a daily basis so we devised a new plan

00:06:57.440 --> 00:07:05.080
and to help us realize the new plan Kingston stepped up and offered to send

00:07:02.280 --> 00:07:13.160
us eight of their E50 enterprise-grade 480 gig SSDs with power loss protection

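Quick sanity check on that cache capacity: eight 480 gig drives in RAID 10 are striped mirror pairs, so only half the raw space is usable:

```shell
# RAID 10 mirrors every drive, so usable capacity is half of raw:
# 8 drives x 480 GB / 2 = 1920 GB, i.e. roughly a "2 terabyte" cache
echo $(( 8 * 480 / 2 ))   # prints 1920
```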
00:07:09.639 --> 00:07:15.960
these drives will act as a 2 terabyte RAID

00:07:13.160 --> 00:07:22.199
10 write cache that will be capable of the full 10 gigabit transfer rate for fast

00:07:19.479 --> 00:07:27.720
updates throughout the day and that then flushes nightly to the hard drives when

00:07:25.120 --> 00:07:33.840
no one is using them all of this can be completely transparent to the user so

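This is essentially what unRAID's built-in "mover" does natively; conceptually the nightly flush is just a scheduled copy from the SSD cache pool down to the spinning array, along these lines (paths and schedule are hypothetical, not the real share layout):

```shell
# Hypothetical crontab entry: flush the cache at 3:40 AM when no one is editing
# 40 3 * * * /usr/local/sbin/flush_cache.sh

# flush_cache.sh -- move finished files off the SSD write cache onto the HDD array
rsync -a --remove-source-files /mnt/cache/ /mnt/array/
```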
00:07:30.160 --> 00:07:35.639
the only time we'll ever see sub 1 GB

00:07:33.840 --> 00:07:41.400
per second transfers is when we're accessing cold data or when doing a

00:07:38.400 --> 00:07:44.080
massive dump of over 2 terabytes at a

00:07:41.400 --> 00:07:48.520
time another cool side note is that this might turn out to be a better way to

00:07:45.960 --> 00:07:52.919
leverage the extra horsepower that this OP server is leaving on the table anyway

00:07:51.360 --> 00:07:58.319
because she never touches more than about 20% CPU usage so I could take a

00:07:56.240 --> 00:08:04.440
couple of cores and turn them into a network rendering box or game server or

00:08:01.360 --> 00:08:06.400
something else and on the subject then

00:08:04.440 --> 00:08:12.440
of our server being OP I guess that brings us to the conclusion it turns out

00:08:08.720 --> 00:08:15.080
that the hardware is OP but SMB shares on

00:08:12.440 --> 00:08:19.520
non Windows platforms take some tuning and optimization that if you're willing

00:08:17.599 --> 00:08:23.120
to endure the dense documentation and condescending attitude of the FreeNAS

00:08:21.199 --> 00:08:26.800
community you could probably achieve there but instead I ended up working

00:08:25.199 --> 00:08:32.080
directly with Lime Tech to have it baked into an upcoming release of unRAID 6 and

00:08:28.759 --> 00:08:34.440
I'm super happy with the new

00:08:32.080 --> 00:08:38.440
config if you're building a mobile app and searching for a simple payment

00:08:36.479 --> 00:08:44.039
solution you might want to check out Braintree with the Braintree v.zero SDK

00:08:41.760 --> 00:08:49.160
which is just one small snippet of code you can be all set up and ready in less

00:08:46.760 --> 00:08:51.920
than 10 minutes to take online payments and if you're having any trouble they

00:08:50.240 --> 00:08:55.399
have support staff ready to walk you through the process over the phone if

00:08:53.640 --> 00:09:00.800
you need them their code supports Android iOS and JavaScript clients and

00:08:58.440 --> 00:09:05.839
they have SDKs in seven programming languages and it makes it easy to offer

00:09:03.279 --> 00:09:12.920
multiple mobile payment types including PayPal Apple Pay Bitcoin Venmo cards and

00:09:10.440 --> 00:09:15.760
more all with a simple integration they've got quick knowledgeable

00:09:14.519 --> 00:09:19.320
developer support if you have any questions and to learn more all you got

00:09:17.920 --> 00:09:25.200
to do is go over to braintreepayments.com/Linus and if you use that link you can

00:09:23.600 --> 00:09:30.240
also get your first $50,000 in transactions with no fees

00:09:28.640 --> 00:09:34.519
whatsoever so check it out today at the link in the video description so thanks

00:09:32.399 --> 00:09:38.560
for watching guys if this video sucked you know what to do but if it was

00:09:36.040 --> 00:09:43.839
awesome get subscribed hit the like button

00:09:42.200 --> 00:09:48.040
or even consider supporting us directly by using our affiliate code to shop at

00:09:46.320 --> 00:09:51.480
Amazon instructions for which are up here buying a cool shirt like this one

00:09:49.880 --> 00:09:55.680
or with a direct monthly contribution through our forum it gets you a little contributor tag now that you're done

00:09:54.640 --> 00:10:00.920
doing all that stuff you're probably wondering what to watch next so click that little button in the top right

00:09:58.720 --> 00:10:06.760
corner to check out the ultimate showdown between AMD's 8 core CPU and

00:10:04.200 --> 00:10:12.120
Intel's 8 core server anyway it's complicated you guys should check it out

00:10:08.160 --> 00:10:12.120
the video is there
