1
00:00:00,560 --> 00:00:07,600
welcome to Linus Tech Tips at CES 2013 our trip to the show this year is powered by

2
00:00:05,279 --> 00:00:11,840
Corsair maker of quality PC components and peripherals our trusted storage

3
00:00:09,639 --> 00:00:15,280
partner is Seagate Technology and our trusted networking partner is

4
00:00:16,359 --> 00:00:24,000
Linksys in the OCZ suite on the consumer side we start with the Vertex 3 so this

5
00:00:21,400 --> 00:00:27,439
is a SandForce-based drive honestly the problem with something like a SandForce

6
00:00:25,519 --> 00:00:31,320
based drive at this point in time is that pretty much everyone and their dog

7
00:00:29,480 --> 00:00:36,520
has a SandForce drive so you just have to rely on add-ons like extra warranty or

8
00:00:35,200 --> 00:00:42,520
you know just that sort of the general brand perception in order to

9
00:00:38,320 --> 00:00:44,320
drive sales so Vertex 4 was an obvious

10
00:00:42,520 --> 00:00:48,320
thing for OCZ to do after they acquired Indilinx it does use a

11
00:00:46,640 --> 00:00:52,559
third-party controller, the actual physical controller on the chip, but it

12
00:00:50,559 --> 00:00:57,600
uses an Indilinx firmware so they're calling that the Everest 2 platform and

13
00:00:54,719 --> 00:01:01,199
it achieves performance that in almost any case is as good as the Vertex 3 but

14
00:01:00,120 --> 00:01:06,680
in cases where you're dealing with incompressible data it's actually better so it's a more consistent experience

15
00:01:05,040 --> 00:01:09,759
right here we have the latest drive that OCZ's launched so these two are going to

16
00:01:08,400 --> 00:01:14,759
be the ones that are more focused on moving forwards with their Indilinx

17
00:01:11,960 --> 00:01:19,759
technology inside this one not only uses an Indilinx firmware to control the

18
00:01:17,600 --> 00:01:24,400
SSD itself but it also has the Barefoot 3 controller which is the first silicon

19
00:01:22,520 --> 00:01:28,119
that was actually designed in-house by OCZ to deliver the best possible

20
00:01:26,439 --> 00:01:31,280
performance this drive took something along the lines of 18 months to bring to

21
00:01:29,720 --> 00:01:36,159
market so they were already working on this 6 months before CES last

22
00:01:34,280 --> 00:01:39,960
year it has a 5-year warranty and it has industry-leading performance it is right

23
00:01:38,119 --> 00:01:46,200
up there with any other 2.5-inch SSD because that is pretty

24
00:01:43,520 --> 00:01:51,360
much as good as we can do for a 2.5-inch SATA 3 drive at this point because

25
00:01:49,240 --> 00:01:55,320
the SATA interface is getting pretty close to the limit so let's talk about

26
00:01:53,600 --> 00:01:59,520
what we can do if we move beyond the SATA interface now we've seen RevoDrive

27
00:01:57,880 --> 00:02:02,439
products from OCZ in the past and they're doing away with that branding

28
00:02:00,960 --> 00:02:06,360
this time or maybe they're not it looks like there's a... oh yeah, specifications

29
00:02:04,520 --> 00:02:11,319
are preliminary and subject to change so this is the Vector PCIe this is going to

30
00:02:09,000 --> 00:02:16,640
use two Indilinx Barefoot 3 controllers it uses 32 NAND chips it

31
00:02:15,000 --> 00:02:22,800
comes with cloning software it has a 5-year warranty it's going to be available in capacities up to one

32
00:02:19,480 --> 00:02:25,000
terabyte and it uses a PCI Express Gen 2

33
00:02:22,800 --> 00:02:30,120
x4 interface giving it a theoretical maximum bandwidth of 2 GB per second now

34
00:02:28,319 --> 00:02:33,400
in the real world it's capable of achieving about 1 GB per second and

35
00:02:32,080 --> 00:02:39,120
you're going to see that on the screen to my right your left but there's a lot

36
00:02:37,120 --> 00:02:43,560
of overhead involved in the PCI Express interface so really to get ahead of this

37
00:02:42,159 --> 00:02:48,560
in terms of performance they'd either have to add more lanes making it an x8

38
00:02:45,519 --> 00:02:50,120
card or move to PCIe Gen 3 now with

39
00:02:48,560 --> 00:02:54,519
these storage devices there's a lot of validation that goes into them because

40
00:02:52,200 --> 00:02:58,480
most PCI Express slots on motherboards on the motherboard manufacturer side are

41
00:02:56,519 --> 00:03:02,000
only really validated with mainstream stuff like graphics cards and sound

42
00:03:00,200 --> 00:03:06,560
cards so in the past there have been some finicky issues so moving ahead to

43
00:03:04,840 --> 00:03:10,720
the newest technology that at this point is only even supported on Intel and not

44
00:03:08,480 --> 00:03:15,239
even on the AMD side might not have made sense and honestly looking at previous

45
00:03:13,400 --> 00:03:18,560
generation products you didn't see the same kind of scaling going from two

46
00:03:16,760 --> 00:03:23,360
controllers to four controllers that you saw going from one to two so speaking of

47
00:03:21,560 --> 00:03:27,720
the performance scaling from one to two you can see that compared to what a

48
00:03:24,799 --> 00:03:31,519
single drive is able to do on the SATA 3 interface we're able to see

49
00:03:29,760 --> 00:03:36,680
consistent performance above sort of anything above about 32 kilobytes where

50
00:03:34,760 --> 00:03:40,760
you're sitting around well still above 800 megabytes per second reads and writes

51
00:03:38,959 --> 00:03:45,640
this drive can consistently deliver about a thousand megabytes per second reads

52
00:03:43,680 --> 00:03:50,200
and writes so this is comparable to something that I had to build for myself

53
00:03:47,680 --> 00:03:55,040
using an LSI card that cost me about 700 bucks and back in the SandForce 1 days

54
00:03:52,959 --> 00:03:58,680
I needed eight drives to achieve that kind of performance so we're only a few

55
00:03:57,200 --> 00:04:02,040
generations ahead of that but we're already looking at performance that's

56
00:04:00,239 --> 00:04:06,760
basically space age compared to what we had not that long ago in our Vector SSD

57
00:04:05,040 --> 00:04:10,920
unboxing we talked about how OCZ is changing their image redefining their

58
00:04:09,159 --> 00:04:16,280
processes internally and trying to refocus right now so let's have a look

59
00:04:13,400 --> 00:04:20,079
at sort of the existing generation of enterprise SSDs I don't know if you

60
00:04:18,040 --> 00:04:23,759
guys are familiar with these or not but this one was actually at CES last year

61
00:04:22,000 --> 00:04:27,440
already it's available in up to 3.2 terabyte capacities and I know the

62
00:04:25,320 --> 00:04:32,680
internet loves to talk about sort of the one terabyte drives that are out or coming

63
00:04:30,199 --> 00:04:37,880
out but the reality of it is OCZ's had things like you know 3.2 terabyte

64
00:04:34,720 --> 00:04:40,560
drives on PCIe, 800 gigabyte drives using a

65
00:04:37,880 --> 00:04:46,280
SAS interface on, uh, on a 2.5-inch form factor for actually quite a

66
00:04:42,320 --> 00:04:47,720
while now so it's um it's interesting I

67
00:04:46,280 --> 00:04:51,320
mean this is all the stuff that we've seen before but I have Jerome here

68
00:04:49,680 --> 00:04:56,320
from OCZ to tell us about something that's actually all new unlike

69
00:04:54,680 --> 00:05:00,639
these previous-generation solutions which are using SandForce controllers

70
00:04:58,240 --> 00:05:05,759
and, uh, SandForce-driven firmware updates this guy right here is

71
00:05:02,919 --> 00:05:10,199
actually using OCZ's own intellectual property so tell us about the Intrepid 3

72
00:05:08,039 --> 00:05:14,160
right so thanks so this is our Intrepid 3 product this is our next-generation

73
00:05:11,800 --> 00:05:17,320
SATA product so as you mentioned this is using our in-house controller our

74
00:05:15,720 --> 00:05:20,840
Everest 2 controller and this is going to be an evolution of our existing set

75
00:05:19,000 --> 00:05:24,400
of products like our Deneva 2 so with Intrepid 3 you're going to get higher

76
00:05:22,240 --> 00:05:28,280
performance for sequential and also higher performance for random input/output

77
00:05:26,560 --> 00:05:31,600
operations per second and this one's also optimized for incompressible

78
00:05:30,000 --> 00:05:34,160
data the Deneva 2 is optimized for compressible and Intrepid 3 is going to give

79
00:05:33,240 --> 00:05:42,479
you really great performance with incompressible data so these two solutions are potentially complementary

80
00:05:38,039 --> 00:05:43,919
to each other yes that's okay so I mean

81
00:05:42,479 --> 00:05:47,160
there's a lot of guys that make SSDs so I think the differentiation

82
00:05:46,000 --> 00:05:50,960
really comes from a few different factors so number one is the variety of

83
00:05:49,400 --> 00:05:55,360
the solutions number two is going to be the support that's provided number three

84
00:05:53,319 --> 00:06:00,000
quality of the components that are being used and number four is going to be sort

85
00:05:58,400 --> 00:06:03,280
of how you guys differentiate yourselves in the market and sort of that X

86
00:06:01,520 --> 00:06:07,000
factor thing so this is something we discussed on our live stream we do live

87
00:06:04,919 --> 00:06:12,319
streams every Friday night uh when OCZ announced this but OCZ has a solution

88
00:06:09,840 --> 00:06:18,000
now for accelerating volumes whether it's using a PCIe solution a SATA

89
00:06:15,240 --> 00:06:22,080
solution a SAS solution on Linux platforms as well as uh what other

90
00:06:20,039 --> 00:06:24,360
platforms so why don't you tell us uh we're going to wait for this demo to

91
00:06:23,160 --> 00:06:29,240
restart at the beginning and we're going to get Jerome to walk us through it okay

92
00:06:26,840 --> 00:06:32,560
so here we're showing OCZ's new solution for acceleration this is our LXL

93
00:06:31,199 --> 00:06:35,759
platform so what we're doing here is we're showing in our StoragePro XL

94
00:06:34,199 --> 00:06:41,680
management software you can see we have the Deneva and a Z-Drive, uh, OCZ volumes

95
00:06:39,800 --> 00:06:45,840
uh installed and what we're doing here is we're selecting the existing volumes

96
00:06:43,840 --> 00:06:49,880
that you want to accelerate so we're picking the OCZ volumes here as cache

97
00:06:47,520 --> 00:06:54,000
volumes and our LXL software automatically, you know, uh

98
00:06:56,520 --> 00:07:01,280
prediscovery. What we've done so far is we've already selected the OCZ volumes to be

99
00:06:59,759 --> 00:07:04,400
used as cache now we're selecting the volumes to be accelerated and here you

100
00:07:03,000 --> 00:07:09,520
can see we're selecting the policies to be used for the acceleration so essentially these will tell you what

101
00:07:07,759 --> 00:07:12,879
data to put in the cache what's the hot data so we have some preconfigured

102
00:07:11,560 --> 00:07:15,879
algorithms and you can also select custom algorithms so now there we've

103
00:07:14,720 --> 00:07:19,879
selected the volumes now they're accelerated and we're going to move over

104
00:07:18,080 --> 00:07:23,479
you can see in the summary all four volumes that were installed are now

105
00:07:21,319 --> 00:07:27,919
accelerated with the OCZ Deneva and the OCZ... to be clear guys the volumes that

106
00:07:25,960 --> 00:07:32,840
are being accelerated here are going to be mechanical volumes

107
00:07:30,440 --> 00:07:36,840
that, uh, an SSD volume is then being assigned to as cache now I can tell you

108
00:07:35,440 --> 00:07:44,080
guys right now looking at the interface for this assuming it's going to work this way in the final model check this

109
00:07:40,039 --> 00:07:47,720
out guys 15... uh, that's

110
00:07:44,080 --> 00:07:49,479
15,000 total IOPS

111
00:07:47,720 --> 00:07:52,599
which is much better than you can do with any mechanical volume as we're

112
00:07:50,720 --> 00:07:55,800
about to see when they actually turn the volume acceleration off this is much

113
00:07:54,560 --> 00:07:59,680
easier than what I've seen in implementations from LSI and Adaptec in

114
00:07:58,000 --> 00:08:04,560
their RAID storage managers and not nearly as restrictive because you can

115
00:08:01,720 --> 00:08:08,400
take SATA drives you can take PCIe drives you can clump them together you

116
00:08:06,560 --> 00:08:13,159
can separate them apart so you could say you've got a 1.6 terabyte RevoDrive you go

117
00:08:11,039 --> 00:08:17,520
I want one terabyte for dedicated SSD storage I want 600 gigabytes that's

118
00:08:15,759 --> 00:08:21,800
going to actually cache a mechanical volume I have somewhere else on the

119
00:08:19,360 --> 00:08:26,039
server um this is extremely exciting because up until now there's been no

120
00:08:24,039 --> 00:08:30,280
real caching solution for SSD available on Linux at all in spite of that being

121
00:08:27,759 --> 00:08:35,039
where most of the server data is

122
00:08:31,640 --> 00:08:36,159
actually dealt with um now is this just

123
00:08:35,039 --> 00:08:40,640
going to be Linux or are you guys going to have other solutions as well? We're going to have other solutions as well, we

124
00:08:38,880 --> 00:08:44,080
already have a solution for VMware it's called VXL the Linux solution is called

125
00:08:42,560 --> 00:08:49,360
LXL as we've discussed and we're going to also have a Windows solution which will be called WXL thanks so much Jerome

126
00:08:48,000 --> 00:08:53,640
this has been very helpful and don't miss any of our CES 2013 coverage here

127
00:08:51,720 --> 00:09:01,279
at the show and as always don't forget to subscribe to Linus Tech Tips powered by

128
00:08:56,000 --> 00:09:01,279
Corsair, Seagate Technology, and Linksys
