WEBVTT

00:00:00.000 --> 00:00:08.880
Sorry I haven't been hosting for a while guys, I was busy growing this mustache.

00:00:08.880 --> 00:00:15.840
Reviews are up for the GeForce RTX 5080, NVIDIA's latest AI product formerly known as the GPU,

00:00:15.840 --> 00:00:20.400
and for many, including Linus Tech Tips' Linus Sebastian, it's falling a bit flat.

00:00:20.400 --> 00:00:26.120
In native rendering performance at 1440p, which is a lot of P, the RTX 5080 is roughly

00:00:26.120 --> 00:00:32.920
10% better than the 4080 Super and 7% better than AMD's RX 7900 XTX.

00:00:32.920 --> 00:00:37.280
Those leads increased to 20% and 10% respectively at 4K.

00:00:37.280 --> 00:00:42.200
But the 5080 is still well behind the 4090, thanks in no small part to NVIDIA keeping the

00:00:42.200 --> 00:00:50.000
VRAM limited to 16GB, while the 4090 has 24GB, and it does not like to share.

00:00:50.000 --> 00:00:54.280
This release is disappointing to enthusiasts who remember that the RTX 4080 outperformed

00:00:54.280 --> 00:01:00.640
the 3090 and 3090 Ti at launch. Although NVIDIA gave that card a $1,200 price tag, the 5080

00:01:00.640 --> 00:01:05.360
only costs $1,000, so it's missing a whole $200 worth of GPU juice apparently, but no

00:01:05.360 --> 00:01:09.800
matter how mid reviewers say the RTX 50 series cards are so far, they're still going to

00:01:09.800 --> 00:01:16.000
be hard to find. NVIDIA has warned of availability issues due to significant demand, probably because people

00:01:16.000 --> 00:01:19.840
just love AI so much and the bubble is definitely not popping, can't do that.

00:01:19.840 --> 00:01:23.560
I didn't hear anything, it's solid. I'll be over here AI-ing guys.

00:01:23.560 --> 00:01:28.720
Speaking of which, there's a fresh new Chinese AI model to give US investors more panic attacks.

00:01:28.720 --> 00:01:34.800
E-commerce giant Alibaba has released Qwen 2.5 Max, which the company says outperforms

00:01:34.800 --> 00:01:38.320
DeepSeek V3, although it can't be run locally.

00:01:38.320 --> 00:01:44.800
How are they doing this? Well, after DeepSeek's own model indicated to redditors that it sometimes confuses itself

00:01:44.800 --> 00:01:49.960
with ChatGPT, OpenAI told the Financial Times it has evidence that DeepSeek trained

00:01:49.960 --> 00:01:55.240
models on data generated by ChatGPT, which was famously trained only on original text

00:01:55.240 --> 00:01:58.280
handwritten by Sam Altman, the manifesto we call it.

00:01:58.280 --> 00:02:02.720
It's artisanal. Microsoft says they're investigating these claims. In case you forgot, they're best

00:02:02.720 --> 00:02:07.000
friends with OpenAI, which we know because thanks to social media we can follow the friendships

00:02:07.000 --> 00:02:10.600
and rivalries of tech CEOs like they're Minecraft YouTubers.

00:02:10.600 --> 00:02:15.280
Now as we said on Monday, the US stock market panicked in response to the release of DeepSeek

00:02:15.280 --> 00:02:23.160
AI's models. But does that make sense? I mean, there are some reports that, while DeepSeek's chatbots were trained on NVIDIA GPUs,

00:02:23.160 --> 00:02:28.480
when you use one on the web, it's now running on AI chips made by Huawei, which would give

00:02:28.480 --> 00:02:32.080
US investors some reason to be worried about NVIDIA's monopoly.

00:02:32.080 --> 00:02:36.080
However, whether these Chinese models are actually as cheap to train and use as DeepSeek

00:02:36.080 --> 00:02:43.560
claims is being debated by analysts. Anthropic CEO Dario Amodei argues the fact that Chinese companies have to turn to less

00:02:43.560 --> 00:02:47.600
powerful hardware is proof that American restrictions on the export of AI chips are

00:02:47.600 --> 00:02:53.520
working. But let's say, sure, DeepSeek is way more efficient, as explained by Sam Altman; that doesn't

00:02:53.520 --> 00:02:58.320
mean AI companies are going to buy less hardware. Uh, that $500 billion server?

00:02:58.320 --> 00:03:01.320
Yeah. We could do 250. Keep it coming.

00:03:01.320 --> 00:03:05.120
I thought that was all the stories, but it turns out I saved a quick bit in my mustache.

00:03:05.120 --> 00:03:11.960
Mustache, mustache! Enough! Google has open-sourced PebbleOS, the operating system that powered Pebble smartwatches, which

00:03:12.200 --> 00:03:17.080
you may not be old enough to remember. They were bought by Fitbit, which was bought by Google in 2021.

00:03:17.080 --> 00:03:22.240
Well, the original founder of Pebble, and super gigachad, Eric Migicovsky, says the

00:03:22.240 --> 00:03:25.240
open-sourcening means he's bringing Pebble back.

00:03:25.240 --> 00:03:29.160
His new team is working on an open-source smartwatch with a focused core set of features that

00:03:29.160 --> 00:03:33.360
users can tinker with so they don't have to depend on companies to fix stuff like the

00:03:33.360 --> 00:03:36.960
Blue Triangle of Death that briefly afflicted Garmin wearables this week.

00:03:36.960 --> 00:03:39.960
I mean, Blue Triangle? That's not even a thing.

00:03:39.960 --> 00:03:47.280
They made it up. Next, Pink Tetrahedrons. U.S. President Donald Trump said in a speech on Monday that his government will place

00:03:47.280 --> 00:03:53.440
tariffs on the import of computer chips and semiconductors to return production of these

00:03:53.440 --> 00:04:00.800
essential goods to the United States. The Biden Administration's CHIPS Act tried to do this by planning to invest 52 billion

00:04:00.800 --> 00:04:05.700
dollars in domestic chip foundries. But Trump says they had it the wrong way around.

00:04:05.700 --> 00:04:14.680
Chip companies don't need money. They need an incentive to not pay what Trump says could be a 25, 50, even 100 percent tax.

00:04:14.680 --> 00:04:20.760
Wow, it's like he's here. And that's why the U.S. is collaborating with every country in the world to make sure

00:04:20.760 --> 00:04:25.200
they all place those tariffs too. That way, TSMC will have no choice.

00:04:25.200 --> 00:04:32.720
It's foolproof. Comcast is rolling out new tech in select cities that it says could reduce latency by 78 percent

00:04:32.720 --> 00:04:37.080
in what the telecom giant calls an ultra-low-lag connectivity experience.

00:04:37.080 --> 00:04:43.160
They put experience on the end and it makes it funny. The experience uses a tech standard that's been in the works for a while called L4S,

00:04:43.160 --> 00:04:47.560
which stands for low-latency, low-loss, scalable throughput, girl boss.

00:04:47.560 --> 00:04:52.720
L4S sounds like a Craigslist listing. You tell me in the comments what it means.

00:04:52.720 --> 00:04:58.720
I'm an L, I'm looking for an S. If Comcast's claims are true, this might make you happier with your existing internet, which

00:04:58.720 --> 00:05:03.400
is good because the new chair of the FCC is killing his predecessor's proposal that would have made

00:05:03.400 --> 00:05:06.720
it easier for renters to switch internet service providers.

00:05:06.720 --> 00:05:09.760
Love the one you're with? Is that so hard?

00:05:09.760 --> 00:05:16.240
No one commits anymore. A coder has published a new open-source tar pit tool that tackles the problem of AI training

00:05:16.240 --> 00:05:21.440
webcrawlers hogging websites' resources by trapping those webcrawlers in an infinite, randomly

00:05:21.440 --> 00:05:29.720
generated maze of linked pages. The tool is called Nepenthes, as in the biological name for a genus of carnivorous plants.

00:05:29.720 --> 00:05:33.640
The tool is just like those, except while tropical insects are scary and huge, they're

00:05:33.640 --> 00:05:36.920
not known for hitting the same servers a million times in 24 hours.

00:05:36.920 --> 00:05:40.840
Although whoever wrote this has clearly never been to Botswana.

00:05:40.840 --> 00:05:44.480
No botflies though, just keep your carapaces out of my skin.
