WEBVTT

00:00:00.000 --> 00:00:05.840
If you find TechLink to be an enrichment in your life, I invite you to write a kind little comment

00:00:05.840 --> 00:00:11.680
below. When I'm all snuggled up in bed tonight, I'll read some of them to my cat. Then he'll have to

00:00:11.680 --> 00:00:18.560
respect me. In the wake of an Oxford study that suggested barely anyone uses AI tools on a daily

00:00:18.560 --> 00:00:23.920
basis, three major AI companies have rolled out feature updates for their chatbots in an attempt

00:00:23.920 --> 00:00:29.600
to make them useful for someone other than coders and tech bros left over from the crypto bubble.

00:00:29.680 --> 00:00:35.280
OpenAI's ostensibly less evil cousin, Anthropic, announced tool use for Claude,

00:00:35.280 --> 00:00:41.520
allowing it to perform tasks and manipulate data. AI search engine Perplexity announced Pages,

00:00:41.520 --> 00:00:47.360
which will create research reports on any topic that can be customized, shared, or presumably

00:00:47.360 --> 00:00:53.680
handed in to your teacher. And OpenAI just gave free ChatGPT users the ability to upload images

00:00:54.480 --> 00:01:00.320
and files and use custom GPTs. Those features were previously paywalled, which, as this fan points

00:01:00.320 --> 00:01:15.600
out, obviously points towards something special coming for paying users. OpenAI also just announced

00:01:15.600 --> 00:01:23.200
ChatGPT Edu, which basically gives schools an education discount to convince them to use it,

00:01:23.200 --> 00:01:28.400
and it comes on the heels of a report claiming the company finally signed that rumored Apple deal

00:01:28.400 --> 00:01:33.840
to put their AI tech in the next iPhone. That same report claims Sam Altman has discussed

00:01:33.840 --> 00:01:40.960
transforming OpenAI from a wacky capped-profit company controlled by a non-profit into either a B

00:01:40.960 --> 00:01:46.720
corporation, allowed to care about something other than benefiting shareholders, or a regular old

00:01:46.720 --> 00:01:52.160
for-profit company. But it's only because that might be necessary to achieve a utopia designed by

00:01:52.160 --> 00:01:56.640
Sam's machine god and populated by sentient robots after they've killed us all off, okay?

00:01:58.000 --> 00:02:03.040
Trust him, it's worth it. TikTok is working on a modified version of its recommendation algorithm

00:02:03.040 --> 00:02:09.520
that would not share any backend with its Chinese counterpart, Douyin, and thus potentially save

00:02:09.520 --> 00:02:15.120
the app from being banned in the US. That's all according to insider sources who spoke to Reuters,

00:02:15.120 --> 00:02:21.680
but not according to TikTok, who immediately tweeted that the report is misleading and inaccurate.

00:02:21.680 --> 00:02:27.680
But they seemed to be referring specifically to the claim that a US-centric algorithm

00:02:27.680 --> 00:02:33.920
would enable ByteDance to sell TikTok. They didn't explicitly deny they're working on such an

00:02:33.920 --> 00:02:38.880
algorithm. In fact, Krystal Hu, the report's co-author, says that when she reached out for

00:02:38.880 --> 00:02:45.520
comment on the story, TikTok told her the info was correct. To be fair, they probably didn't

00:02:45.520 --> 00:02:49.680
read the whole thing before getting bored and swiping up. But even if TikTok survives in the

00:02:49.680 --> 00:02:54.800
US, it may have to deal with upcoming bills like this one from New York's governor that could

00:02:54.800 --> 00:03:00.240
ban smartphones in schools. And then how will kids ignore their history class to watch another

00:03:00.240 --> 00:03:12.720
kid tell them that Helen Keller wasn't real? On TikTok? Yeah, Google has confirmed that this

00:03:12.720 --> 00:03:17.680
week's massive document leak revealing key details about how its search algorithm works

00:03:17.680 --> 00:03:23.120
is real after taking two days to cover their ears and pretend everything's fine as required by law.

00:03:23.120 --> 00:03:28.080
The tech giant didn't get into specifics. They only cautioned against making inaccurate assumptions

00:03:28.080 --> 00:03:33.600
about search based on out-of-context, outdated, or incomplete information, suggesting that the

00:03:33.600 --> 00:03:38.480
public trust Google's demonstrably inaccurate public statements instead. Google also took a

00:03:38.480 --> 00:03:43.840
moment to put up a blog post explaining why Google Search's new AI Overviews told people it was

00:03:43.840 --> 00:03:49.440
okay to put glue in pizza sauce and eat rocks last week. Step one, it's good for you. It gave

00:03:49.440 --> 00:03:55.280
those answers simply because prior to those screenshots going viral, practically no one

00:03:55.280 --> 00:04:00.800
asked Google how many rocks they should eat. How would Google Search know that it wasn't okay

00:04:00.800 --> 00:04:06.720
to eat one little rock if it had never thought about it before? Google says AI Overviews cited

00:04:06.720 --> 00:04:12.560
a satirical article recommending eating rocks because of a data void issue. There's just hardly

00:04:12.560 --> 00:04:17.360
any info on the web about whether eating rocks is okay, formatted as an answer to the question of

00:04:17.360 --> 00:04:26.400
how many rocks one should eat. We can all chuckle at this understandable error. It's quick bits time.

00:04:28.080 --> 00:04:32.800
Why haven't you written a comment yet? Spotify has announced that they will be giving refunds to

00:04:32.800 --> 00:04:38.240
customers who bought Car Thing. After the company announced they would be remotely bricking all

00:04:38.240 --> 00:04:44.880
Car Things, they initially offered purchasers nothing. Some who complained were offered a

00:04:44.880 --> 00:04:49.440
few months of Spotify premium, but wouldn't you know it? The company suddenly started offering

00:04:49.440 --> 00:04:55.680
refunds not long after a class action lawsuit was filed accusing Spotify of misleading customers by

00:04:55.680 --> 00:05:00.480
selling a product that would quickly be rendered unusable. But are they doing anything about all

00:05:00.480 --> 00:05:06.960
the e-waste they're creating? Has the ocean or any of the fish hired a lawyer? I didn't think so.

00:05:07.040 --> 00:05:12.160
Say you're a fish. Prove it. Google Cloud has released a public explanation for how it wound

00:05:12.160 --> 00:05:17.760
up accidentally deleting a customer account two weeks ago. Namely, somebody on Google's end

00:05:17.760 --> 00:05:23.840
accidentally left a parameter blank. Now that might not sound like an issue worthy of an entire

00:05:23.840 --> 00:05:29.440
blog post from a multi-billion dollar company, but the customer in question was UniSuper,

00:05:29.440 --> 00:05:36.240
an Australian pension fund charged with managing 135 billion Australian dollar-y-dos.

00:05:37.120 --> 00:05:41.760
On the bright side, the accounts have been successfully restored. On the less bright side,

00:05:41.760 --> 00:05:46.800
apparently your life savings can be wiped out in an instant if a dude named Robert Droptable

00:05:46.800 --> 00:05:55.520
decides to sign up for a 401(k). The Internet Archive, the organization that runs the Wayback

00:05:55.520 --> 00:06:00.880
Machine, was hit with a several-day-long DDoS attack involving tens of thousands of fake

00:06:00.880 --> 00:06:05.360
information requests per second, according to the Archive's Director of Library Services.

00:06:05.360 --> 00:06:10.640
While the exact motivation for the attack is unknown, a group called SN_BlackMeta has taken

00:06:10.640 --> 00:06:15.600
credit for the assault by posting what appears to be the profile picture of an edgy 14-year-old

00:06:15.600 --> 00:06:20.400
from the early 2000s. Who could possibly understand what's going on in his dark mind?

00:06:20.480 --> 00:06:27.920
AI company The Simulation has announced ShowRunner, an app that wants to be the Netflix of AI,

00:06:27.920 --> 00:06:34.480
with AI-generated original series. The TV shows are pretty creative. There's one about super

00:06:34.480 --> 00:06:40.560
intelligent AI devices, a family drama about two siblings who create an AI version of their dead

00:06:40.560 --> 00:06:45.920
mom. There's even a horror anime about a future where humans choose to physically augment themselves

00:06:45.920 --> 00:06:52.480
with AI technology. Even worse, they want watchers of these shows to generate their own episodes.

00:06:52.480 --> 00:06:57.760
Yes, because the thing that will surely make AI-generated TV shows better is fan fiction.

00:06:57.760 --> 00:07:02.560
And a creator on Chinese video site Bilibili has gone viral on social media,

00:07:02.560 --> 00:07:07.600
thanks to an incredibly intricate custom tank simulator for World of Tanks,

00:07:07.600 --> 00:07:13.760
using soda bottles for shells, a Meta Quest 2 as a periscope, a Logitech driving wheel,

00:07:13.760 --> 00:07:19.840
and some kind of crank situation to control the main gun. It looks awesome. And frankly,

00:07:19.840 --> 00:07:24.080
if we could take all the resources currently being used to generate AI slop for the internet

00:07:24.080 --> 00:07:28.960
and build more stuff like this, I wouldn't have to fish for nice comments to read to my cat to be

00:07:28.960 --> 00:07:34.720
happy. I was joking. I don't have a cat. What I actually require to be happy is that you come back

00:07:34.720 --> 00:07:39.760
on Monday for more tech news, okay? Or I will cry. No pressure.
