
Why I Can’t Stop Writing About Elon Musk | Technology

“I hope I don’t have to cover Elon Musk for a while,” I thought last week after sending TechScape out to readers. Then I got a message from the news editor: “Can you keep an eye on Elon Musk’s Twitter feed this week?”

Finally, I read the feed of the world’s most powerful posting addict carefully, and my brain liquefied and poured out of my ears:

His briefest overnight stop came on Saturday night, when he logged off after retweeting a meme comparing London’s police to the SS. Four and a half hours later, he was back online to retweet a crypto influencer complaining about jail sentences for Britons attending protests.

But somehow I was surprised by what I found. I knew the rough outlines of Musk’s internet presence from years of following him: a three-way split between touting his real companies, Tesla and SpaceX; eager reposting of bargain-basement nerd humor; and increasingly right-wing political agitation.

Tracking Musk in real time, however, has revealed the ways in which his chaotic posting style has been distorted by his rightward shift. His promotion of Tesla is increasingly framed in terms of culture war, with the Cybertruck in particular promoted in language that makes it sound as if buying one will help defeat the Democrats in the US presidential election in November. The bargain-basement nerd humor mentioned above is tinged with anger at the world for not thinking he’s the coolest person on the planet. And the right-wing political agitation is increasingly extreme.

Musk’s involvement in the chaos in the UK seems to have pushed him further into the arms of the far right than ever before. This month, he first tweeted at Lauren Southern, a far-right Canadian internet personality best known in the UK for being given a visa ban by Theresa May’s government over her Islamophobia. More than just a tweet: he’s also been supporting her financially, sending her around £5 a month via Twitter’s subscription function. And then there was the headline-grabbing retweet of Britain First’s co-leader. On its own, that could have been chalked up to Musk not knowing which pond he was swimming in; two weeks later, the pattern is clearer. These are his people now.

Well that’s fine then

A nice example of the difference between scientific press releases and scientific papers, today from the AI world. The press release, from the University of Bath:

AI does not pose an existential threat to humanity, new research shows.

LLMs have a superficial ability to follow instructions and excel in language skills, but they have no potential to master new skills without explicit instruction. This means that they remain inherently controllable, predictable and safe.

The paper by Lu et al:

It is claimed that large language models, consisting of billions of parameters and pre-trained on extensive web-scale corpora, acquire certain capacities without having been specifically trained for them … We present a novel theory that explains emergent capacities, taking into account their potential confounding factors, and rigorously support this theory through more than 1,000 experiments. Our findings suggest that putative emergent capacities are not really emergent, but result from a combination of in-context learning, model memory, and linguistic knowledge.

Our work is a fundamental step in explaining language model performance, providing a template for their efficient use and clarifying the paradox of their ability to excel in some cases and fail in others, thus demonstrating that their capabilities should not be overestimated.

The press release version of this story went viral, for predictable reasons: Everyone loves to watch the giants of Silicon Valley get skewered, and the existential risks of AI have become a divisive topic in recent years.

But the paper is still a few steps away from the claim the university’s press department wants to make about it. That’s a shame, because what the paper does show is interesting and important anyway. There is a lot of focus on so-called “emergent” abilities of frontier models: tasks and capabilities that did not exist in the training data, but that the AI system demonstrates in practice.

These emergent capabilities are worrying for those concerned about existential risks, because they suggest that AI safety may be harder to guarantee than we would like. If an AI can do something it hasn’t been trained to do, there’s no easy way to guarantee that a future AI system will be safe: you can leave things out of the training data, but it might still figure out how to do them.

The paper shows that, in some situations, those emergent skills aren’t emergent at all. Instead, they’re a result of what happens when you take an LLM like GPT and hammer it into the shape of a chatbot, before asking it to solve problems in the form of a question-and-answer conversation. That process, the paper suggests, means that the chatbot can never really be asked “zero-shot” questions, where it has no prior exposure: the art of prompting ChatGPT is inherently a way of teaching it a little about what form the answer should take.
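The point is easiest to see in the chat template itself. The sketch below is a generic illustration, not any real model’s format: before the model ever sees a user’s question, the question is wrapped in instruction-shaped scaffolding that already tells it what kind of answer is expected.

```python
# Minimal sketch of a chat template (hypothetical format, for illustration).
# Even a "zero-shot" question reaches the model wrapped in a frame that
# demonstrates the expected question-and-answer structure.

def apply_chat_template(question: str) -> str:
    """Wrap a bare question in a generic system/user/assistant frame."""
    return (
        "System: You are a helpful assistant. Answer the user's question.\n"
        f"User: {question}\n"
        "Assistant:"
    )

bare = "What is the boiling point of water?"
wrapped = apply_chat_template(bare)
print(wrapped)
```

The wrapped prompt carries an instruction, a role structure, and a cue for where the answer goes; none of that was in the bare question, which is why, on the paper’s argument, the model is never answering truly unprompted.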

It’s an interesting finding! Not exactly one that proves the AI apocalypse is impossible, but – if you want good news – one that suggests it’s unlikely to happen tomorrow.

Training pains

Nvidia is accused of ‘unjust enrichment’. Photo: Dado Ruvić/Reuters

Nvidia scraped YouTube to train its AI systems. Now that’s coming back to bite it:


A federal lawsuit alleges that Nvidia, which focuses on designing chips for AI, used the videos of YouTube creator David Millette for its AI training work. The lawsuit accuses Nvidia of “unlawful enrichment and unfair competition” and seeks class action status to include other YouTube content creators with similar claims.

Nvidia unlawfully “scraped” YouTube videos to train its Cosmos AI software, according to a lawsuit filed on Wednesday in the Northern District of California. Nvidia used software on commercial servers to evade YouTube’s detection and download “approximately 80 years of video content per day”, the lawsuit says, citing a 5 August report by 404 Media.

This lawsuit is unusual in the AI world, if only because Nvidia has been somewhat tight-lipped about the sources of its training data. Most AI companies that have faced lawsuits have been comparatively open about their disregard for copyright restrictions. Take Stable Diffusion, which drew its training data from the open-source LAION dataset. Well:

(Judge) Orrick ruled that the artists had properly argued that the companies violated their rights by illegally storing work and that Stable Diffusion, the AI image generator at issue, may have been “substantially based on copyrighted work” and “specifically designed to facilitate that infringement.”

Of course, not every AI company is competing on a level playing field here. Google has a unique advantage: everyone gives it permission to train its AI on their material. Why? Because otherwise they’d be kicked out of the search engine altogether:

Many website owners say they can’t afford to block Google’s AI from summarizing their content.

That’s because the Google tool that combs through web content to find its AI answers is the same one that tracks web pages for search results, publishers said. Blocking Alphabet Inc’s Google in the way sites have blocked some of its AI competitors would also hamper a site’s ability to get discovered online.

Ask me anything

What am I thinking? Ask me this and other tech-related questions.

One more, self-indulgent, note. After 11 years, I’m leaving the Guardian at the end of this month, and 2 September is my last TechScape. I’ll be answering reader questions, big and small, as I wrap up, so if there’s anything you’ve ever wanted answered, from tech recommendations to industry gossip, hit reply and drop me an email.

The broader TechScape

TikTok is boring you. Photo: Jag Images/Getty Images/Image Source