Technology & Science World

Technology & Science World

Stuff Meets… tech author Nicole Kobie 

… the Future: Why tomorrow’s tech still isn’t here author … the Future: Why tomorrow’s tech still isn’t here, published … it work in the end. 
Tech needs more Halloween costumes 
My … think AI is an amazing technology applied correctly, carefully and thoughtfully …

Technology & Science World

Engadget Podcast: The death of 4chan (for now)

4chan, one of the trolliest places on the internet, could be gone for good following last week’s hack. In this episode, Devindra and Cherlynn break down what 4chan was and why it’s influence can be found practically everywhere now. It’s like we’re living in  a poster’s paradise. Also, we discuss YouTube’s 20th birthday and all of the memories (and frustrations) it’s given us over the years.

Subscribe!

iTunes
Spotify
Pocket Casts
Stitcher
Google Podcasts

Topics

4chan is dead, RIP? – 2:08
Youtube turns 20 – 15:59
Nintendo’s Switch 2 is finally available for preorder at the same price – 33:03
Apple and Meta fined a combined €800m under Europe’s New Digital Markets Act – 34:44
OpenAI might be interested in Chrome if Google was compelled to sell – 35:30
Google pays Samsung an “enormous” amount to put Gemini on phones – 37:50
The Washington Post partners with OpenAI to bring its content to ChatGPT – 38:43
Around Engadget – 41:52
Listener Mail: Transitioning from Windows to Mac for CAD / 3D design – 47:01
Pop culture picks – 54:55

Credits 
Hosts: Devindra Hardawar and Cherlynn LowProducer: Ben EllmanMusic: Dale North and Terrence O’Brien
This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/engadget-podcast-the-death-of-4chan-for-now-113033187.html?src=rss

Technology & Science World

Popular LLMs produce insecure code by default

A new study from Backslash Security looks at seven current versions of OpenAI’s GPT, Anthropic’s Claude and Google’s Gemini to test the influence varying prompting techniques have on their ability to produce secure code. Three tiers of prompting techniques, ranging from ‘naive’ to ‘comprehensive,’ were used to generate code for everyday use cases. Code output was measured by its resilience against 10 Common Weakness Enumeration (CWE) use cases. The results show that although secure code output success rises with prompt sophistication all LLMs generally produced insecure code by default. In response to simple, ‘naive’ prompts, all LLMs tested generated insecure… [Continue Reading]

Scroll to Top