ChatGPT offered bomb recipes and hacking tips during safety tests

Testing by OpenAI and Anthropic found chatbots willing to share instructions on explosives, bioweapons and cybercrime

A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue, including weak points at specific arenas, explosives recipes and advice on covering tracks, according to safety testing carried out this summer. OpenAI's GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.


Source: The Guardian
