a) You should only do this in a sandbox

b) You can have the AI run a "firewall" prompt on the final output. That is, before executing anything, pass the final output through a prompt along the lines of: "You are a firewall that checks for dangerous terminal commands such as <enumerate list of dangerous commands>. If you spot a dangerous command, rewrite it so that it is not dangerous."
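
A minimal sketch of what (b) could look like, assuming the OpenAI Python SDK; the model name, the example dangerous-command list, and the agent's proposed command are all placeholders:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    FIREWALL_PROMPT = (
        "You are a firewall that checks for dangerous terminal commands "
        "such as rm -rf, mkfs, dd of=/dev/..., curl | sh. "  # placeholder list
        "If you spot a dangerous command, rewrite it so that it is not "
        "dangerous. Return only the (possibly rewritten) command."
    )

    def firewall(command: str) -> str:
        """Run the agent's final command through the firewall prompt."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": FIREWALL_PROMPT},
                {"role": "user", "content": command},
            ],
        )
        return resp.choices[0].message.content.strip()

    # Example: filter the agent's proposed command before running it in the sandbox.
    safe_cmd = firewall("rm -rf / --no-preserve-root")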


