Software always requires an interface (even if only for administration), and we expect certain qualities from its users: usually knowledge of the problem at hand and knowledge of the system that solves it. Often we also, without realizing it, expect the user not to be an idiot.
I was in Professor Hughes's class when he said this. It was in response to a student who asked how robust network protocols should be.
For a protocol to be of any use, its users have to do exactly what it tells them to do; we expect a certain amount of cooperation from a network's users. In that sense, a stupid (or, more likely, malicious) user may well be able to break the network. This is, of course, not only true of network protocols.
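To make that concrete, here is a rough sketch of a server that expects cooperation but still refuses to rely on it. The line-based protocol, its command set, and the function names are all invented for the sake of illustration, not taken from any real protocol:

```python
# Hypothetical line-based protocol; the command set and names are made up
# purely for illustration. The point: the server never assumes the client
# actually follows the protocol.

ALLOWED_COMMANDS = {"PING", "GET", "PUT"}

def handle_line(raw: bytes) -> bytes:
    """Parse one request line, rejecting anything the protocol doesn't define."""
    try:
        text = raw.decode("ascii").strip()
    except UnicodeDecodeError:
        return b"ERR malformed encoding\n"

    command = text.split(" ", 1)[0].upper()
    if command not in ALLOWED_COMMANDS:
        # A non-cooperating (careless or malicious) client ends up here
        # instead of crashing the server or corrupting its state.
        return b"ERR unknown command\n"
    return b"OK " + command.encode("ascii") + b"\n"
```

The interesting part is not the happy path but the two early returns: they are where the "expected cooperation" quietly stops being an assumption.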
The question at hand is about how far we should go in fool-proofing a system. Obviously a monitoring system at a nuclear plant should be more fool-proof than a music player, but there's no clear all-encompassing heuristic, is there?
Given that it's nearly impossible to build a completely fool-proof system, I personally think we should keep fool-proofing until no user can do any real harm. What we really want is to give users an incentive not to break the system and to remove any incentive to break it. A user should never gain anything from breaking the system, for example because it renders itself unusable when they try. To a truly 'stupid' (non-malicious) user that makes no difference, which is why we also have to make sure that a user can never do any harm simply by using the system.
Ideally, we end up with a system that ...
... works as intended for a user who uses it correctly,
... is rendered useless when a user tries to break it, and
... stops the user from doing something 'stupid' in the first place (a rough sketch of what this could look like follows below).
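As a toy illustration of those three properties, consider a heater controller with a safe temperature range. Everything here, the class, the names, the limits, is an assumption invented for the sketch; real fool-proofing is obviously more involved:

```python
class HeaterController:
    """Toy heater controller; all names and limits here are made up."""

    SAFE_RANGE = (5.0, 30.0)   # values a 'correct' user would reasonably pick
    HARD_LIMIT = 90.0          # anything beyond this looks like deliberate abuse

    def __init__(self) -> None:
        self.target = 20.0
        self.locked = False

    def set_target(self, value: float) -> str:
        if self.locked:
            # Once someone has tried to break it, the controller stays useless.
            return "controller locked"
        if abs(value) > self.HARD_LIMIT:
            # Trying to break the system renders it unusable for that user.
            self.locked = True
            return "request rejected; controller locked"
        low, high = self.SAFE_RANGE
        if not (low <= value <= high):
            # A 'stupid' but harmless request is clamped to a safe value.
            self.target = min(max(value, low), high)
            return f"value out of range; clamped to {self.target}"
        # Correct use works exactly as intended.
        self.target = value
        return f"target set to {self.target}"
```

Correct use succeeds, a clumsy request is defused rather than obeyed, and an abusive request leaves the abuser with nothing to gain.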
At that point, I think it's safe to go on to the next task.