Keep in mind that smartfail rebuilds protection BEFORE the affected disk or node is removed from the cluster. It is meant as a 'graceful decommissioning': you never end up under-protected.
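For reference, a smartfail can be kicked off from the CLI roughly like this. The exact syntax varies between OneFS releases, so treat this as a sketch with placeholder values and confirm with `isi devices --help` on your cluster:

```shell
# Smartfail a single drive: protection is rebuilt first,
# then the drive is removed from the cluster.
# <bay> and <lnn> are placeholders for the drive bay and node logical number.
isi devices drive smartfail <bay> --node-lnn <lnn>

# Smartfail an entire node:
isi devices node smartfail --node-lnn <lnn>
```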
If you want to verify that your cluster's protection actually works as intended against data loss from sudden component failures, you need to simulate such a failure. Now, I wouldn't suggest pulling a disk from a powered-on node on a system under support. But you can of course power down a single node and see whether the system behaves as expected. If your system is not yet in production, power down a second node as well to get some experience with a situation where +2d:1n protection is not sufficient and some data becomes unavailable. Then watch how things clear up once at least one of the nodes is brought back online.
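While a node is down, you can watch how the cluster reports the degraded state from the CLI. A quick sketch (the file path is just a hypothetical example):

```shell
# Overall cluster health and per-node status:
isi status

# Requested vs. actual protection of a given file
# (path is a placeholder -- pick any file on /ifs):
isi get /ifs/data/somefile
```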
To simulate one or more failed disks, remove the disk(s) from a powered-down node and then power the node back up.
Alternatively, you can use the 'stopfail' feature for disks or nodes (same syntax as smartfail). HOWEVER: unlike smartfail, stopfail takes the affected disk or node down immediately (as in an actual hardware or power failure), and only AFTER that is protection rebuilt.
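Again as a sketch with placeholder values (verify the exact syntax for your OneFS version with `--help` before running anything):

```shell
# Stopfail takes the drive or node down immediately;
# the protection rebuild happens only afterwards.
isi devices drive stopfail <bay> --node-lnn <lnn>
isi devices node stopfail --node-lnn <lnn>
```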
In either case, if your cluster is under support (and 'phones home' on critical events), check with support before attempting these maneuvers. FWIW, performing them is highly instructive and will give you confidence for running OneFS in production.