Maybe I only have partial information. When Storage Spaces came out I did a lot of testing on how it behaves when parts of the data go bad.
I tested: pulling a disk, cutting the power cord, all while write operations were running, manipulating sectors on the disks while the VM was running/not running, and so on. Every scenario that can realistically happen. None of these things could really harm the volume. Sure, it went degraded, but rebuilding was easy and always correct. No corrupt files. I tested with mirrored volumes and ReFS.
I tested bad-sector handling with the following scenario:
Virtual disks, mounted in VMware with a paravirtual controller, so the same disk could be attached to different VMs that were powered on at the same time.
First I added the disk to a Windows Server 2012 VM and produced some data on a Storage Spaces volume.
Next I also added the disk to a second VM and modified sectors of the disk directly with a hex editor (this normally does not work on a modern OS, since direct hardware access is locked, so I used XP, if I remember correctly). On the next storage checkup the bad sector was corrected with the data from its mirror partner. I have no idea how Storage Spaces really knows which copy is correct, but it always took the right one. What was interesting: manipulated NTFS volumes were also corrected the right way. It seems a lot of this is not bound to the filesystem itself but to Storage Spaces.
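One plausible mechanism for "knowing which copy is correct" is per-extent checksums (ReFS integrity streams work this way: a stored checksum is validated against each copy, and the copy that still matches wins). This is a conceptual sketch of that idea, not Microsoft's actual implementation; the function name and CRC32 choice are my own illustration:

```python
import zlib

def scrub_mirror(copy_a: bytes, copy_b: bytes,
                 checksum_a: int, checksum_b: int) -> bytes:
    """Pick the mirror copy whose stored checksum still matches its data.

    Conceptual sketch: on a mismatch, the copy that still validates is
    treated as good and would be used to overwrite the corrupted partner.
    """
    a_ok = zlib.crc32(copy_a) == checksum_a
    b_ok = zlib.crc32(copy_b) == checksum_b
    if a_ok and b_ok:
        return copy_a              # both clean, nothing to repair
    if a_ok:
        return copy_a              # copy B was silently corrupted
    if b_ok:
        return copy_b              # copy A was silently corrupted
    raise IOError("both copies fail their checksum; unrecoverable")

# Simulate a hex editor flipping a byte in one copy:
good = b"important data"
stored_crc = zlib.crc32(good)
bad = b"important dXta"
print(scrub_mirror(good, bad, stored_crc, stored_crc))  # b'important data'
```

With a scheme like this, the scrubber never has to guess: the manipulated sector simply fails its checksum, no matter which disk it sits on.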
What was also interesting is that even NTFS was not corrupt when I pulled the power cord and brought everything down during write operations. On the other hand, the same test with an NTFS volume directly on the disk caused problems. To me it seems an NTFS volume on Storage Spaces also benefits from some of the ReFS enhancements. But I did not run these full-power-down tests that often with NTFS; the first goal was to stress ReFS volumes to the max, so it could also have been some kind of luck. I did the same tests later with deduplication on. No change at all. In my opinion Storage Spaces has been bloody robust since the first hour.
Edit: That's what makes 3-way mirror so cool: Storage Spaces can handle such silent failures correctly too, so a full disk failure plus some bad sectors that were not found at the right moment on the mirror partner.
What I do not know is how a physical bad sector behaves. If it works the same way as a logically modified sector, then Storage Spaces will handle it correctly.
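The 3-way case can also be pictured without checksums at all: with three copies, two intact copies outvote a single silently corrupted one. This is only a toy majority-vote illustration of why the third copy helps, not a claim about how Storage Spaces actually arbitrates; the function is hypothetical:

```python
from collections import Counter

def vote_three_way(copies: list[bytes]) -> bytes:
    """Majority vote across the copies of a 3-way mirror.

    Toy illustration: if at least two copies agree byte-for-byte,
    the agreeing value is taken as the good data.
    """
    value, n = Counter(copies).most_common(1)[0]
    if n < 2:
        raise IOError("no two copies agree; cannot decide")
    return value

# One copy suffers a silent flip; the other two still agree:
print(vote_three_way([b"sector", b"sector", b"seXtor"]))  # b'sector'
```

A plain 2-way mirror without checksums has no such tiebreaker, which is exactly the scenario where the extra copy pays off.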
These tests were the reason I switched heavily from hardware RAID controllers to software RAID based on Storage Spaces, right from the beginning when it came out. For the first two years I used it just for VDI and some high-performance things, to have a real-life test. No problems at all. ESXi > Windows as storage appliance > NFS > ESXi on the same machine (with NTFS, as ReFS file IDs are too long to serve over NFS; you would need an NFS server with its own file table). It was enormously fast and blew my SAN away because of low latency and SSDs. Now with NVMe it is even faster. =)
The only thing that was really bad and really sucked was the "full disk" bug of ReFS, when it ran out of space. That was a real pain. But I think those problems are mostly gone today. Before, it could always happen, because it is not the total free space that counts. But that is a different thing from what you asked.
EDIT: Well, there is another thing too. When rebuilding with a disk that comes back and still has data on it, it is not 100% sure that the data which was always online is taken as the "reliable" copy. I personally never had this problem, but other people have.
I've done tests with high-end SSDs and also with ordinary disks. Well, for ordinary disks, write operations are a real pain because of the lack of a good and, especially, working write buffer. So I would always go with high-quality SSDs with a really good average and a low maximum latency for Storage Spaces.