No Data Corruption & Data Integrity
What does the 'No Data Corruption & Data Integrity' slogan mean to each hosting account owner?
Data corruption is the process of files getting damaged due to some hardware or software failure, and it is among the main problems that web hosting companies face, since the larger a hard disk drive is and the more information is kept on it, the more likely it is for data to become corrupted. There are various fail-safes, yet the information often gets damaged silently, so neither the file system nor the administrators notice anything. As a result, a corrupted file is handled as a regular one, and if the hard disk is part of a RAID, that particular file is duplicated on all the other drives. In principle this is done for redundancy, but in practice the damage only gets worse. Once a file gets damaged, it becomes partly or entirely unreadable: a text document will no longer open, an image file will show a random blend of colors if it opens at all, and an archive will be impossible to unpack, so you risk losing your content. Although the most widely used server file systems include various consistency checks, they often fail to detect a problem early enough, or they need a long period of time to check all the files, and the web hosting server will not be functional in the meantime.
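Conceptually, the only reliable way to notice such a silent change is to compare the current contents of a file with a fingerprint recorded when the file was written, because the file's size and modification date can still look perfectly normal. The following is a minimal Python sketch of that idea; it is purely illustrative and not the code of any real file system, and the names fingerprint and is_corrupted are made up for the example.

    import hashlib
    from pathlib import Path

    def fingerprint(path: Path) -> str:
        # Compute the SHA-256 digest of the file's contents in 64 KB chunks.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def is_corrupted(path: Path, stored_digest: str) -> bool:
        # A file whose current digest differs from the digest recorded when
        # it was written has been silently altered, even if its size and
        # modification time look unchanged.
        return fingerprint(path) != stored_digest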
-
No Data Corruption & Data Integrity in Shared Hosting
We guarantee the integrity of the information uploaded to every single shared hosting account created on our cloud platform, since we use the advanced ZFS file system. ZFS is one of the few file systems designed to avert silent data corruption by keeping a unique checksum for each and every file. We store your information on a large number of NVMe drives that work in a RAID, so the same files exist in several places at the same time. ZFS verifies the digital fingerprint of all files on all drives in real time, and if the checksum of any file differs from what it should be, the file system replaces that file with an undamaged copy from another drive in the RAID. Conventional file systems do not perform this kind of real-time checksum verification, so data can get silently damaged and the bad file can be replicated to all drives over time, but since this cannot happen on a server running ZFS, you do not have to concern yourself with the integrity of your information.
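To make the mechanism described above more concrete, here is a deliberately simplified Python sketch of such a verification sweep: every file on every mirrored drive is compared against the checksum recorded when it was written, and any copy that no longer matches is overwritten from an intact one. The names scrub, mirrors and checksum_table are hypothetical, and real ZFS performs this per data block inside the file system rather than per file in user space, so treat it as a model of the idea rather than the actual implementation.

    import hashlib
    import shutil
    from pathlib import Path

    def digest(path: Path) -> str:
        # Fingerprint of a single copy of a file.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def scrub(mirrors: list[Path], checksum_table: dict[str, str]) -> None:
        # Walk every file on every mirrored "drive" (modeled here as a
        # directory), compare each copy with the recorded fingerprint and
        # overwrite any damaged copy with data from an intact one.
        for name, stored in checksum_table.items():
            copies = [mirror / name for mirror in mirrors]
            good = next((c for c in copies if digest(c) == stored), None)
            if good is None:
                raise RuntimeError(f"{name}: no intact copy left to repair from")
            for copy in copies:
                if digest(copy) != stored:
                    shutil.copyfile(good, copy)  # replace the corrupted copy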
-
No Data Corruption & Data Integrity in Semi-dedicated Servers
We have eliminated the possibility of files getting corrupted silently, because the servers where your semi-dedicated server account will be created use a powerful file system called ZFS. Its advantage over most other file systems is that it keeps a unique checksum for every single file, a digital fingerprint that is verified in real time. As we store all content on multiple NVMe drives, ZFS checks whether the fingerprint of a file on one drive matches the fingerprint on the other drives and the one it has recorded. If there is a mismatch, the damaged copy is replaced with a healthy one from one of the other drives, and since this happens in real time, there is no chance for a damaged copy to remain on our servers or to be duplicated to the other drives in the RAID. Most other file systems do not employ such checks, and even a full file system check after an unexpected power outage will not reveal silently corrupted files. In contrast, ZFS remains consistent after a power failure, and its continual checksum monitoring makes a lengthy file system check unnecessary.
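As a rough model of what verification in real time means, the sketch below checks the fingerprint on every read, serves the data from a copy that still matches and repairs any copy that does not, so a damaged copy never survives long enough to be replicated. It is illustrative Python only, not ZFS internals, and verified_read and stored_digest are made-up names.

    import hashlib
    from pathlib import Path

    def verified_read(copies: list[Path], stored_digest: str) -> bytes:
        # Read the file from its mirrored copies, trusting only data whose
        # SHA-256 digest matches the fingerprint recorded at write time.
        good_data = None
        bad_copies = []
        for copy in copies:
            data = copy.read_bytes()
            if hashlib.sha256(data).hexdigest() == stored_digest:
                if good_data is None:
                    good_data = data
            else:
                bad_copies.append(copy)
        if good_data is None:
            raise OSError("every copy is damaged, the read cannot be served")
        # Self-heal: overwrite each damaged mirror with the verified data so
        # the corruption is repaired the moment it is detected.
        for bad in bad_copies:
            bad.write_bytes(good_data)
        return good_data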