
Question about File System (Didn't know where to post)


4 replies to this topic

#1 Barnack

Barnack

  • Members
  • 91 posts
  • OFFLINE
  • Gender:Male
  • Location:Italy
  • Local time:02:07 AM

Posted 04 January 2016 - 09:19 AM

I'd really like someone to clearly explain to me the differences between the FAT32, exFAT, NTFS and EXT4 file systems as far as hard disk usage is concerned (how files are written, read and deleted, and how efficiently defragmentation works depending on the file system).

That's because on different websites I've found explanations that are usually completely different, or even completely opposite.

 

P.S. (to admins)

Since I didn't know where to post the question, feel free to move it to the proper location. Thanks!


Edited by Barnack, 04 January 2016 - 09:20 AM.



#2 Naught McNoone

Naught McNoone

  • Members
  • 308 posts
  • OFFLINE
  • Gender:Male
  • Location:The Great White North
  • Local time:08:07 PM

Posted 04 January 2016 - 05:39 PM

. . . the differences between FAT32, exFAT, NTFS and EXT4 . . . how defragmentation acts . . . 

 

Barnack,

 

The easiest plain-language source on these that I have found is Wikipedia.

 

https://en.wikipedia.org/wiki/File_Allocation_Table#Original_8-bit_FAT

https://en.wikipedia.org/wiki/NTFS

https://en.wikipedia.org/wiki/Extended_file_system

 

As for an explanation of de-fragmentation, that may take some more doing.

 

Defragging comes to us from the old Winchester, MFM, and RLL drives of ancient times.  (Current events for me!) ;)

 

In those days you had to set up HDDs manually.  IDE (Integrated Drive Electronics) changed all of that, but some of the old tools seem to have hung around.

 

One of the things you had to do when low-level formatting an old drive was set the drive interleave.  This was to match the rotation speed of the spindle with the speed at which the controller could process sectors passing under the read/write head.

This would give you the fastest read/write time on that particular drive.  Every make and model was different.
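
For illustration only, here is a toy Python sketch of what an interleave factor does: it spaces consecutive logical sectors N physical slots apart around the track, so the controller has time to finish one sector before the next one rotates under the head. The 17 sectors per track and the factors are just example values, not tied to any particular drive:

# Toy model: lay out logical sectors around one track with a given
# interleave factor. Not any real drive's geometry.

def interleave_layout(sectors_per_track, factor):
    track = [None] * sectors_per_track
    slot = 0
    for logical in range(sectors_per_track):
        # Skip forward to the next empty slot if this one is taken.
        while track[slot % sectors_per_track] is not None:
            slot += 1
        track[slot % sectors_per_track] = logical
        slot += factor
    return track

# 1:1 interleave: 0 1 2 3 ... (fastest, if the controller can keep up)
print(interleave_layout(17, 1))
# 3:1 interleave: consecutive logical sectors sit three slots apart
print(interleave_layout(17, 3))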

 

Fragmenting occurred when files changed size on a regular basis.  Take a database as an example.  Most database files grow, and grow, and grow.  The original file does not take long to exceed the size of the sector(s) it was first written to.  So the file is split into several parts, all located in different parts of the HDD.

 

Those parts do not necessarily mean that the file is "fragmented."  As long as the parts are sequential, then there should be no noticeable difference in the speed at which the file is read into memory.

 

Now, as the HDD gets used and filled up, old files deleted and new files added, the free sectors get scattered all over the disk.  The file system keeps track of which are the oldest unused sectors, and tries to use those up first.  (That is why you can use things like un-delete and shadow copy to recover old files!)

 

The result is that the spots on the disk where the large files are stored are no longer sequential.  The heads have to do more work to get the file loaded into memory.  So, now the file takes longer to load.
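
To make that concrete, here is a toy Python sketch of the effect. It is a first-fit allocator invented for illustration, not how FAT32, NTFS or EXT4 actually allocate, but the delete-then-grow pattern plays out the same way in miniature:

disk = [None] * 16   # a tiny "disk" of 16 blocks

def allocate(name, nblocks):
    # First-fit: grab the first free blocks we find.
    placed = []
    for i in range(len(disk)):
        if disk[i] is None:
            disk[i] = name
            placed.append(i)
            if len(placed) == nblocks:
                break
    return placed

def delete(name):
    for i in range(len(disk)):
        if disk[i] == name:
            disk[i] = None

allocate("A", 4)            # A takes blocks 0-3
allocate("B", 4)            # B takes blocks 4-7
allocate("C", 4)            # C takes blocks 8-11
delete("B")                 # deleting B punches a hole at blocks 4-7
allocate("D", 2)            # D reuses part of the hole: blocks 4-5
print(allocate("A", 3))     # A grows -> [6, 7, 12]: no longer sequential

File A now lives at blocks 0-3, 6-7 and 12, so the heads must seek past D and C to read it all: that is fragmentation.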

 

That is what made correct drive geometry, especially interleave, important.  It reduced the seek time, because the physical arm had to move less.

On older drives, where the interleave was wrong (more common than you would think), defragging had to be done often.

 

As drives got larger, systems improved, and interleave became automatic (IDE did that), defragging became more and more redundant.  In most cases, with modern SATA drives, you should never have to defrag a drive.  Some techs will tell you to do it, but I have never found any appreciable increase in read/write speed on the newer drives after defragging.  I imagine it would be a complete waste of time on an SSD.

 

BTW, the fastest way to defrag a drive: copy the files to a backup drive, format the original, then copy the files back.  All the files will then be written sequentially.
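
If you wanted to script the copy-off/copy-back part, a minimal Python sketch might look like this. SOURCE and BACKUP are hypothetical paths, and the format step in the middle is deliberately left to the operating system's own tools:

import shutil

SOURCE = "D:/"            # hypothetical drive to defrag
BACKUP = "E:/defrag_tmp"  # hypothetical backup location

# 1. Copy everything off the drive.
shutil.copytree(SOURCE, BACKUP)

# 2. Format SOURCE here with the OS's own tools --
#    intentionally not scripted in this sketch.

# 3. Copy everything back; the files land sequentially
#    on the freshly formatted, empty drive.
shutil.copytree(BACKUP, SOURCE, dirs_exist_ok=True)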

 

Cheers!

 

Naught



#3 ranchhand_

ranchhand_

  • Members
  • 1,710 posts
  • OFFLINE
  • Gender:Male
  • Location:Midwest
  • Local time:06:07 PM

Posted 04 January 2016 - 07:01 PM

Good quick overview given by Naught.

In addition, it's important to include the new SSDs that are rapidly replacing standard HDDs. As Naught mentioned above, HDDs will, over time, fragment clusters, and the drive will take longer to find and amass information because the heads must physically seek to each scattered cluster, so defragging occasionally is good. Not so with SSDs, since their memory access is random: a standard defrag will not help performance and will reduce the life of the drive, so you do not want to perform one on an SSD. The closest equivalent for an SSD (not actually necessary in day-to-day usage) is the TRIM command, which tells the drive which blocks are no longer in use. I believe (I stand to be corrected if I am wrong!) that Windows 7, 8 and 10 issue the TRIM command automatically when an SSD is detected.

Just a thought...


Help Requests: If there is no reply after 3 days I remove the thread from my answer list. For further help PM me.


#4 Demonslay335

Demonslay335

    Ransomware Hunter


  • Security Colleague
  • 3,561 posts
  • OFFLINE
  • Gender:Male
  • Location:USA
  • Local time:07:07 PM

Posted 04 January 2016 - 07:05 PM

. . . I believe (I stand to be corrected if I am wrong!) that Windows 7, 8 and 10 issue the TRIM command automatically when an SSD is detected . . .

 

You are correct. There are rare occasions where Windows will guess incorrectly and not enable TRIM on install, typically if it doesn't have the right driver or something.

 

You can verify TRIM is enabled on your installation with the following command:

fsutil behavior query disabledeletenotify

There will be two possible outputs:

DisableDeleteNotify = 1 ::TRIM support disabled

DisableDeleteNotify = 0 ::TRIM support enabled
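
If you'd rather check it programmatically, here is a minimal Python sketch that wraps that same command. It assumes a Windows machine with fsutil on the PATH, and that the output contains the "DisableDeleteNotify = 0/1" line shown above (newer Windows versions may print more than one line):

import subprocess

# Query Windows' delete-notification (TRIM) setting via fsutil.
result = subprocess.run(
    ["fsutil", "behavior", "query", "disabledeletenotify"],
    capture_output=True, text=True, check=True,
)

# "DisableDeleteNotify = 0" means TRIM support is enabled.
if "= 0" in result.stdout:
    print("TRIM support enabled")
else:
    print("TRIM support disabled")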

Edited by Demonslay335, 04 January 2016 - 07:06 PM.

ID Ransomware - Identify What Ransomware Encrypted Your Files [Support Topic]

RansomNoteCleaner - Remove Ransom Notes Left Behind [Support Topic]

CryptoSearch - Find Files Encrypted by Ransomware [Support Topic]

If I have helped you and you wish to support my ransomware fighting, you may support me here.


#5 Barnack

Barnack
  • Topic Starter

  • Members
  • 91 posts
  • OFFLINE
  • Gender:Male
  • Location:Italy
  • Local time:02:07 AM

Posted 10 January 2016 - 10:28 AM

Thanks for the answers; anyway, I was asking specifically about how files are written/read/modified in the disk chunks...

 

. . . Fragmenting occurred when files changed size on a regular basis. . . . The result is that the spots on the disk where the large files are stored are no longer sequential.  The heads have to do more work to get the file loaded into memory.  So, now the file takes longer to load. . . .

 

That's something I already knew; what I'm asking about is HOW this works at the chunk level, and how the previously mentioned file systems differ in handling that.

Sorry if I didn't explain my question correctly.





