
unlike Windows, the more one runs their Linux OS, the faster it gets


5 replies to this topic

#1 Al1000


  • Global Moderator
  • 7,396 posts
  • OFFLINE
  •  
  • Gender:Male
  • Location:Scotland
  • Local time:05:42 AM

Posted 09 February 2015 - 07:19 AM

Can anyone explain how this happens? I've heard this before, but wonder if it's a myth.

I would guess that Linux doesn't slow down the more one runs it, like Windows does, because there is no registry in Linux. But if anything, surely installing updates and other software would cause it to slow down, even if not noticeably? It seems to me that, certainly on an HDD, the more space an OS takes up on a partition, the further the arm with the read/write head on it has to move to read from and write to directories on the drive.

Kubuntu 14.04 is the OS I have had installed the longest, since not long after it was released, and I haven't noticed it getting faster. It takes quite a bit longer to boot up than Mint 17.1 Xfce, which I have installed on another partition on the same computer, but then Kubuntu is now over 12 GiB because of all the software I've installed on it, whereas Mint is less than half that size because I only installed it recently and have installed hardly any software on it.

Kubuntu 14.04 has some process (python3, I think) which runs at 100% on one core for a few seconds every time it boots up. Recently, after it does this, the same process has started running on the other CPU core for a few seconds as well, which it didn't use to do. So I'm guessing this process is designed to use one core for a few seconds and then switch to another if it hasn't finished whatever it does, and the reason I'm seeing this now is that the process is taking longer than it used to.

So I would be interested to hear, if anyone here knows, how it is that Linux is supposed to get faster the more one uses it.

Edited by Al1000, 09 February 2015 - 07:21 AM.



 


#2 bmike1


  • Members
  • 596 posts
  • OFFLINE
  •  
  • Gender:Male
  • Location:Gainesville, Florida, USA
  • Local time:11:42 PM

Posted 09 February 2015 - 12:25 PM

"the more one runs their Linux OS, the faster it gets"

I've only heard that on this website. In my experience it is not true. The only thing I will agree with is that it does not slow down due to viruses and the like.


Edited by bmike1, 09 February 2015 - 12:28 PM.

A/V Software? I don't need A/V software. I've run Linux since '98 w/o A/V software and have never had a virus. I never even had a firewall until '01, when I began to get routers with firewalls pre-installed. With Linux, if a vulnerability is detected, a fix is quickly found, and upon your next update the vulnerability is patched. If you must worry about viruses on a Linux system, only worry about them in the sense that you can infect a Windows user. I recommend Linux Mint or, if you need a lighter-weight operating system that fits on a CD, MX14 or AntiX.


#3 Guest_hollowface_*


  • Guests
  • OFFLINE
  •  

Posted 10 February 2015 - 01:40 AM

unlike Windows, the more one runs their Linux OS, the faster it gets


This hasn't been my experience (I find things slow down over time), but if it were true then, in my eyes, it would be a poor design; I believe an OS should be made to run as fast as possible initially (given what it is offering) and attempt to maintain that speed over time.

#4 mremski


  • Members
  • 493 posts
  • OFFLINE
  •  
  • Gender:Male
  • Location:NH

Posted 10 February 2015 - 09:54 AM

The original statement of "the more one runs Linux the faster it gets" is a bit odd. Linux will aggressively try to keep as much information in RAM as possible (the page cache and kernel buffers), because accessing RAM is far faster than accessing a disk. So if you keep the system booted with a relatively constant usage profile (same programs used, etc.), more and more information will be stored in kernel buffers. This means that loading the same program will happen from these buffers in memory instead of having to be read from disk. Not sure how Windows does things, but it sure seems to go out to disk a lot more often. If you start to approach 100% memory usage, the OS has to do more work switching things around as they become runnable (context switches).
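A quick way to see the caching effect mremski describes is to time reading the same file twice: the second read is normally served from the kernel's page cache rather than from disk. A minimal sketch, assuming you run it on a Linux box and point it at some reasonably large file (the path below is only an example):

# page_cache_demo.py - rough illustration of the Linux page cache effect
import time

PATH = "/usr/bin/python3"  # example path; any large-ish readable file will do

def timed_read(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()  # pull the whole file through the VFS
    return time.perf_counter() - start, len(data)

first, size = timed_read(PATH)   # may have to hit the disk (cold cache)
second, _ = timed_read(PATH)     # usually served from RAM (warm cache)
print("read %d bytes: first %.2f ms, second %.2f ms"
      % (size, first * 1000, second * 1000))

On a machine that has been up for a while, even the first read is often already cached, which is exactly the "more and more information stored in kernel buffers" behaviour.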

 

Multiple cores in a CPU: this lets the OS partition work across them. This is often a good thing, but now instead of running multiple code segments on a single core you have to worry about scheduling runnable entities across multiple cores. Toss in memory coherency and cache synchronization across multiple cores and you're in for a fun time at the OS level. Multiple cores and multiple threads give the overall appearance of doing more, but context switches have a cost. On a computer, programs are often waiting on a resource: disk drive, network socket, keyboard. When you hit a key, typically an interrupt is generated which must be handled in a very short timeframe (interrupt-bound CPUs are useless).
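Context switches are also something you can watch directly on Linux: the kernel keeps a running total in /proc/stat. A small sketch, assuming a Linux system (the ctxt field is the one documented in proc(5)):

# ctxt_rate.py - sample the kernel's total context-switch counter
import time

def read_ctxt():
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line in /proc/stat")

before = read_ctxt()
time.sleep(1)
after = read_ctxt()
print("~%d context switches in the last second" % (after - before))

Even a fairly idle desktop will usually show hundreds or thousands of switches per second, which gives a feel for how busy the scheduler is behind the scenes.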

 

How fast a system boots is a quantity a user can measure; how "fast" something feels in day-to-day use is often very subjective.


FreeBSD since 3.3, only time I touch Windows is to fix my wife's computer


#5 cat1092


    Bleeping Cat


  • BC Advisor
  • 6,998 posts
  • OFFLINE
  •  
  • Gender:Male
  • Location:North Carolina, USA
  • Local time:12:42 AM

Posted 11 February 2015 - 04:45 AM

This topic, I believe, came from a statement I've made on many occasions in the past three years or so. None of it is fact that I can prove with measurement tools, but it has been real-life experience for me, which counts for something, and there have been some who agreed with me. I'll try to explain.

 

First, let's begin with Windows. Though NTFS was an improvement over FAT32, it's still, compared to Linux filesystems, by far inferior. Here's why. The more one runs Windows, even if temp files are kept cleaned off the drive (recommended), it slows down after 4-6 months & it will be noticeable, much more so on a spinner drive than on an SSD. One huge reason, and again this comes down to the filesystem, is fragmentation of files. You can't avoid it on a spinner, but you can on an SSD to an extent. This is because in practice an SSD will clean & reclaim unused blocks in realtime, whereas with a spinner that doesn't happen. Even on an SSD, it'll marginally begin to slow as more software is added.

 

The end result is an OS that will need a reinstall in 12-18 months to keep its like-new speed. Maybe sooner, if the user installs/uninstalls a lot of software.
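As an aside on the "clean & reclaim unused blocks" point: that is the SSD's TRIM/discard support, and on Linux you can at least check whether a drive reports it. A rough sketch, assuming the sysfs layout used by current kernels (a nonzero discard_granularity generally means the device accepts discard requests):

# trim_support.py - list block devices that report TRIM/discard support
import glob

for path in glob.glob("/sys/block/*/queue/discard_granularity"):
    dev = path.split("/")[3]  # e.g. sda, nvme0n1
    with open(path) as f:
        granularity = int(f.read().strip())
    print("%s: discard %s" % (dev, "supported" if granularity > 0 else "not reported"))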

 

Now let's get to Linux & why it can be run for years and never needs reinstalling (unless it's broken, needs upgrading, or you want to), and again it's because of the filesystem. It doesn't get fragmented, because files are placed properly from the start. The result is an OS that gets faster (up to a point) the more it's run. While I've read many forum topics & other articles over the years that cover the slowing of Windows OS's, I can't recall reading any about a Linux OS doing the same. And yes, my Linux installs, all of them, run faster than when they were installed. Even the boot process, where Windows has the advantage: being that I use EasyBCD 2.2 as the bootloader, when I choose Mint 17.1, even though it still goes through the GRUB process, it boots in nearly the same time as Windows.

 

That is a feat in itself, having to travel through two bootloaders, yet firing up nearly as fast as any of my Windows OS's. When everything was newly installed, this wasn't the case. The ext4 filesystem is a superior one, and here are the benchmarks to prove it.

 

http://phoronix.com/forums/showthread.php?1765-PERFORMANCE-OF-FILESYSTEMS-COMPARED-%28includes-Reiser4-and-Ext4%29.

 

Of course, with increased speeds there's always the risk of data loss, however I've not had that happen to me as of this writing. Here's an article that covers ext4 well, which includes the benefits of the filesystem as well as its limitations.

 

http://en.wikipedia.org/wiki/Ext4
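For anyone who wants to spot-check the fragmentation side of this on their own ext4 install rather than take it on faith, the filefrag tool from e2fsprogs reports how many extents a file occupies; a file in a single extent is completely contiguous. A small wrapper sketch, assuming filefrag is installed and you pass readable file paths on the command line (this only illustrates the idea, it is not a benchmark):

# extent_check.py - report extent counts for files via filefrag (e2fsprogs)
# usage: python3 extent_check.py FILE [FILE ...]
import subprocess
import sys

for path in sys.argv[1:]:
    # filefrag prints a summary such as: "/path/to/file: 3 extents found"
    result = subprocess.run(["filefrag", path], capture_output=True, text=True)
    if result.returncode != 0:
        print("%s: %s" % (path, result.stderr.strip()))
    else:
        print(result.stdout.strip())

On a healthy ext4 system most small files report a single extent, which is the "proper placement of files" being described above.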

 

Being that I'm running both Linux & Windows, it's hard for me to nail down the speeds of both, but I have to say this. Unlike Windows, Linux doesn't get slower as it's used. If anything, it gets a bit faster over time, though it doesn't get faster each day. It's either that, or...

 

Windows gets slower each day, making it seem as if Linux is getting faster. As long as Linux runs fast, I don't care how it gets done.

 

Hopefully others will chip in on this discussion; there have been others who told me the same or agreed with me, though they're not members of this forum.

 

What do you think?

 

Cat


Edited by cat1092, 11 February 2015 - 04:46 AM.

Performing full disk images weekly and keeping important data off the 'C' drive as it is generated can be the best defence against Malware/Ransomware attacks, as well as a wide range of other issues.


#6 mremski


  • Members
  • 493 posts
  • OFFLINE
  •  
  • Gender:Male
  • Location:NH
  • Local time:12:42 AM

Posted 11 February 2015 - 07:06 AM

Yes, filesystems can make a huge difference; that's why there are so many choices in the *nix world. As soon as one defines a block size, you're going to wind up with a file being stored in more than one block (assuming the file size is greater than the block size). How this is done affects how fast files are read (and written). The type of media also makes a difference, and access patterns can too. Lots of brainy people have spent lots of time thinking and writing about this stuff (google BSD filesystems). Newer tech like SSDs tosses in other fun stuff: they act like a hard drive, but they are flash technology, which has some inherent limitations (google erase blocks). The stuff in front of the flash hides a lot from you and does things like automatic wear leveling, bad-block reassignment, etc.
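The block-size point is easy to see with a short script: stat reports a file's logical size, the filesystem's preferred I/O block size, and how much space is actually allocated (st_blocks is always counted in 512-byte units). A minimal sketch for Linux/Unix; the default path is only an example:

# block_usage.py - show how a file's size maps onto allocated blocks
import os
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "/etc/hostname"  # example path

st = os.stat(path)
print("logical size : %d bytes" % st.st_size)
print("fs block size: %d bytes" % st.st_blksize)
# st_blocks is in 512-byte units regardless of the filesystem block size
print("allocated    : %d bytes (%d x 512-byte units)" % (st.st_blocks * 512, st.st_blocks))

A one-byte file on a filesystem with 4 KiB blocks will typically show 4096 bytes allocated, and anything larger than one block necessarily spans several, which is where allocation strategy and fragmentation come in.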

 

Windows has always had issues with filesystems, some real, some imagined, some more theoretical. I'm not a Windows guy, so I'm going by what professor Google shows. Fragmentation (defined as files occupying noncontiguous blocks on the media) is often unavoidable on filesystems; how the OS deals with it determines performance. One could avoid fragmentation by always reallocating contiguous blocks to cover the file, but performance would not be too good (you'd basically be copying the file every time it grew larger).


FreeBSD since 3.3, only time I touch Windows is to fix my wife's computer




