jjmspartan
November 7th, 2003 02:00
How To: Defrag your page file
Just bumped into a program I knew I had on my old computer but had seemingly forgotten about. This is an AWESOME little FREEWARE program that will help just about everyone using Win XP / 2000 / NT. It's called PageDefrag, and it makes it easy to defrag your page file.
What does defragging the page file do for you? It basically makes everything run faster. Windows uses the page file as overflow storage for RAM: when physical memory fills up, less recently used memory pages get written out to disk so the active programs can keep running. Defragging the file means that your hard drive doesn't have to seek between scattered sections whenever it reads or writes the page file, the same way that defragging the rest of your hard drive works.
Here is a quick set of instructions on how to optimize your page file. If you already have a static page file, skip to step 3.
1) Make a static page file, about 2x to 2.5x the size of your system memory, up to a max of about 1024 MB. Anything larger really won't get much use and just wastes disk space. This keeps Windows from constantly resizing your page file and likely fragmenting it. You can do this by going to Control Panel > System > Advanced > Performance > Settings > Advanced > Change (virtual memory). In the two boxes for custom size, type the same number in each box, then click the Set button. It will prompt you for a reboot, so go ahead and do it. (If you'd rather script this step, see the sketch after step 5.)
2) Defrag your hard drive. Basically, this will ensure that you have a large enough contiguous free space on your hard drive to eventually have a contiguous page file.
3) Download PageDefrag from the following address:
http://www.sysinternals.com/ntw2k/freeware/pagedefrag.shtml
4) Extract and run the program. FYI - take note of the number of fragments listed next to each file. A non-fragmented file has 1 fragment. My page file on my newer system had 17 fragments before this evening. Click the radio button to defrag at next boot, then click OK.
5) Exit Windows and reboot. During the boot-up sequence, the program will run, and you'll see it running much the way chkdsk looks when it runs. After it's done, your machine will finish booting into Windows. You may notice improved performance in everything from games, to loading applications, to starting and exiting Windows.
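For the scripting-minded, step 1 boils down to writing a "min max" pair into a single registry value. Below is a rough Python sketch of the equivalent; the PagingFiles value under Memory Management is where the NT family keeps this setting, but the script itself is just my own illustration, so for real changes the Control Panel route above is the safe one. Run it as an administrator and reboot for the change to take effect.

```python
import winreg

# Page file configuration on NT/2K/XP lives in this REG_MULTI_SZ value.
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

def get_paging_files():
    """Read the current entries, e.g. ['C:\\pagefile.sys 512 1024']."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "PagingFiles")
        return value

def set_static_paging_file(drive="C:", size_mb=1024):
    """Make the page file static by using one number for both min and max."""
    entry = f"{drive}\\pagefile.sys {size_mb} {size_mb}"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ, [entry])

print("Before:", get_paging_files())
set_static_paging_file("C:", 1024)   # 1024 MB min = max, i.e. static
print("After: ", get_paging_files())
```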
That's it. Simple, easy, and a great performance tune-up. Enjoy!
Denny Denham
November 7th, 2003 03:00
Nifty little program, and not only is the price right: at a size of 35K it's a pleasure for those of us using stone-age connections (aka dial-up) to download.
bob42
November 7th, 2003 04:00
While the defragmentation is a good thing, I have one issue with the procedure. It is NEVER a good idea to set an upper limit on virtual memory or the pagefile with Windows for normal use. Setting the minimum is fine; it will start with a file that size and make it less likely it will need to take the time to resize it. But a maximum, no, never. The operating system needs memory to operate. It first uses RAM, then uses 'virtual' RAM on the hard drive, called the pagefile or virtual memory. If you tell it that it can't have memory that it requires to operate, you get (if you're lucky) out-of-memory errors; if you're not so lucky, you may get random crashes, reboots, or lockups with no explanation. You can't ask the impossible: the memory is a necessity, and if you deny it, the machine can't function.
While your recommended 1 GB size is probably large enough to handle most situations, the danger lies with folks who may be running short on HDD space, so they may set it lower. Or video and sound editing, and even large photo files, could possibly create a need for more virtual memory than assigned, especially if several applications are open at once.
So set a good high minimum size that you'll probably seldom or never need to have increased, but don't set a maximum, it's asking for trouble.
I do have one other issue, with the 'twice or 2-1/2 times the RAM' formula. Several years ago I read about how this came about, but it's not from Windows, any version, and it's not a good formula. Think about it: if you have less RAM, you will need to use the swapfile sooner, and will need MORE space, not less, which is the opposite of what the formula gives you. I know similar formulas have been printed in magazines and elsewhere that should know better, but there is no question, it is wrong. Less RAM, larger swapfile needed; more RAM, smaller swapfile needed.
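A quick back-of-the-envelope calculation in Python shows the direction problem (the 800 MB "peak commit" figure is just a made-up workload for illustration):

```python
def swap_needed(peak_commit_mb, ram_mb):
    """The swap you actually need is whatever your workload commits
    beyond physical RAM - so less RAM means MORE swap, not less."""
    return max(0, peak_commit_mb - ram_mb)

for ram in (128, 256, 512, 1024):
    print(f"RAM {ram:4d} MB: the 2x rule says {2 * ram:4d} MB, "
          f"but an 800 MB workload needs {swap_needed(800, ram):3d} MB")
```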
One way to see what might be a good minimum setting is to watch the size of the file as you do your most demanding work. On NT/2K/XP machines it's pagefile.sys, and on Win9x/ME machines it's win386.swp, and you may have to enable system file viewing in Windows Explorer to see it. Set the minimum a healthy amount above what it grows to at the most demanding times. With large hard drives you don't have to be so precise and may choose to just make it large, but don't cripple Windows by setting a maximum on it, just in case.
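If you'd rather not eyeball Explorer, here's a quick Python sketch that automates the watching: it polls the file's on-disk size and tracks the peak. Two caveats: Windows has the file locked, but in my experience you can still read its size from the directory metadata, and the on-disk size only tells you the allocation, not how much of it is actually in use.

```python
import os
import time

# pagefile.sys on NT/2K/XP; use C:\win386.swp on Win9x/ME instead.
PAGE_FILE = r"C:\pagefile.sys"

peak_mb = 0
while True:
    size_mb = os.path.getsize(PAGE_FILE) // (1024 * 1024)
    peak_mb = max(peak_mb, size_mb)
    print(f"current: {size_mb} MB   peak so far: {peak_mb} MB")
    time.sleep(30)  # sample every half minute during your heaviest work
```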
Gary
bob42
November 7th, 2003 22:00
As long as you set it high enough to cover every possible situation, you won't get errors. But that still doesn't make it correct. Think about the logic: you're putting a limit on the memory that Windows can use. As long as it's more than it will ever use, you'll never know the difference (except having a little less hard drive space). But if it ever needs more, what will it do? It cannot run without memory; it's not as though it just uses the memory as a nice option.
So if you don't mind just making sure it's more than you'll ever need, that's fine, but it really serves no purpose beyond assuring you of a crash if you're wrong.
As I mentioned, this idea came from way back, I think perhaps one of the Unix systems, where there was some purpose for setting it as a multiple of memory, but it's been so long I've forgotten the exact explanation.
And as I also said, there is a reason for setting a minimum size that covers anything you'd get in normal use; that way you don't have Windows resizing it as the need for memory increases. With disk drives as fast as they are, it's unlikely you can really ever tell any difference, but it still makes sense to do it. The top limit, though, doesn't have any logical explanation that I'm aware of. Plus, if your peak left-alone swapfile use in normal operation is 200 MB and you set it to a GB, you're tying up 800 MB of disk space for nothing. This may not be an issue to those with large drives, but to others it may be.
Unfortunately (or maybe fortunately) psychological effects will often make something you do in hopes of increased performance really seem to work. Benchmarks are the only way to really evaluate that, and I feel safe in saying that putting an upper limit on swapfile use won't make a bit of difference in that. Kind of like washing your car makes it run better and quieter...8^).
Gary
jjmspartan
November 8th, 2003 00:00
Bob,
Although you make some interesting points, I must disagree with the main point of your argument. Setting an upper limit on the size of the page file is not bad; it's good for most users. In fact, setting a maximum on your page file can actually keep Windows from crawling to a stop when a user tries to do more than their system is capable of. Here is the simple logic.
Virtual memory (the page or swap file) is slow. It's a good place to store a large data file that you may be accessing parts of for use in a program, or to set aside data for a non-active window. But it's too slow to be used for typical in-application processing tasks - which is what physical RAM is for. While in theory it would be great to have an unlimited swap file for data use only, Windows doesn't have the discipline (and it shouldn't have) to keep open applications from being swapped into the page file. Windows will allow a user to load seemingly endless programs and their data as long as there is virtual memory available. It will let the user use every bit of physical and virtual RAM to load 5 windows of Internet Explorer, Outlook, Word, AIM, Quicken, MusicMatch Jukebox and Photoshop, along with all the data those programs are accessing.
Needless to say, if you have 15 programs and their data open, your machine will likely run very slowly. The reason is that there isn't enough physical RAM to keep all those programs loaded, so it has to move some into the swap file. Because Windows has no idea which program you will access next, it will be constantly moving those programs in and out of RAM and in and out of the swap file, to free up RAM for the active program. Then when you switch active windows, it has to move the data back from the swap file into physical RAM, and move something else out. By moving programs into the page file, it basically creates a memory logjam because of how slow the swap file is compared to physical RAM. The more programs or data chunks that are loaded into it, the slower it gets to access each chunk. And the slower it runs, the slower the entire system runs.
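To put rough numbers on that logjam, here's a toy Python simulation: uniform random access over an LRU-managed pool of RAM pages. It's a crude model of what the virtual memory manager does, so treat the exact figures as illustrative only, but the shape is the point - once the working set exceeds what fits in RAM, the fault rate (slow trips to the page file) jumps from near zero to a third or more of all accesses:

```python
import random

def fault_rate(ram_pages, working_set, accesses=10_000):
    """Fraction of accesses that miss RAM and must hit the page file."""
    ram = []                              # least recently used page at front
    faults = 0
    for _ in range(accesses):
        page = random.randrange(working_set)
        if page in ram:
            ram.remove(page)              # hit: refresh its recency
        else:
            faults += 1                   # miss: fetch from the page file
            if len(ram) >= ram_pages:
                ram.pop(0)                # evict the least recently used page
        ram.append(page)
    return faults / accesses

for ws in (50, 100, 150, 200):            # working set sizes, in pages
    print(f"working set {ws:3d} pages -> fault rate {fault_rate(100, ws):.0%}")
```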
Again, Windows XP likes to have a lot of physical RAM to run smoothly, because of its NT core and all the background services and apps that it runs constantly. When it has to swap those out of RAM and into the swap file, the whole machine slows down. That's why WinXP systems run best with 512 MB RAM and above. 512 MB is a high enough number that most programs will load, along with the entire XP OS, into physical RAM, and not have to be swapped out into the swap file under normal use. The swap file is then primarily a data vault, and there is sufficient RAM to keep from thrashing the swap file. Of course, if you open multiple programs with heavy amounts of data, you can quickly get into the same situation of swapping the programs out to the page file.
Admittedly, this is not for every user. If your primary use of your computer is for editing 10 GB video files, then by all means, a larger swap file will be useful. But outside of professional video editing or CAD systems, there are very few users who will ever need more than 1 GB swap files for their home systems. There's a reason that many web sites and publications suggest exactly what I did - a static file about 2x the size of the physical RAM. And it's as relevant now as it was when it was first suggested.
John Redcorn
November 9th, 2003 08:00
I'm sorry, but I got a chuckle from that line. Setting a 1GB page file is NOT telling Windows it can't have memory to work with.
John Redcorn
November 9th, 2003 08:00
That's logic alright, but you make it sound like people are setting their upper limits at 100MB. They are not, they are setting it much, much higher. Way more than Windows will use.
You basically cleared up your own confusion without knowing it. You mention that setting a minimum saves resizing, but then you wonder why we should set a maximum. It's for the same reason: to prevent wasted time resizing. If you are not going to set a maximum, you may as well not set a minimum either. There is nothing wrong with letting Windows decide what is best. Some of us prefer to squeeze a little more performance out of our machines, and this is why we set a min and a max.
You may not think 512MB is much memory in these days of 2GB machines, but 512 is a TON of memory for normal use. Several months ago I turned off paging to do a thorough defrag. Then I forgot to turn paging back on. For the next couple months I ran XP on just the physical 512MB. No video editing, but I use Photoshop extensively. Even AOL running with PS and XP couldn't use it all up. I never got an error or freeze. I just happened to notice at one point that paging was off. In my opinion, normal users like me will never have a problem with a fixed page file size of 1GB.
bob42
November 10th, 2003 01:00
I think you're missing the point. Yes, you can use the shotgun approach and just assign so much memory that you think it will never need more. It may never need the GB, but it also may never need 500, so why not set the minimum to 512 or 300, and if it ever needs more, you'll probably never notice the milliseconds it takes to increase the size?
The point is, you set the minimum so that your heaviest swapfile usage for normal use is covered. But if - and only if - you buy a brand new version of Photoshop and load up a 16x20 scanned photo with a few layers, and you exceed that size you've set, then instead of a BSOD, lockup, or restart, it just adds to the swapfile and goes on its way.
So yes, if you set your swap maximum higher than you'll ever possibly need, you'll be OK... but what do you accomplish with the top-end cap? We are truly talking milliseconds here in resizing the swapfile; are you really going to notice?
Or, let Windows manage your swapfile and do one extra defrag per month and you'll probably save way more time with faster disk access than you'd ever get from having to upsize the swapfile on occasion. Actually XP sets a pretty large pagefile on its own anyway.
Otherwise, use whatever you want; as long as you set it high enough, I just fail to see what you accomplish. Sometimes the tweaks that are so popular to use either do nothing or actually have an adverse effect.
Gary
jjmspartan
November 11th, 2003 00:00
Actually, you're missing the point. The original point of the message was about defragging the page file to speed up performance. It's been proven by countless users that a defragged page file works significantly faster than a fragmented one. Windows does a horrible job of fragmenting the hard drive, since every time it needs extra space, it simply grabs the next available space on the hard drive, whether that's all one piece or 157 individual fragments. When it no longer needs the space, guess what - surprise! It may report a smaller swapfile, but it still holds on to the space the previous excursion over the minimum used. The next time it goes over, it uses that space, and when it goes past that, it grabs the next chunk it needs, and so on. Over time, it eats up an increasing amount of space that is ever more fragmented - and SLOW. Fragmentation just slows it down that much more.
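Here's a toy Python model of that grab-the-next-free-space behavior. It's purely illustrative - the real NTFS allocator is smarter than first-fit - but it shows the mechanism: a pagefile that grows in small increments while other files churn around it ends up scattered across many pieces:

```python
import random

DISK_BLOCKS = 1000
disk = [None] * DISK_BLOCKS              # None = free block, else owner name

def grab_first_free(owner, count):
    """First-fit allocation: take the first free blocks found, wherever they are."""
    taken = 0
    for i, block in enumerate(disk):
        if block is None:
            disk[i] = owner
            taken += 1
            if taken == count:
                break

def fragments(owner):
    """Count contiguous runs of blocks belonging to owner."""
    runs, in_run = 0, False
    for block in disk:
        here = (block == owner)
        runs += here and not in_run
        in_run = here
    return runs

random.seed(1)
for step in range(20):
    grab_first_free(f"file{step}", random.randrange(5, 30))  # other disk churn
    victim = f"file{random.randrange(step + 1)}"             # delete an old file
    for i, block in enumerate(disk):
        if block == victim:
            disk[i] = None
    grab_first_free("pagefile", 10)                          # pagefile grows again

print("pagefile fragments:", fragments("pagefile"))
```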
Since most users of newer machines now have 80+ GB hard drives, the 1 GB of space for a swap file doesn't hurt. The static limits are set to ensure that the swap file stays one size and doesn't fragment. 1 GB is large enough for most normal home users. As mentioned before, CAD users and graphics professionals may need a larger swap file due to the amount of data they manipulate. Personally, my CAD station at work has 4 GB RAM and a 6 GB swap file, but I'm moving ridiculous amounts of data in an FEA program. But that's work, and this is home - here I have 512 MB RAM and a 1 GB swap file. I may upgrade to 1 GB RAM due to some gaming limitations I'm seeing. Every time my game has to touch the swap file, I instantly know it, because something slows down.
So why do users need to manage the size of their swap files? Simply because Windows is a HORRIBLE manager of the hard drive. It fragments a hard drive so badly that it must be defragmented quite frequently - monthly is a good schedule. And the issue is that Micro$oft won't even own up to the fact that fragmentation slows down their OS! Only in WinXP did they even include a defragmenter with their NT-based OSs. When they are so completely blind to their own OS's limitations, what makes you think they have a clue about fragmentation of their swap file? Or about ways to manage it? They don't have a clue.
You seem to keep thinking that the advice given here is based on magazines and inaccurate reporting. Trust me, swap files are discussed on a LOT of gaming message boards (like the gaming boards here). From a lot of different users, the same conclusion emerges: to maximize your experience, it's best to set a static swap file and defragment it. It gives the best system performance.
Verns600m
November 11th, 2003 02:00
jjmspartan,
(1) on the contrary, Windows does an excellent job of fragmenting the hard drive
(2) "newer machines" should read "newer desktops", because pitiful users of new notebooks such as myself only get HDDs of 30 GB, 40 GB, or 60 GB as the available options at purchase (in most cases)
(3) M$ acknowledged the fact you stated, as in the summary of this link:
http://www.microsoft.com/windows2000/techinfo/administration/fileandprint/defrag.asp
(4) bundled defragmenter came with Win2K (pre-XP) as noted in (3)
I think I agree with the rest of your statements =)
just my 2 cents =)
newtonet
November 19th, 2003 01:00
Sorry to bring this thread back to the top, but I found it quite interesting. I am wondering if this is worthwhile with my machine, which only has:
1.80 gigahertz Intel Celeron
8 kilobyte primary memory cache
128 kilobyte secondary memory cache
256 Megabytes Installed Memory
Slot 'A0' has 256 MB
Slot 'A1' is Empty
jjmspartan
November 19th, 2003 01:00
Glad you asked.
IMHO (obviously not shared by EVERYONE...) setting a static page file and defragging it is good for just about all home users. Most users will see a noticeable improvement in desktop performance from defragging the swap file. In fact, defragging is often called "a gamer's best friend" because of the advantages it offers while running your favorite game (BF1942 in my case). The program mentioned in my original post is free, and I never pass up good free software that can improve the performance of my system.
Good luck with your box, and let us know if you agree with our assessment of improved performance after you defrag.