Why is Windows so slow?

I’m a fan of Windows, specifically Windows 7. For the most part I like it better than OSX. I have 4 Macs, 3 Windows machines and 3 Linux machines that I access regularly.

But…I work on a relatively large project, and Windows is literally an ORDER OF MAGNITUDE slower to check out, to update, and to compile and build than Linux. What gives? I don’t know that all of this is the fault of Windows itself. As far as I know, some of it is the fault of software that doesn’t use Windows in the right way to get maximum speed.

You can reproduce this yourself though. Download the code.

The simplest test is this: on Windows, open a cmd prompt, cd to the src folder, and type

dir /s > c:\list.txt

Do it twice and time how long the second run takes. The reason to time only the second run is to give the OS a chance to cache data in RAM.
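
If you want numbers instead of a stopwatch, a tiny batch file that brackets the command with %time% will do (the file names here are just examples; one commenter below uses exactly this trick):

echo start: %time% >> timing.txt
dir /s > c:\list.txt
echo end: %time% >> timing.txt

Run it twice and compare the second pair of timestamps.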

Now do the same thing in Linux: check out the code, cd to src, and run

ls -R > ~/list.txt
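
Or let the shell do the timing for you:

time ls -R > ~/list.txt

Again, run it twice; the second run is served from the page cache.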

In my tests, on two identical machines (both HP Z600 workstations with 12GB of RAM and eight 3GHz cores), it takes 40 seconds on Windows and 0.5 seconds on Linux. Note that I used git to check out the files on both machines, so there are more than 350k files in those folders.

Why is Windows in this case 80x slower than Linux? Is there some registry setting I can use to make it faster?

Similarly, compile the code. Using Visual Studio 2008, follow the instructions here. Select the chrome project and build it. Edit one file, say src/gpu/command_buffer/client/gles2_implementation.cc (just change a comment or something), then build again. Try the same on Linux. For me these incremental builds take about 3 minutes on Windows, with most of that time spent in the linker. On Linux they take 20 seconds. That’s 9x faster. Installing the new ‘gold’ linker takes it down even more, to 7 seconds, or 25x faster.
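
For reference, on a toolchain that supports it, gold can be selected per link with a flag, something like:

g++ -fuse-ld=gold ...

(That flag assumes a new-enough gcc; older setups instead install a binutils-gold package, which swaps gold in as the system ld.)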

Come on Microsoft, step up your game! Personally I’d much rather program on Windows than Linux (yeah, I know, sacrilege to some). Visual Studio’s debugger is far more productive than gdb or cgdb (maybe there is something better on Linux I don’t know about). Plus, our users are mostly on Windows, so I’d rather be on Windows to get the same experience they do. GPU drivers are much better on Windows as well, plus there are other apps I use (Photoshop, Maya, 3DSMax) that don’t exist on Linux.

But, I can’t stay on Windows with Linux being so much faster to build. It’s the difference between being totally productive and taking a coffee break every time I change a line and compile.

That’s not all of it either. git is EXTREMELY FAST on Linux, whereas on Windows, not so much. It’s probably no slower than svn on Windows as far as I can tell (I haven’t timed it), but one of the many reasons people switch to git is that it’s so fast on Linux. Again, it’s a 10x to 100x difference between Windows and Linux.

All I can think is that 99% of the developers who use Microsoft’s tools are writing Windows-only code. As such they have no way to compare times, and so Microsoft has no incentive to make it better. Except of course they do: if Microsoft’s own developers are using the same tools, then they are losing valuable time waiting for those tools to do their jobs.

Here’s hoping Microsoft will step it up.

PS: Of the 3 OSes, OSX is a mixed bag. git on OSX is slower than on Linux but faster than on Windows. Building on OSX, though, is SSSLLLOOOWWW: nearly 3 times slower than Windows using XCode.

PPS: Chromium has the option to build as a set of shared libraries instead of one large executable. This helps the link times on Windows significantly, but it also helps link times on Linux. The relative speeds are still the same.
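
For anyone who wants to try the shared-library configuration: at the time of writing it was selected through a GYP define, roughly as below, before regenerating the project files (the exact variable spelling may have changed since, so treat this as a sketch):

GYP_DEFINES=component=shared_library gclient runhooks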

  • Homestar Ruiner

    Tell me about it. When I compile “hello world” in Turbo Pascal, the .com file is 6k but in Microsoft C the .exe file is 9k!  WHERE ARE THOSE EXTRA 3K COMING FROM MICROSOFT

  • http://profiles.google.com/jbverschoor Joris Verschoor

    You should read about the difference between a .com and a .exe. 

  • http://twitter.com/ajasmin Alexandre Jasmin

    Plus, Win32 PE .exes are also DOS MZ .exes, so that they can print a “Please run this under Windows” message when you run them under DOS.

    You basically get two hello worlds in one.

  • http://profiles.google.com/rxantos Ricardo Santos

    That’s nothing.
    Using MASM, the minimum .com is 1 byte (a single RET instruction),
    and a hello world .com is about 32 bytes or less.

    The minimum valid DOS .exe is 1024 bytes (although you can use a trick to make a 512-byte exe).

    The minimum valid Windows .exe is 4K (although you can use tricks to make it 1K).

    The trick to getting a 4K exe under Windows? Use the Windows API instead of the C API.

  • luefher

    Really?! Turbo Pascal???

  • http://twitter.com/NightLifeLover Nils

    I’m wondering about this too. Making Windows as fast as (or even faster than) Linux ought to be a matter of pride for Microsoft’s Windows team. I hope somebody with a deep understanding of how Windows works will post a detailed answer.

  • http://twitter.com/NightLifeLover Nils

    Maybe you should post this on stack overflow/-exchange.

  • Branimir Lambov

    This answer probably comes very late, but have you tried disabling last access timestamps in NTFS?

    See http://oreilly.com/pub/a/windows/2005/02/08/NTFS_Hacks.html (the disable 8.3 names hack may also help).
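
    For reference, those two tweaks look like this from an elevated command prompt (the exact syntax varies a little between Windows versions, so check fsutil’s built-in help first; a reboot is needed for them to fully take effect):

    fsutil behavior set disablelastaccess 1
    fsutil behavior set disable8dot3 1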

  • http://barrkel.blogspot.com/ barrkel

    Windows 7 disables those by default.

  • anjan bacchu

    hi there,

    1) Guess: Windows usually comes with anti-virus, anti-phishing and other anti-* stuff PLUS some backup software. Add equivalent stuff on Linux and see how file access slows down to a crawl.

    2) An ex-boss told me that when he was working on some CAD-like software about 15 years ago, they found that the MS compiler (specifically the linker) was a lot faster (10x OR MORE) than the equivalent Unix compilers (not sure if they did any Linux version at that time).

    3) ls -R should be replaced with ls -lR, which is the equivalent of dir /s.

    BR,
    ~A

  • HerrWeigel


    “about 15 years ago”

    Long ago in a galaxy far far away…

  • Anonymous

    Ironically, I was just reading hacker news as I was waiting for Visual Studio to come back from “not responding” as it was building my project and came across your post.  I couldn’t agree more – and it very much feels related to disk I/O. After having coded the last 3 years in the Windows world after the 3 prior on Linux, I’ve come to believe you’re just putting yourself at an overall productivity disadvantage by choosing the Windows stack.  The “coffee break”-magnitude delays are crippling.  

  • http://profiles.google.com/uuf6429 Christian Sciberras

    You’re talking about Visual Studio compile speeds! Nothing related to the OS. Switch over to Delphi and you’ll see a compile speed increase of up to 300%.

  • Manuel Kröber

    We’re working in Delphi, and a project with about 950K lines takes about 20-30 seconds for a complete rebuild. An incremental build is done in 10 seconds, with the linker taking up most of the time.

  • Darren Wade

    You mean people still use Delphi?  I had thought it a lost cause.  I’m glad someone still carries the torch.

  • JPaul

    Bro, look at this link http://delphifeeds.com and you will see that Delphi is very much alive.

  • http://twitter.com/bjmaz Brett Wilkins

    The issues you’ve mentioned mostly sound filesystem/IO-related. The commenter who mentioned antivirus etc. could also be correct.

  • http://randomfoo.net/ lhl

    Besides tweaking NTFS, it would also be interesting to try an ext4 partition w/ something like Ext2Fsd ( http://www.ext2fsd.com/ ) and, potentially, to try Cygwin to see whether it’s a shell issue or an FS issue.

  • http://barrkel.blogspot.com/ barrkel

    You’re comparing wildly different things. MSVC supports link-time code generation, which enables interprocedural optimizations; this can substantially increase apparent link time, which is actually code generation time. MSVC also usually generates better code than gcc.

    Git is written by a (or rather, “the”) Linux kernel guru; it’s only natural that it is tuned to it. But probably it is dominated by filesystem access, your other beef. It sounds like you are not using an SSD. In that case, fragmentation may be a problem; when free space is fragmented, NTFS can create badly fragmented files, particularly for large files that grow incrementally, such as large directory listings or files written to by programs that don’t use SetFilePointer / SetEndOfFile to set the maximum file size before writing (this is typical of programs not tuned for Windows, likely to be the case for git). On the other hand, for truly massive directory listings (50,000+ files in a single directory), Windows can do well for lookup (as opposed to mere listing) because NTFS uses a btree-based on-disk structure to hold the entries.
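
    A minimal sketch of the SetFilePointer / SetEndOfFile preallocation pattern described above (the file name and size estimate are hypothetical; error handling omitted):

    #include <windows.h>

    int main()
    {
        // Hypothetical estimate of the final file size.
        LONGLONG expectedBytes = 16LL * 1024 * 1024;

        HANDLE h = CreateFileA("list.txt", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        // Reserve the full extent up front so NTFS can allocate one
        // contiguous run instead of growing the file piecemeal.
        LARGE_INTEGER pos;
        pos.QuadPart = expectedBytes;
        SetFilePointerEx(h, pos, NULL, FILE_BEGIN);
        SetEndOfFile(h);

        // Rewind and write normally.
        pos.QuadPart = 0;
        SetFilePointerEx(h, pos, NULL, FILE_BEGIN);
        // WriteFile(h, ...) calls go here.

        // Trim to what was actually written before closing.
        SetEndOfFile(h);
        CloseHandle(h);
        return 0;
    }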

  • http://twitter.com/nbevans Nathan Evans

    There’s hardly any difference between NTFS and the various Linux/Unix file systems in terms of data structures. What DOES differ is the default tuning options, especially block size but also things like write caching. Windows 7 is optimised for desktop and laptop PCs, which means a 4KB block size and various write caching set to their most conservative (to avoid huge amounts of data loss if there’s a power cut). Linux can get away with more aggressive default settings as it is catering to a different market.

  • Robbie Fan

    Most of that is very accurate to me, except one thing: “Enable advanced performance”. IIRC, it uses the hard disk’s built-in write cache and avoids flushing data to the platters even if the application asks for it, but the Windows part of the cache still does the flush (that is, from RAM to the hard disk’s built-in cache memory).

  • Anonymous

     Is link-time optimization on by default? I thought it was an option. Now that you mention it, I would like to see a benchmark with gcc LTO vs. msvc LTO build times.

  • Tom Cook

    GCC does LTO, too.  See -flto.
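
    To use it, pass the flag at both the compile and link steps, e.g.:

    g++ -O3 -flto main.cpp -o main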

    It’s also worth noting that the MS implementation of the STL can be quite inefficient.  This code:

    #include <algorithm>
    #include <iostream>

    int main(int argc, char ** argv)
    {
      int x = 5;
      int y = 7;
      int z = std::max(x, y);
      std::cout << z << std::endl;
      return 0;
    }

    preprocesses to >1MB and 107,000 lines.  I know the line counts are just newlines, not LOC, but still a fairly striking difference.  We’ve found some source files that make moderate use of the STL that preprocess to well over 800,000 lines and tens of megabytes of text.  I don’t care how good your compiler is, when your STL evaluates to that many lines of code it is going to be slow.

  • Tom Cook

    And some times taken:

    VS2010 default release configuration options running on Windows 7 64-bit: 890ms.

    g++ 4.6 using ‘-O3 -flto’ running on Ubuntu 11.10 in a VMWare Server 2 VM on top of the Windows 7 64-bit machine: 483ms.

    You might quibble about whether the compile options are comparable, but it’s pretty stunning when you consider that g++ is having to go through the filesystems of two operating systems.

  • http://twitter.com/codemonkeyism Stephan Schmidt

    I have been timing Java/Maven projects for the last 5 years on Linux and Windows. Linux is consistently faster, up to twice as fast with the same Compiler/Build Tool. Even after disabling Anti-Virus, Backup, Snapshots etc. for the build directory, Linux is faster. I do assume it’s due to the file systems involved.

    Best
    Stephan
    http://codemonkeyism.com

  • http://pulse.yahoo.com/_VAXUPLWSPNK55YLKNWXTOQHOPY q

    Like barrkel said, Git is heavily optimized for Linux (consult Git’s internals). There is no point considering it as a benchmark among operating systems.
    Windows got bloated, that is the sad truth. Just compare Windows 7 with older Windows versions; there is no simpler way to put it. Yes, older versions are missing some “very important” features, but facts are facts. And Linux is heading toward the same fate; Mr. Torvalds himself admitted that: http://news.cnet.com/8301-13505_3-10358024-16.html

  • http://pulse.yahoo.com/_VAXUPLWSPNK55YLKNWXTOQHOPY q

    …and another thing – there is an open source version of Windows in the works. It’s called ReactOS. You may contribute to it if you appreciate the NT Windows-like world.

  • Tom Burdick

    Run Windows under Linux using a VM and watch your disk IO double in performance. Compiles for a 250Kloc project I worked on went from about 15 min to about 6-7 min. At least that has been my personal experience.

  • Portman Wills

    Saw this on HackerNews. I downloaded the Chromium tarball, unzipped, and from a command prompt ran this batch file twice to check the timings:

    echo start: %time% >> timing.txt
    dir /s > list.txt
    echo end: %time% >> timing.txt

    Here is the output of timing.txt:

    start: 12:00:41.30
    end: 12:00:41.94
    start: 12:00:50.66
    end: 12:00:51.31

    So the first pass was 640ms and the second was 650ms.

    This is on Dell OptiPlex 980, 8GB RAM, i7 @ 2.8GHz, 64 bit OS, RAID0 HDD (not SSD).

  • http://greggman.com greggman

    hmm, I wonder what’s wrong with my system then

  • Portman Wills

    There are some good suggestions here:
    http://news.ycombinator.com/item?id=3368771 

    – Disable NTFS last access time “fsutil disablelastaccess 0”
    – Cleanup git turd files “git gc”
    – Make sure antivirus is OFF that directory
    – Blame Google IT’s centrally managed Win7 deployment?

  • http://twitter.com/x_cubed Carey Bishop

    That command should be:

    “fsutil behavior set disablelastaccess 1”

  • http://twitter.com/Windsingerphox J. Scotty Emerle

    Tortoise on Windows is also slow on big projects with lots of small classes. It’s also got some dumb bugs and fails to Commit properly sometimes.

  • http://profiles.google.com/uuf6429 Christian Sciberras

    I love my Windows 7 :).

    The thing is, the Windows 7 CLI system (including execution of exes) wasn’t meant for speed. Think about it: Windows is a GUI OS. People use Windows mostly via the GUI. In fact, I only use the CLI for very basic things; anything more than that and it’s PuTTY for me.
    Linux, on the other hand, is a CLI OS. It always has been. Go on, boot Linux; the first thing you see is a CLI which you can easily interrupt and use. Is this bad? Not at all! It’s just different.

  • Dominic Amann

    On my workstation, opening explorer windows takes far longer on the Windows box than a directory listing on my half-the-cpu-speed linux box beside it. And to make matters worse, I can display a folder using Nautilus (the standard graphical explorer equivalent for Linux) *over the network* faster on Linux than I can a local directory on Windows.

    The bottom line is that everyone who really knows Linux knows that it blows by Windows in terms of performance on the same hardware. Always has, always will. The only area where Windows beats Linux is the availability of games and shrink-wrapped apps.

  • Guest

    That’s funny. Writing high-speed, high-volume servers on Windows and Linux, it’s always the Windows version that blows by the Linux version – from 10% to sometimes 15,000%.

    Guess it all depends on what you’re doing and what part of the OS and hardware you’re exercising.



  • Jeremy Franzen

    I found msysgit on Windows 7 to be far slower than it was on XP until I disabled the luafv driver using sysinternals autoruns. It has something to do with file virtualization associated with UAC. Merely disabling UAC didn’t help. I HAD to disable the driver to get the performance back to a reasonable level. Just doing a ‘git status’ went from taking several seconds to a fraction of a second. See http://code.google.com/p/msysgit/issues/detail?id=320
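
    For what it’s worth, a scripted equivalent of that autoruns change should be something like the following from an elevated prompt, followed by a reboot (untested here, treat it as a sketch; luafv is the driver named in the issue above):

    sc config luafv start= disabled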

  • 畏友 頃遅効

    What’s funny is, because Windows is more geared towards threading, if you were to have two identical machines like those mentioned above but with solid state discs, Windows should be on par with Linux with indexing off.

  • 畏友 頃遅効

    I say indexing off because access times would likely go UP on a modern SATA 6Gb/s drive.

    I’m currently running Win7 x64 on an OCZ Agility 3 120GB (525MB/s read, 500MB/s write) and I’d imagine Linux would perform as fast or a little slower on some of the latest Intel processors.

  • Rodrigo Ratan

    Why did you use DOS, a 15-year-old interface, to list files? Don’t you know that DOS is there only for compatibility? Windows (not DOS) is optimized in every version to get better File API performance.

  • Anonymous

    I don’t think it’s fair to compare different compilers/IDEs (you’re not somehow using VS on Linux, are you?) and then blame the OS, but aside from that, I once found that the speed of commands like

    dir /s > c:\list.txt

    can be massively impacted by anti-virus. Although this was some years ago now, I found that without any anti-virus running, opening ‘edit’ at the DOS prompt was instant; with anti-virus it took over a second. That additional time was added to pretty much every file open anywhere in the system, so it slowed the whole thing down. Modern anti-virus may be better though.

  • Sp

    First, the speed of running dir /s versus ls -R: Windows is notoriously slow at writing to its console; Linux is much faster. You can play with different fonts and sizes in the cmd window on Windows and even that makes quite a difference. Try piping the output to a file (or to nul) to measure the actual process speed.
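
    For example, this takes both the console and the output file out of the measurement:

    dir /s > nul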

    Second: building with Visual Studio and its half-baked MSBuild takes an age and a half. Try using a decent build tool (like gmake or NAnt) and you will find a different build story.

    Don’t get me wrong, I don’t love Windows, but let’s have a proper contest so that the superiority of Linux is demonstrated without recourse to the half-truths that come out of Redmond.

  • Martin

    Hi, so you are working with three different OSes.

    You should know that comparing apples with pears usually does not make sense. Nice to read, but not more. Just read the comments; a lot of good facts are mentioned there, starting with the FS, the scheduler, etc.

    Merry Christmas and a happy new year.
    Martin

  • http://twitter.com/tabinnorway Terje A. Bergesen

    For the record, there is no Windows 3.1 code in Windows today. Windows NT was a new operating system, written from the ground up (and slightly based on OS/2). Win16 applications run on Windows NT/2K/XP/7/8 through an emulation layer, very similar actually to Wine on Linux.

    The Win16 code base was discontinued after Win98/ME.

  • Luiz Felipe

    I think Microsoft doesn’t care much about build speed. Windows is so big that it takes hours to build; no one will sit idle waiting for the build to finish. They have clusters to build it in parallel nightly. They solve the problem the way all real businesses solve it: they buy more hardware.

  • tmgoblin

    You might need to downgrade; “newer is not always better” (Will Rogers?). I just revived a circa-1996 Compaq laptop running Windows 95. It has a Pentium 1 @ 120MHz, 48MB of RAM and an 880MB hard drive. After stripping the old antivirus, it boots from cold to a ready desktop in 30-40 seconds, with 88% memory free (while running PWS, the old Personal Web Server). Most impressively, log-off and shutdown complete in 5 seconds flat, and opening a text file in Notepad takes the blink of an eye. Deleting files is also practically instantaneous. The installation files take up only 88MB, giving Damn Small Linux a bit of competition.

    I am going to replace the hard drive with a 8 gig CF card/IDE adapter and see if I can dual boot with DSL frugal or a debian with reduced kernel and see how they compare.

    Recent versions of Windows can take way too long to shut down. Booting isn’t so bad, but it’s not 40 seconds by a long shot. Deleting a recently created empty file off the desktop can take as long as defragging a 1TB hard drive. Navigating files with Explorer gets worse with each release.

    If w95 is too extreme, W2K SP4 is probably the way to go – it was one of MS’s best.

  • Brodz

    Well I posted exactly what you told me to in cmd. And it was basically instantaneous. No wait at all. I went and checked the file and it was there, with the listed results. A 1mb file.

    My system (all desktop spec) runs an AMD Phenom II X2 @ 3.6GHz, 8GB of DDR3 RAM, and the OS (Win7 64-bit Pro) alone runs over a SATA III bus to a SATA III Patriot Pyro SSD. Then again, if you have all your data and OS on the one HDD or RAID, then I’m not really running this test properly, am I?

  • Claw

    Not to mention the time it takes to install the development tools and documentation. I had to install Visual Studio 2010 Professional, Windows SDK 7.1, and Qt 4.8.0 on my work computer two days ago, and the entire process took more than half of my office hours. Installing these tools took far more time than it should have:

    – VS 2010 Professional
    – .NET Framework 4
    – KB2468871 for .NET Framework 4
    – Windows SDK 7.1
    – SP1 for VS 2010
    – KB2519277

    I also had to run “NGEN.exe update” after installing .NET Framework 4, KB2468871, the SDK and SP1 for VS 2010 — that’s four times, by the way.
    Should I have to reinstall them for some reason, I will have to go through the same procedure manually and spend pretty much the same amount of time again, as there’s no way to integrate them into a single package or at least automate the procedure. I find it ironic that a company that debuted as a development tool vendor can’t make the process of installing its development tools on its own operating system efficient.

  • Dr Eel

    “…cumputing heaven.”

    Been watching prons much?

  • Marcusgy

    Here’s another data point for what it’s worth.

    We developed on Windows and the Java project took about 1.5 minutes to clean and build in NetBeans. After switching to Linux that dropped to 15 seconds. This is on the same machine. In this case it was Windows Vista (years ago) and Ubuntu 10.04.

    I always thought it was the better filesystem and better caching by the OS.

  • Grunwald

    I had this experience when porting Solaris code to Windows. But since some time constraints had to be met, I had to investigate further. After a while I had a library which timed every IO call. On a fragmented disk I saw file lookups that needed more than 10 seconds (yes, a single file!), just to get the file time. Since the file time is needed by every version-control and rebuild operation, all you can do is defragment early and often.

    I suspect NTFS uses its B-tree lookup on fragmented storage, where it is not going to perform; a B-tree meets its timing constraints only if a node lookup is O(1).
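
    A minimal sketch of that kind of per-call timing, using the Win32 high-resolution counters around a single metadata lookup (the path here is hypothetical):

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        // Time one "get the file time" style lookup.
        WIN32_FILE_ATTRIBUTE_DATA info;
        QueryPerformanceCounter(&t0);
        GetFileAttributesExA("src\\gpu\\some_file.cc",
                             GetFileExInfoStandard, &info);
        QueryPerformanceCounter(&t1);

        printf("lookup took %.3f ms\n",
               (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart);
        return 0;
    }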
