64-bit Visual Studio -- the "pro 64" argument

[I don't have to go out on a limb to acknowledge that this is the worst article I've ever written.  I wrote it in the wee hours one morning in a rotten mood and it shows.  There are far too many absolutes that should have been qualified, and the writing style is too aggressive for no good reason.  I'm not taking it down because there are worthy comments, and I refuse to pretend it never happened.  But I absolutely regret writing this article in this way.  If you choose to read it, use a large sanity filter and look at some of the comments and the follow-up for qualifications that help show what I'm getting at.]

[This article is in response to an earlier posting which you can find here]

I’ve been reading some of the commentary on my post about 64-bit Visual Studio -- which is really about 64-bit vs. 32-bit generally, using Visual Studio as an example -- and I have to say that, for the most part, I’m pretty disappointed with the arguments being put forth in favor of 64 bits.

[Some less than charitable and totally unnecessary text removed.  I blame myself for writing this at 2:30am.  It was supposed to be humorous but it wasn't.]

There is an argument to be made here, but there is also a great deal of ignoring of the real issue going on.

Let’s actually go about doing the job of properly attacking my position the way I think it should be attacked, shall we?

I start with some incontrovertible facts. Don’t waste your time trying to refute them; you can’t refute facts. You can have your own opinion, but you can’t have your own facts.

The relevant facts are these:

-the same algorithm coded for 64 bits is bigger than it would be coded for 32 bits

-the same data encoded for 64 bits is bigger than it would be encoded for 32 bits

-when you run the same code, but with a bigger encoding, over the same data, also with a bigger encoding, on the same processor, things go slower (bigger code and data mean more cache misses and more memory traffic)

-any work I can possibly do has an opportunity cost, which means there is some other work I can’t do

All righty, it’s hard to argue with those.
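
To make the first two concrete, here's a minimal sketch (my own illustration, nothing VS-specific) of how pointer-heavy data grows just by recompiling the same source for x64:

```cpp
// Minimal sketch: the same struct compiled for x86 vs. x64.
// Pointer-heavy data grows because pointers go from 4 to 8 bytes.
#include <cstdio>

struct Node {
    Node* next;   // 4 bytes on x86, 8 bytes on x64
    Node* prev;   // 4 bytes on x86, 8 bytes on x64
    int   value;  // 4 bytes either way
};

int main() {
    // Typically prints 4 and 12 when built for x86, 8 and 24 for x64
    // (the extra comes from pointer size plus alignment padding).
    std::printf("sizeof(void*) = %zu\n", sizeof(void*));
    std::printf("sizeof(Node)  = %zu\n", sizeof(Node));
    return 0;
}
```

Same algorithm, same data shape, but in the 64-bit build every pointer and every cache line is doing less useful work per byte.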

Now let’s talk about the basis I use for evaluation.

-I get points for creating a great customer experience

-I get no points for using technology X, only for the experience; using fewer technologies for the same experience is better than using more

-I get no points for using more memory, or even for enabling the use of more memory, only for the experience; using less memory for the same experience is better than using more

OK, so in short, I begin with “64 bits gets no free inherent value; it has to justify itself with Actual Benefits like everything else.”

We cannot make a compelling argument with fallacies like “32 bits was better than 16, therefore 64 must be better than 32”, nor will we get anywhere with “you’re obviously a short-sighted moron.”

But maybe there is something to learn from the past, and what’s happened over the last 6 years since I first started writing about this.

For Visual Studio in particular, it has been the case since roughly 2008 that you could create 64-bit VS extensions and integrate them into VS such that your extension could use as much memory as it wanted (multi-process, hybrid-process VS has been a thing for a long time). You would think that would silence any objections right there -- anyone who benefits from 64 bits can be 64-bit, and anyone who doesn’t need 64 bits can stay 32-bit. It’s perfect, right?
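
The shape of that hybrid model is roughly this (a minimal sketch of my own, not the actual VS extensibility API): the 32-bit host stays small and hands the memory-hungry work to a separate 64-bit process, reading the results back over a pipe. The worker name and its arguments below are invented for illustration.

```cpp
// Sketch only: a 32-bit host delegating to a hypothetical 64-bit worker.
// "worker64.exe" and its command line are made up for this example.
// _popen/_pclose are the MSVC CRT's process-pipe functions.
#include <stdio.h>

int main() {
    // Launch the 64-bit helper and read its output over a pipe.
    // The helper can use as much address space as it likes without
    // bloating the 32-bit host.
    FILE* worker = _popen("worker64.exe --analyze big-solution.sln", "r");
    if (!worker) return 1;

    char line[512];
    while (fgets(line, sizeof(line), worker)) {
        printf("from 64-bit worker: %s", line);
    }
    _pclose(worker);
    return 0;
}
```

Simple enough in outline.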

Well, actually things are subtler than that.

I could try to make the case that the fact that there are so few 64-bit extensions to VS is proof positive that they just aren’t needed. After all, it’s been nearly 8 years; there should be an abundance of them. There isn’t an abundance, so, obviously, they’re not that good, because capitalism.

Well, actually, I think that argument has it exactly backwards, and it leads to the undoing of the points I made in the first place.

The argument is that perhaps it’s just too darn hard to write the hybrid extensions. And likewise, perhaps it’s too darn hard to write “good” extensions in 32 bits that use memory smartly and page mostly from the disk. Or maybe not even hard, but let’s say inefficient -- from either an opportunity-cost perspective or a processor-efficiency perspective; and here an analogy to the 16-bit to 32-bit transition might prove useful.

It was certainly the case that, with a big disk and swappable memory sections, any program you could write with 32-bit addressing could have been created in 16 bits (especially with that crazy x86 segment stuff). But would you get good code if you did so? And would you incur extraordinary engineering costs doing so? Were you basically fighting your hardware most of the time trying to get it to do meaningful stuff? It was certainly the case that people came up with really cool ways to solve some problems very economically, because they had memory pressure and economic motivation to do so. Those were great inventions. But at some point it got kind of crazy. The kind of 16-bit code you had to write to get the job done was just plain ugly.

And here’s where my assumptions break down. In those cases, it’s *not* the same code. The 16-bit code was slow, ugly crapola working around memory limits in horrible ways; the 32-bit code was nice and clean and directly did what it needed to do with a superior algorithm. Because of this, the observation that the same code runs slower when it’s encoded bigger was irrelevant. It wasn’t the same code! And we all know that a superior algorithm that uses more memory can (and often does) outperform an inferior algorithm that’s more economical in terms of memory or code size.
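
As a sketch of that last point (again my own illustration, nothing VS-specific): the memory-hungry version builds a full in-memory index once and answers lookups in O(1) on average; the frugal version keeps almost no extra state and rescans the whole table for every query.

```cpp
// Illustrative sketch only: trading memory for speed.
// Variant A keeps everything indexed in memory (the "big address space"
// approach); variant B stays frugal and rescans for every lookup.
#include <string>
#include <unordered_map>
#include <vector>

struct Symbol { std::string name; int location; };

// A: build a full in-memory index once -- uses more memory,
//    but each lookup is O(1) on average.
std::unordered_map<std::string, int>
build_index(const std::vector<Symbol>& symbols) {
    std::unordered_map<std::string, int> index;
    index.reserve(symbols.size());
    for (const auto& s : symbols) index.emplace(s.name, s.location);
    return index;
}

// B: keep no index -- uses almost no extra memory,
//    but each lookup is O(n) over the whole table.
int find_linear(const std::vector<Symbol>& symbols, const std::string& name) {
    for (const auto& s : symbols)
        if (s.name == name) return s.location;
    return -1;
}
```

Whether the extra memory is worth it depends on how many lookups you do -- which is exactly the kind of trade-off a 4G address space forces you to think about.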

Do we have a dearth of 64-bit extensions because it’s too hard to write them in the hybrid model?

Would we actually gain performance because we wouldn’t have to waste time writing tricky algorithms to squeeze every byte into our 4G address space?

I don’t have the answer to those questions. In 2009 my thinking was that for the foreseeable future, the opportunity cost of going to 64-bits was too high compared to the inherent benefits. Now it’s 2016, not quite 7 years since I first came to that conclusion. Is that still the case?

Even in 2009 I wanted to start investing in creating a portable 64-bit shell* for VS because I figured the costs would tip at some point. 

I don’t work on Visual Studio now, so I don’t know what they’re thinking about all this.

If there’s a reason to make the change now, I think I’ve outlined it above. 

What I can say is that even in 2016, the choice doesn’t look obvious to me. The case for economy is still strong. And few extensions are doing unnatural things because of their instruction set -- smart/economical use of memory is not unnatural. It’s just smart.

*the "Shell" is the name we give to the core of VS (what you get with no extensions, which is nearly nothing, plus those few extensions that are so indispensable that you can't even call it VS if you don't have them, like solutions support -- that's an extension]