Suppose I give my developers a screaming fast machine. WPF-based VS2010 loads very quickly. The developer then creates a WPF or WPF/e application that runs fine on his box, but much slower in the real world.
This question has two parts...
1) If I give a developer a slower machine, does that mean that the resulting code may be faster or more efficient?
2) What can I do to give my developers a fast IDE experience, while giving 'typical' runtime experiences?
For the record, I'm preparing my even-handed response to management. This isn't my idea, and you folks are helping me correct the misguided requests of my client. Thanks for giving me more ammunition, and references to where and when to approach this. I've +1'ed valid use cases such as:
- specific server side programming optimizations
- test labs
- possibly buying a better server instead of top-of-the-line graphics cards
The answer is (I'll be bold and say) always this:
Develop on the best you can get with your budget, and test on the min-max spec range of equipment you'll deploy to.
There are emulators, virtual machines, and actual machines with testers, all of which can test performance to see if it's a factor.
Very, very unlikely. No, and your developers may put something nasty in your coffee for suggesting it. Time your developers spend waiting for the code to compile, or for the IDE to do whatever it's doing, is time they're not spending making the code better. It disrupts their mental flow, too. Keep their minds on the problem, and they'll be much more efficient at solving that problem.
2) Give them each a second PC representing the lowest specs you want them to actually support, with a KVM switch to go between that and their real workstation.
This is a terrible idea. You want your developers to be as productive as possible, which means giving them as fast a machine as possible, so they don't sit around all day waiting for things to compile. (Slightly OT, but it also helps not to block their access to potentially helpful sites with WebSense and the like.) If you are constrained by having users who are still running Stone-Age technology, then you'll need to have a test machine with similar specs, and be sure to test early and often to make sure that you aren't going down the wrong road in terms of technology choices.
Point 1, NO! Visual Studio is meant to be run on decent machines, and its requirements have only grown with each version. You can actually lock up some versions of Studio if you turn IntelliSense on and use a single-core, non-hyperthreaded box.
To point #2: there are some features in the testing projects that allow you to throttle some resources. They aren't perfect, but they are there. VPC or low-spec VM images do a pretty good job of being constrained as well. I have occasionally had users sit down at bad machines to do testing, so that they can see the implications of the features they have requested.
It results in a bunch of bitchin' developers. This stuff is hard enough as it is, let's not make the experience worse.
I would encourage you, however, to have hardware similar to your users' in a Test or QA environment, to smoke out any performance issues. That's a good idea.
That programmers sitting on slow hardware would write faster applications is equivalent to arguing that race car engineers equipped with crappy tools would make faster vehicles.
This is an interesting thought (giving devs a slow machine may lead them to optimize more).
However, the solution is framed in a better way - put the response time in the requirements for programs, and have a low-end machine available for testing.
Also, if you have a really whiz-bang compiler/language, it might be able to devise different ways to generate code and pick the best one. That would only be helped by a faster computer.
Others have responded that generally you want developers to have fast machines. I agree. Do not skimp on RAM: you want as much of the build in memory as you can, since some build processes are very heavy on disk usage.
One thing you might want to consider getting rid of is antivirus scanning on build drives! It adds nothing there and can be an extremely strong slowdown factor.
You may want to let the developers develop on Linux if possible. The tools there are much better for all kinds of extra tasks (just grep for something in a file, etc.). This also gets rid of the antivirus.
Nope - in fact it would result in more bugs because they won't do as much testing, and they won't use extra tools like profilers as much. Give them the best machines you can afford (including graphics acceleration hardware if you're a game development or graphics shop), and have them test inside VMs. The VM specs can be scaled up or down as needed.
I think this is an interesting question, and I wouldn't go for a "no" that quickly. My opinion is: it depends on what kind of development team we are talking about. Example: if you are leading a group that's competing in the annual ICFP programming contest, getting good results after a small amount of development time on an HPC cluster wouldn't necessarily mean that the solution you found is good. The same can be said if you are writing a scientific or numerical algorithm: on your old AMD Duron 600 MHz with 64 MB of memory, you're forced to be careful about the way you get things done, and this may affect even some design choices.
On the other hand, a smart programmer/scientist/whatever is supposed to be careful anyway. But I found myself writing some of my best code when I had NO computer AT ALL and had to take notes on paper. This may not apply to big projects involving huge frameworks, where an IDE is strictly necessary.
One thing is sure: fast machines and good immediate results make (bad) programmers spoiled and may be one of the reasons for some of the crap we find on computers.
Development should be done in the best environment that is feasible. Testing should be done in the worst environment that is feasible.
I'll buck the norm and say yes IF AND ONLY if they're writing server software. Laugh all you want, but the most efficient team I ever saw was a group of Perl guys with Wyse terminals. This was late 1990s, was a University off-shoot shop, and they were writing spatial gridding software (which basically just calculates). They were however talking to some relatively powerful late-model RS/6000s.
Just to add interest to the event, there was a blind programmer there. I was thoroughly impressed.
THIS IS THE MOST DISGUSTING THING I HAVE EVER READ... I think giving your developer a two-legged stool to sit on would have the same desired effect: "he will work for you, and when you are not looking, seek other employment."
It's also true that managers should conduct all meetings in Pig-Latin. It improves their communication skills overall to have them disadvantaged when speaking simple sentences. They'll have to rely more on facial expressions and body language to get their point across and we all know that is at least 70% of all communication anyways.
CFOs should use only an abacus and chalk. Otherwise they end up relying too much on 'raw data' and not enough on their 'gut feel'.
And Vice Presidents and higher should be required to hold all important business meetings in distracting settings like golf courses while semi-intoxicated. Oh snap...
Embedded systems programmers run into this all the time! And there's a two-part solution:
Then it won't matter what hardware your developers work on.
Once you've done that, let's say faster equipment can save your programmers a half-hour a day, or 125 hours in a year. And let's say they cost $100,000 a year with benefits and overhead (ridiculously low for Silicon Valley), or $50 an hour. That 125 hours * $50/hour is $6250. So if you spend anything less than $6250 a year on rockin' development hardware per programmer, you're saving money.
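The break-even arithmetic above can be sketched as a quick calculation. The figures are the ones assumed in the argument (half an hour saved per day, roughly 250 working days, $100,000 fully loaded cost), not measured data:

```typescript
// Back-of-the-envelope break-even for faster developer hardware.
// All inputs are the assumptions from the text above, not measurements.
const hoursSavedPerDay = 0.5;    // time saved by faster hardware
const workDaysPerYear = 250;     // approximate working days in a year
const annualCost = 100_000;      // salary + benefits + overhead, in dollars
const workHoursPerYear = 2_000;  // 250 days * 8 hours

const hourlyCost = annualCost / workHoursPerYear;             // $50/hour
const hoursSavedPerYear = hoursSavedPerDay * workDaysPerYear; // 125 hours
const breakEvenBudget = hoursSavedPerYear * hourlyCost;

console.log(breakEvenBudget); // 6250
```

Any hardware spend per programmer below that break-even figure is, under these assumptions, a net saving.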
That's what you should tell your management.
Tim Williscroft pretty much said the first half of this in a comment, and in a just world, he would get half of any points this answer gets.
Added Oct. 24:
My ex-employer had that theory, and it helped them piss away about $100 million.
They're a Japanese-based conglomerate that was used to hiring programmers in Japan, Korea and China. Folks there are cool with using crappy development hardware, 13-hour work days, sleeping at their desks, and not having a life. So they figured when they acquired a noted Silicon Valley company to do a Linux-based cell phone OS, those silly Californians who wanted modern gear were just whiny prima-donnas and didn't actually have a good reason for it (like productivity).
Four years later, the OS worked like crap, all the schedules were blown, and the customers were pissed off and terminating contracts right and left. Finally, the OS project was cancelled, and a large percentage of the conglomerate's worldwide workforce was laid off over the last year. And frankly, I wouldn't want to have been one of the executives who had to explain to the stockholders where all that money and effort went.
It wasn't just the slow development machines that caused this fiasco. There were a lot of other strategic and tactical blunders - but they were that same kind of thing where the people working in the trenches could see the train wreck coming, and wondered why the decision-makers couldn't.
And the slow gear was certainly a factor. After all, if you're under the gun to deliver on time, is it really a smart thing to deliberately slow down the work?
My MacBook Pro at work is a few years old. Between Linux and Windows (to test IE quirks) VMs, plus a couple of web browsers and terminals open, the OS X spinning wheel shows up a lot. Guess what I do when it spins: I sit and wait. In this case, a slow machine does slow productivity.
I work on a package that takes about an hour to build on my 8-core, 8 GB machine (a full clean build). I also have a relatively low-end laptop I test on. The low-end laptop can't manage two full builds in a single work day.
Am I more productive on the fast machine with some deliberate testing done on the laptop, or should I do all my builds on the laptop?
Keep in mind these are not made up numbers.
It is a rigged demo in that I don't normally need to do a clean build every day (I can do a lot of testing on single modules), but even the partial builds show roughly an order of magnitude difference in compile/link times.
So the real issue is on my slower machine a typical build is long enough for me to go get a cup of coffee, while on my faster machine I can only sip a little coffee.
From a point of view of getting work done I prefer doing development on a fast machine. I can far more reliably hit deadlines. On the other hand I imagine if management made me do development on my slow machine I would get a lot more web browsing done, or at least book reading.
If I was given a slow machine I'd spend my day optimising the development process and not optimising my delivered code. So: NO!
1) If I give a developer a slower machine, does that mean that the resulting code may be faster or more efficient?
We have been building software for the last six decades, and we still get questions like these? This seems more like yet another attempt at cutting corners. No offense, but c'mon, do you think the question is even logical? Think about it in these terms (if you can): you want to build a 4x4 vehicle that can operate under harsh conditions, rain, mud, whatever. Are you going to put your engineers and assembly line out in the elements just to make sure the resulting vehicle can operate in them?
I mean, Jesus Christ! There is development and there is testing. Testing is done in a different, harsher environment, or the developer knows how to assemble a test-bed in his own dev environment in a manner suitable for stress testing. If he can't, replace him with a better developer.
2) What can I do to give my developers a fast IDE experience, while giving 'typical' runtime experiences?
You should be asking that to your developers. And if they can't give you an objective and valid answer, you need to replace them with actual developers.
But to entertain the question: give your developers (assuming you have good developers) good tools and good hardware, the best you can afford. Then set up a lowest-common-baseline environment in which your software must operate. That's where testing should occur. It is much better engineering practice to have a test environment that is distinct from the development environment (preferably one that allows you to do stress testing).
If your developers are any good, they should have communicated this to you (assuming you have asked them.)
In programming, there is an old saying that "premature optimization is the root of all evil". I think you have managed to successfully create another "root" (or at least first branch) of all evil. From now on, we can say "premature developer deoptimization is the root of all evil."
In short, the answer is that this will only slow down your development time and make further maintenance more difficult. Compile times will take longer, searching for code on disk will go slower, finding answers online will take longer, and MOST importantly, developers will start to prematurely optimize their code just to be able to test the needed functionality.
That last point is the most critical issue, and it isn't brought up in many of the other answers. You may get your first version out OK, but then when you want to update the code in the future, you will find that the developers' premature optimization took the focus of your code away from good design and pushed it closer to "gotta make this at least work to keep my job" code. Adding additional features will become more difficult, because the optimizations chosen at the time may be unneeded and lock your code into a path of semi-optimized hacks on top of other semi-optimized hacks.
As an example of this, imagine that your current version's minimum system requirement is a somewhat slow single-processor machine. You place developers on such a box, and they come up with an intricate single-threaded solution that relies on a lot of hacks because they wanted to develop the product quickly. Now, five years later, you have a new version of the product with a minimum requirement of a dual-processor machine. You would like to be able to cleanly separate out parts of the program to run in parallel, but the decision you made five years ago, which forced your developers to write hacky software, now prevents you from using the full power of your new minimum requirement.
What you should do is add a phase at the end of your development cycle where you do acceptance testing on the lower-bound boxes. Certainly some of the code will be too slow because of the developers' faster machines, but you can isolate that part and optimize it there. The rest of your code stays clean and maintainable.
I see your question as saying, "Can I force my developers to optimize early by giving them poor developer machines yet still get good code?" And the answer is no.
This is not a bad idea - but you want your developers to have a speedy programming environment.
You could possibly implement this by giving your programmers two machines - a fast dev box, and a slower commodity box (possibly virtual) for testing.
Some tweaking of the VS build process could make deployment to the test box the norm, with remote debugging.
There are other ways to consider forcing your coders to develop more efficient code: you can include performance and memory-use goals in your unit tests, for example. Setting budgets for memory use is an excellent goal, as is setting page-weight budgets for HTML.
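As a minimal sketch of the performance-budget idea: a test can simply time the code under budget and fail when it overruns. The `processRecords` function and the 200 ms budget here are illustrative stand-ins, not a real API:

```typescript
// Hypothetical work function standing in for whatever you want to budget.
function processRecords(n: number): number {
  let total = 0;
  for (let i = 0; i < n; i++) total += i % 7;
  return total;
}

const budgetMs = 200; // the agreed performance budget for this operation

const start = Date.now();
processRecords(1_000_000);
const elapsedMs = Date.now() - start;

// In a real test runner this would be an assertion that fails the build.
if (elapsedMs > budgetMs) {
  throw new Error(`Performance budget exceeded: ${elapsedMs} ms > ${budgetMs} ms`);
}
console.log(`within budget: ${elapsedMs} ms <= ${budgetMs} ms`);
```

A budget like this runs on every developer's machine, fast or slow, so regressions show up in the normal test cycle rather than only in a late QA pass.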
Interesting reading, all those answers.
But I think most people answering here are missing the point. The question, as I read it, is not (or at least not only) about literally giving the developers a P1 to make them write faster code.
The point is that a lot of software today is just as slow as, or even slower than, the software we used back in the last millennium, in spite of much more powerful computers. Judging from the answers here, most developers don't take that hint. This is very obvious in web applications. This site is a very good exception, but many sites have a front page weighing 1 MB. What do I get for waiting for that to download? I don't know. It seems to come from developers not respecting the time the user has to spend on it, or even worse, pay for, if you pay per MB. The thing is, those web pages don't even contain high-resolution pictures. Often it is just some crap code delivered by some development environment. Well, of course it is not crap code, I guess, but it gives no gain to me as a user.
In general, it is not only about optimizing the code, but just as much about choosing not to include things that slow the application down more than the value they give.
A few weeks ago I started up a laptop from 1995. Windows 3.x was up and running in no time. The database I needed to get some data from started before the Enter key was fully released (almost, at least).
I know that we get a lot more from our software today, but we also have computers many times faster. Why doesn't the development industry decide to keep the speed of the software from 1995, and make people buy new hardware only because they want new functionality? Today it is more like everyday programs and web sites force people to buy new hardware to do exactly the same things they did earlier. But of course in a fancier way.
I have to say I think Linux development seems to handle this better. Linux distributions have for many years been quite far ahead of Windows even in fanciness, with many eye-candy things like animated windows. The thing is, in spite of that, they have worked on the computers of today and even yesterday, not only on cutting-edge hardware.
By now I guess many developers have an unhealthy level of adrenaline. Yes, I found a way to give back some of the frustration from all the waiting in front of:
- Office
- SQL Server (starting up Management Console)
- ArcGIS (starting up and using)
- Acrobat Reader (starting up)
- Agresso (using, at least as a web application)
- Windows (starting and using; well, I haven't tried 7 yet)
- .NET web pages (downloading)
and so on
I feel good :-)
For many applications the issue is getting developers to test with real world data sets before they are "done." For interactive applications, a baseline test machine/VM would be required.
Interestingly, I worked at a startup where we ended up doing this. I think it actually worked pretty well, but only because of the specific situation we were in. It was a mod_perl shop where class auto-reloading actually worked correctly. All the developers used vim as their IDE of choice (or used some remote editing software). The end result was that very little (if any) time was lost waiting for code to compile/reload/etc.
Basically, I like this idea IFF there is a negligible impact on the development cycle for all developers, and it only impacts runtime operation of your code. If your code is in any way compiled, preprocessed, etc., then you are adding time to the "fix bug; test; next" loop that developers work in.
From the interpersonal side, people were never forced to use the slow servers, but if you used the slow servers, you didn't have to do any of your own maintenance or setup. Also, this setup existed from the very beginning, I can't imagine trying to sell this to an established development team.
After rereading your original question, it occurs to me that one thing that perpetually terrifies me is development environments that differ from production environments. Why not use a VM for code execution that you can cripple for runtime without affecting the dev workstation? Lately, I've been using/loving VirtualBox.
I'm going to buck the trend here too.
Anecdote: I worked for a Dutch software development firm that upgraded 286 computers to 486es (yes, I'm that old). Within weeks, the performance of all of our in-house libraries dropped by 50% and bugs increased... A little research showed that people no longer thought through the code itself during the debugging process, but resorted to 'quick' successive code -> compile -> test -> fix cycles.
Related: when I started a subsidiary for that same company in the USA, I ended up hiring Russian programmers because they were used to PCs with fewer features/less power and were much more efficient coders.
I realize these were different times, and resources were much more scarce than they are today, but it never ceases to amaze me how, with all the progress that's been made on the hardware front, the net result seems to be that every step forward is negated by sloppier programming requiring higher minimum specs...
Hence... I feel programmers should be forced to test their applications on machines that do not exceed the 'Average Joe' computing power and hardware specs.
The problem isn't the developer building inefficient code on a fast machine, the problem is that you haven't defined performance metrics that must be measured against.
As part of the product requirements, there should be a specific performance target that can be measured on all computers, based on the required customer experience. Many websites (check SpecInt) allow you to relate your computer to other types of computers.
This is good for many reasons. It allows you to define minimum supported hardware more easily, so you can limit the number of customer complaints. We all know most software runs on most computers; it's just a matter of performance. If we set our specs so that people in the minimum-requirements range get reasonably acceptable performance, we limit customer complaints. And then, when a customer calls in, you can use the benchmarks to determine whether there really is an issue, or whether the customer is just unhappy with how the product is supposed to work.
The run-time speed on the developer's machine is irrelevant, unless you want to take revenge on your developers, or punish them for writing slow code and for ignorance of the target deployment environment.
As the manager, you should make sure the developers know the objective of the project, and always ensure they are on track. As for the target-machine issue we are discussing, it can be prevented by testing early and frequently on a slow machine, not by giving developers a slow machine to use and suffer with.
A slow run-time also slows down development, as most programmers use a code-and-test method. If the run-time is slow, their task will be slow too.
I like long compile times. It gives me more time to work on my resume.
I am convinced that having a slower computer for development results in faster code, but this comes at a price. The rationale is that I have experienced this first-hand: having a long commute, I bought a netbook to work on the train, a netbook slower than any computer I have bought in the last 5 years. Because everything is so slow, I see very quickly when something is unbearably slow on this netbook, and I become aware of slow spots much more quickly (no need to benchmark all the time). Working on a netbook really changed how I develop.
That being said, I am not advocating doing this, especially in a professional environment. First, it is demoralizing. The very fact that almost everybody here said the idea did not even make sense shows that programmers react badly to it.
Secondly, having everything slower means that things you may want to do on a fast machine (taking, say, 1 minute) are not really doable anymore on a slow machine, because of laziness, etc. It is a question of incentives.
Finally: the produced code may be faster, but it almost certainly takes longer to produce.
Hardware is less costly than time of development.
Most bottlenecks are in the database, not in the client PC, but that doesn't excuse skipping testing on machines slower than the developer's. Use testing tools to verify your optimizations.
Yes, of course! And making them work using only a sheet of paper and a pencil will result in even more efficient code (and obviously more portable code). Of course, only if the pencils aren't too sharp.
I can only imagine the profiling experience on a slow machine. Yikes.
In short. Hell No.
Also, have at least 4 GB of RAM: 2 GB for your main machine, 1 GB for a VM, and the other 1 GB for the extra memory the VM needs and to give yourself some leeway.
Two processors are also a must, so that if an app locks up or eats the CPU, the developer doesn't have to wait painfully to Ctrl-Alt-Del it.
The answer lies in the middle.
Have one fast box to run the dev environment (eg Eclipse)
And another slow box for testing the output. This is especially important for web apps.
Side-by-side screens, one for each box.
If the code is acceptable on the output box, it will be more than acceptable for most users.
Fast dev boxes make programmers lazy. For example, searching the DOM for an element every time it's needed, instead of finding it once and caching the result.
You'll really notice the difference on a slow box running IE 6....
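A minimal sketch of the caching point above. To keep it runnable anywhere, a counter simulates the cost of each element lookup instead of using a real DOM (`findElement` is an illustrative stand-in for `document.getElementById`):

```typescript
// Simulated element lookup: each call stands in for a full DOM search.
let lookupCount = 0;
const elements = new Map<string, { id: string }>([["status", { id: "status" }]]);

function findElement(id: string) {
  lookupCount++; // a real getElementById walks the document
  return elements.get(id);
}

// Uncached: three uses cost three searches.
for (let i = 0; i < 3; i++) findElement("status");
const uncached = lookupCount;

// Cached: search once, reuse the reference on every later use.
lookupCount = 0;
const statusEl = findElement("status");
for (let i = 0; i < 3; i++) void statusEl?.id;
const cached = lookupCount;

console.log(uncached, cached); // 3 1
```

On a fast box the uncached version feels identical; on a slow box with a large page, the repeated searches are exactly the kind of waste you'd notice.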
Ask the client if they would create more efficient business processes using slow PCs.
This theory is simplistic and outdated. It was true back in the day.
I remember spending a lot of time micro-optimizing my Turbo Pascal programs on my pre-Pentium computer. It made sense before Y2K, much less so ever since. Nowadays you don't optimize for 10-year-old hardware; it's sufficient to test-run the software to find bottlenecks. But as everyone here agrees, this doesn't mean that developer (and thus optimization) productivity correlates with giving developers outdated hardware.
Asking programmers whether programmers should get good hardware is like asking a fat man whether he likes food. I know this is a subjective exchange, but still... is the question worth asking us? :P
That said I of course agree with the majority: NO.
I'm tempted to say "no" categorically, but let me share a recent experience: someone on our project was working on code to import data into the database. At the time, he had the oldest PC in our group, maybe even in the entire organization. It worked fine with VS 2008, although a faster machine would of course have made the experience better. Anyway, at one point the process he was writing bombed during testing (and that was before it was fully featured): he ran out of memory. The process also took several hours to execute before it bombed. Keep in mind that, as far as we knew, this is what the users would have had to use.
He asked for more RAM. They refused, since he was getting a newer machine in 3-4 weeks and the old one was going to be discarded.
Keep in mind that this guy's philosophy on optimization is: "We have fast machines with lots of RAM" (his and a few other machines excluded, anyway), so why waste valuable programmer time optimizing? But the situation forced him to change the algorithm to be more memory-efficient so that it would run on his 2 GB machine (running XP). A side effect of the rewrite is that the process also ran much, much faster than before. Also, the original version would eventually have bombed even with 4 GB once more data was being imported; it was a memory hog, plain and simple.
Soooo... While generally I'd say "No", this is a case where the developer having a less powerful machine resulted in a better optimized module, and the users will benefit as a result (since it's not a process that needs to be run very often, he initially had no intention of optimizing it either way, so they would have been stuck with the original version if the machine had had enough RAM to run a few large tests...) I can see his point, but personally I don't like the idea of users having to wait 8 hours for a process to complete, when it can run in a fraction of that time.
With that said, as a general rule programmers should have powerful machines because most development is quite intensive. However, great care should be taken to ensure that testing is done on "lowest common denominator" machines to make sure that the process doesn't bomb and that the users won't be watching paint dry all day long. But this has been said already. :)
In reading the question, and the answers, I'm kind of stunned by the vehemence of the NO case.
I've worked in software development for 25 years now, and I can say without any hesitation that programmers need a bunch of things to develop good code:
A REASONABLE development environment. Not a dinosaur, but it doesn't need to be bleeding edge either. Good enough not to be frustrating.
A good specification (how much is done with NO written specification?)
Good and supportive management.
A sensible development schedule.
A good understanding of the users AND THE ENVIRONMENT the users will have.
Further, on this last point, developers need to be in the mindset of what the users will use. If the users have supercomputers and are doing atom-splitting simulations or something where performance costs a lot of money and the calculations run for many hours, then thinking about performance counts.
If the users have 286 steam powered laptops then developing and having developers do their development test on the latest 47 GHz Core i9000 is going to lead to some problems.
Those who say "give developers the best and TEST it" are partly right, but this has a big MENTAL problem for the developers: they have no appreciation of the user experience until it's too late, when testing fails.
When testing fails - architectures have been committed to, management have had promises made, lots of money has been spent, and then it turns into a disaster.
Developers need to think like, understand, and be in the zone of the user experience from day 1.
Those who cry "oh no, it does not work like that" are talking out their whatsit. I've seen this happen, many times. The developers' usual response is "well, tell the CUSTOMERS to buy a better computer," which is effectively blaming the customer. Not good enough.
So this means that you have several options:
Keep the devs happy and piss off the management, increasing the chances of the project failing.
Use slower machines for development, with the risk of upsetting the devs, but keeping them focussed on what really matters.
Put two machines on each dev's desk AND FORCE THEM TO TEST ON THE CLUNKER (which they won't do, because it is beneath contempt... but at least it's then very clear if there are performance problems in test).
Remember batch systems and punch cards? People waited an hour or a day for turnaround. Stuff got done.
Remember old unix systems with 5 MHz processors? Things got done.
Techno-geeks love chasing the bleeding edge. This encourages tinkering, not thinking. That's something I've had arguments about with junior developers over the years, when I urge them to get their fingers away from the keyboard and spend more time reading the code and thinking.
In development of code, there is no substitute for thinking.
In this case, my feeling is - figure out WHAT REALLY MATTERS. Success of the project? Is this a company making / killing exercise? If it is, you can't afford to fail. You can't afford to blow money on things that fail in test. Because test is too late in the development cycle, the impacts of failure are found too late.
[A bug found in test costs about 10x as much to fix as a bug found by a dev during development.
And a bug found in test costs about 100x as much to fix as that bug being designed out during the architectural design phase.]
If this is not a deal breaker, and you have time and money to burn, then use the bleeding edge development environment, and suffer the hell of test failures. Otherwise, find another way. Lower end h/w, or 2 machines on each desk.
Absolutely not. Give your programmers the best laptop money can buy, a keyboard of their choice, multiple great big screens, a private office, no phone, free soft drinks, all the (relevant) books they want, and annual trips to key tech conferences, and you'll get great results. Then test on upper- and lower-boundary hardware/software/browser/bandwidth combinations.
1) If I give a developer a slower machine, does that mean that the resulting code may be faster or more efficient?
No. Good Developers are spoiled. If they see they get bad tools at your company, they will go work somewhere else. (Good developers usually have the choice to go someplace else)
Boy I'll get clobbered for this, but there's something people don't want to hear:
Nature abhors a vacuum.
Of course programmers want faster machines (me included), and some will threaten to quit if they don't get it. However:
If there's more cycles to be taken, then they get taken.
If there's more disk or RAM to fill up, it gets filled up.
If the compiler can compile more code in the same time, then more code will be given to it.
One may be permitted to doubt that the extra cycles, storage, and code all serve to further gratify the end user.
As far as performance tuning goes, just as people put in logic bugs when they program, they also put in performance bugs. The difference is, they take out the logic bugs, but not the performance bugs, if their machine is so fast they don't notice.
So, there can be happy users, or happy developers, but it's hard to have both.
Isn't the answer to this question a resounding "NO", no matter whom you ask?
Ask your graphic artists if they should be given a slower machine.
Ask your writers if they would choose a slower machine over a faster one.
Ask your administrative assistants whether they would prefer a slower or faster machine.
All of them will say they'll be more productive with a faster machine.
I say developers need the best development system available - but that doesn't necessarily mean the fastest. It may well mean a modern but relatively slow system with all-passive cooling, to keep noise to a minimum, for example.
One thing - a development system should be reasonably new, and should absolutely have multiple cores.
An old PC may sound attractive in a show-performance-issues-early kind of way, but a Pentium 4, for example, may actually be faster (per core) than some current chips. What that means is that by limiting a developer to a P4 system (actually what I'm using now, though that's my personal budgeting issue), you aren't even approximating a modern low-spec machine: you get the wrong per-core speed and none of the multiple cores.
There are also issues with PCs that are too limited to support virtual machines well - e.g. for testing in multiple platforms.
Let's go against the flow here: YES. Or at least that's been the general wisdom in the industry for decades (except of course among developers, who always get angry when they aren't treated like royalty and get the latest gadgets and computers).
Of course there's a point where degrading the developer's machine becomes detrimental to his work performance, as it becomes too slow to run the applications he needs to get his job done. But that point is a long way down from a $10,000+ computer with 6 GB of RAM, two 4 GB video cards, a high-end sound card, 4 screens, etc.
On the job, I've never had a high-end machine, and it has never slowed me down considerably as long as it was decent (and the few truly sub-standard machines were quickly replaced once I showed how they slowed me down).