This is a copy of a small rant I just posted on the ATG_Tech Google Group.
Please note that ATG isn’t the only company doing this; Oracle does it, as do many others. I just think it’s wrong. :)
If you graphed processing power against software license cost for the same software module over time, you’d see a steady increase in processing power and a pretty flat cost line for years and years; then, once multi-core hit the server market, you’d see a huge jump in cost without any significant change in the climb of performance.
——–
I think this licensing model is a huge mistake for the customers.
CPU manufacturers changed course from developing faster and faster chips to developing more and more cores on a given chip at lower clock speeds. The reason is that it’s easier, in terms of cost and silicon manufacturing yield, to add cores and rely on the OS and applications to make use of the multiple cores. So ideally, the end user sitting in front of their computer sees a similar performance increase as chips go wider as they would have had chip manufacturers continued the megahertz wars, while the cost of that increased performance to the chip manufacturers is lower. (There was also an approaching barrier to how far you can shrink the die without moving to a whole other base material, along with power dissipation issues.)
Intel released their 2.2 GHz Pentium 4 in January of 2002. Current Intel dual- and quad-core processors don’t really exceed 3.0 GHz, and many new chips are still being released at 2.2, 2.4, and 2.6 GHz core speeds. So in 6 years, based on an 18-month Moore’s Law cycle (yes, I know Moore’s Law is about transistor density, not computation speed, but for the sake of estimation it’s pretty close to how the industry was progressing with clock speed before the shift to multi-core), in the alternate universe of single-core chips we’d expect to see 35.2 GHz chips. With a 24-month cycle it would be 17.6 GHz. At least I think that’s how the math works. At any rate, our current multi-core processors don’t provide any performance beyond what we should have expected from a single CPU by now, given the history of CPU performance increases.
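For anyone who wants to check that math, here’s a minimal sketch of the doubling arithmetic, using the 2.2 GHz baseline and 72-month span from above (the function name is mine, purely for illustration):

```python
# Extrapolate clock speed under a simple doubling model -- a stand-in for the
# clock-speed reading of Moore's Law used in the paragraph above.
def extrapolated_clock_ghz(base_ghz, months_elapsed, doubling_period_months):
    """Clock speed after doubling every `doubling_period_months` months."""
    return base_ghz * 2 ** (months_elapsed / doubling_period_months)

# 2.2 GHz in January 2002, then 6 years (72 months) of hypothetical scaling:
print(extrapolated_clock_ghz(2.2, 72, 18))  # 35.2 GHz on an 18-month cycle
print(extrapolated_clock_ghz(2.2, 72, 24))  # 17.6 GHz on a 24-month cycle
```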
The problem here is that customers of ATG (and other per-core-licensed products) are now paying up to 2X more in licensing costs for very similar (if not identical) levels of performance than they would if the CPUs had just gotten faster.
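To make that 2X figure concrete, here’s a toy illustration; the per-core price is completely made up, since real list prices vary by vendor and contract:

```python
# Hypothetical per-core license price -- purely illustrative, not a real quote.
PRICE_PER_CORE = 50_000  # USD

# Alternate universe: one fast single-core CPU handles the load.
single_core_license = 1 * PRICE_PER_CORE

# Reality: a dual-core CPU delivering roughly the same total throughput.
dual_core_license = 2 * PRICE_PER_CORE

print(dual_core_license / single_core_license)  # 2.0 -- same work, double the fee
```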
You could make the case that for the work of handling request threads, two 3.0 GHz cores perform a bit better than a single 6.0 GHz (or 17.6 GHz) core, but honestly it’s really hard to say, since we don’t have 6.0 GHz cores readily available to test ATG on. I’d be VERY surprised if the performance differed by more than 10%. And yet we have to pay far more for it.
Customers of ATG/Oracle/etc… are being penalized for the (legitimate) decisions of Intel and AMD.
——-