Triopoly

After a longish hiatus from blogging, I’m back!

It seems either too difficult or way too easy to analyse the current business dynamics of the big computer semiconductor manufacturers, namely Intel, AMD and nVidia.


Let’s see how it all started…

AMD, the second-highest desktop CPU market share holder and the performance leader until the first half of 2006, bought ATI, the highest GPU market share holder but not the performance leader, in late 2006. The deal forced AMD to compete directly with both Intel and nVidia (Chipzilla and Graphzilla in theInquirer.net lingo), and it yielded nothing but shame for AMD until the end of 2007. During this time, AMD lost both its CPU performance crown and its highest CPU market share crown. AMD reportedly bought ATI for some $5 billion, yet by the end of 2007 the net worth of AMD (the previous AMD and ATI combined) was $3.83 billion, including its cash in the bank. But AMD is AMD. It bounces back after all hope has faded, and its managers boyishly rejoice, once in five years, over the little pleasure of beating Intel in performance numbers.

Intel, the CPU performance and market leader, bought Havok, the market leader in software physics game engines. This gave Intel a killer-app solution for its multi-core CPU fanaticism. Intel immediately killed Havok’s plan to release an SDK, HavokFX, which was supposed to run on conventional GPUs and bring improved, more detailed physics effects to games. A sad point in time for all the gamers out there who were drooling to put their graphics card to better use than just, well, graphics rendering.

The most interesting of these acquisitions was nVidia buying Ageia, the world’s only hardware physics processor manufacturer and the second-highest software physics game engine market holder. This brought nVidia into some competition with Intel because of the Havok deal. The real tragedy, in fact the tragedy of tragedies, happened when an nVidia spokesperson announced that they would not produce hardware physics processors, more generally dubbed PPUs. [Since this was reported by fudzilla.com, and considering Fudo’s rumour mill, I would like to think that nVidia is still strategizing about Ageia and that nothing about discontinuing the PPU has actually been announced.]
However, in contrast to the Anandtech.com analysis, I believe that nVidia, being a semiconductor hardware manufacturer, needs to diversify its hardware portfolio with whatever it has in its arsenal. It could establish a monopoly in the PPU market if it put its marketing muscle and game-developer relationships behind the Ageia offering. It has nothing to lose and everything to gain.

Now a new broth is about to be cooked: nVidia’s rumoured acquisition of, or possible merger with, VIA.
Why?
VIA is the only third x86-64 CPU manufacturer, and its CPU designs come from its US division Centaur, which previously produced Cyrix CPUs. Since buying VIA would not transfer the x86 licence from VIA to nVidia, per VIA’s deal with Intel, nVidia and VIA might instead have to become partners in crime. This would bring nVidia into the same league as Intel and AMD as a premium semiconductor manufacturer.

Acquisitions aside, Intel has announced multiple times that it is working on a project code-named Larrabee. This might turn out to be a CPU plus a dedicated GPU, and/or a CPU+GPU multi-core design. Only time will tell.

We all know that Intel, nVidia and AMD (ex-ATI) already manufacture motherboard chipsets. Considering the strategies above, it seems all three BIGG chippery firms are going to compete fiercely in the CPU, GPU, chipset and PPU domains. Only AMD will miss out on a PPU or software physics engine offering. However, nVidia seems open to bringing the Ageia software SDK to both nVidia and AMD GPU platforms, thus accelerating physics on AMD’s hardware too.

2010 will be an exciting year, when the three semiconductor giants will have their full arsenals armed and ready. By 2012, I expect one of the three to lose the battle and be bought by another. This is pure speculation, based on the observation that “nature has never allowed triopolies to settle peacefully”.



Ruby and threading for multi-cores

With the CPU manufacturers’ focus on multi-core architectures rather than GHz boosts, a lot of debate and research has started on scaling existing software applications to take advantage of this development.

Let’s consider Moore’s Law for a moment. It is not really a mathematical law, just a rule of thumb that the CPU manufacturers, i.e. Intel and AMD, follow. The software industry, expecting that much growth in processing power X months down the road, therefore optimizes its software to match the performance of the hardware that will be available when the solution launches. This growth pattern is followed by game developers, DBMS vendors, corporate solution vendors like SAP and Siebel, and others.

Optimizing software to match raw GHz processing power was easy, but matching it to multiple cores (2/4/8) seems a rather difficult task, because today’s software languages don’t support automatic parallelization. That is to say, a lot of manual coding effort is required to keep parallel tasks optimized, and then there is the insurmountable work of synchronizing the manual threads: avoiding deadlocks and thread starvation, keeping critical sections safe, and the like. This is the situation with all the dominant modern languages, such as Java and C#.
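As a small illustration of the manual synchronization burden described above, here is a minimal Ruby sketch using the standard Thread and Mutex classes (the thread and iteration counts are arbitrary values chosen for the demo, not anything from a particular application):

```ruby
# Four threads increment a shared counter; the Mutex guards the
# critical section so the read-increment-write sequence cannot
# interleave across threads and lose updates.
counter = 0
lock = Mutex.new

threads = 4.times.map do
  Thread.new do
    1000.times do
      lock.synchronize { counter += 1 }
    end
  end
end

threads.each(&:join)
puts counter  # => 4000 when every increment is protected
```

Every one of these locks is placed by hand; forget one, or take two of them in inconsistent order from different threads, and you get lost updates or a deadlock, which is exactly the coordination work the languages of today leave to the programmer.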

On the other hand, it’s time for another software language shift. Industry consultants predict that roughly every 10 years the dominant software language gets replaced by another. Case in point: Cobol, Fortran, C++ and now Java.

The most appropriate successor to Java seems to be Ruby.

I found one of *the* most comprehensive coverages of threading in Ruby and the latest trends and research.
The problem, as I see it, is that there is no coherent line of thought developing; the research isn’t heading anywhere concrete.
Ruby MVM (multiple virtual machines) is posed as the best option available, but issues with this approach are acknowledged, and any successful work on it is hard to find.

It seems threading in Ruby will remain experimental in nature, and may only bear fruit with the advent of some other language that solidly implements the threading experiments done on the Ruby side.
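Part of why Ruby threading feels experimental is worth making concrete: under MRI, the mainstream Ruby interpreter, a global interpreter lock means only one thread executes Ruby code at a time, so CPU-bound work gains nothing from extra threads on a multi-core machine (this is what proposals like MVM try to work around). A sketch, with an arbitrary workload chosen just for the demo:

```ruby
# CPU-bound work split across two threads. Under MRI's global
# interpreter lock the threads run Ruby code one at a time, so this
# takes roughly as long as doing both computations serially; the
# threads are still correct, just not parallel.
def busy(n)
  s = 0
  n.times { |i| s += i }
  s
end

results = 2.times.map { Thread.new { busy(100_000) } }.map(&:value)
puts results.inspect  # both threads compute the same sum of 0..99_999
```

Threads like these are fine for I/O-bound work, where the lock is released while waiting, but they cannot scale a computation across the 2/4/8 cores the hardware now offers.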
