Archive for the ‘Cloud Computing’ Category

Public Cloud is Better than On-Premise and Netflix vs Zynga proves it

November 30, 2014

There is a common myth that, for very large companies, it makes sense to build their own data centers instead of using a public cloud.

In many cases I believe the opposite is true. For many companies, time to market, focus, and top-line growth are more critical than a theoretical 20% saving on cost.


Consider Netflix and Zynga. Both companies are large and smart enough to build their own private clouds.

Zynga chose to leave AWS and build their own cloud infrastructure. Netflix chose to stay on AWS, probably with a huge discount.

Their stock price might hint at which company made the right choice.

Netflix vs. Zynga

Netflix focused the company’s energy on evolving from a tech company into a production studio, with shows like “House of Cards”, “Arrested Development”, “The Killing”, and “Orange Is the New Black”. Zynga, meanwhile, was busy becoming a data center company instead of focusing on social games and preparing for the next big shift to mobile.

The more general point is that the bottleneck in most companies is people, and more specifically management attention. If everyone is busy building a private cloud and purchasing thousands of servers, no one has time to create a new business line.

The usual counterargument is that a private cloud becomes attractive at huge scale, because the number of DevOps people needed to build and run the infrastructure software has an upper bound.

This might be true, but there are very few people in the world who have already done it, and hiring takes a lot of time.

The other option is to hire people who are inexperienced, at least at this scale, and they will make mistakes.

Companies like Netflix and Zynga are supposed to have 70-90% gross margins. Reducing hardware cost from 20% to 15% of revenue is nice, but even that is not straightforward. In any case, it is much less important than losing, or creating, a new $1B on the revenue side.

Google Music – Quick Review

November 17, 2011
  • Sign-up and activation was extremely easy: a few clicks away from Google Plus and it’s done (I did not do the credit card part yet).
  • I started with a hard artist, “The Only Ones”, a fun, little-known sub-punk group.
  • Google Music has only two of their albums, compared to three in Rhapsody.
  • The interface of Google Music is cleaner, but Rhapsody has nice features like “Artist Radio”, which plays an automatic mix.

Google Music Review – The Only Ones

Rhapsody and Google Music – The Only Ones

  • Streaming in Google Music seems to start faster.
  • Searching for Red Hot Chili Peppers actually produced better results on Google. RHCP is a hard band, since they don’t allow most of their albums on streaming services.
  • With Pearl Jam, Rhapsody is the clear winner. Google Music holds only 13 albums while Rhapsody has over 40! Almost every recording they ever did, including the latest, PJ20.
  • Pricing on Rhapsody seems much more attractive to me: for under $15 a month there is unlimited streaming. I could not locate this option in Google Music yet. Buying songs one by one is so iTunes. Possession is so 1990s.

Cloud Computing in the Year 2000

October 8, 2011

I came across my old, unborn thesis proposal from the year 2000. The gist of the thesis was to evaluate the economic value of cloud computing 🙂 However, the proposal was rejected by my professor as not “academic” enough.

This is sort of funny, considering that it was supposed to be written in the Information Systems division of the Management faculty.

Lucky for me, I went back to the “normal” MBA program and got the degree with just seven more courses.

A few anecdotes, followed by the proposal itself (translated from the Hebrew original):

  • Google had a 4,000-server farm 🙂
  • ASP was the term for SaaS, AIP for IaaS. Not a big difference.
  • An 8 x single-CPU server configuration was up to 6 times more cost-effective than a single 8-CPU Dell server.

Providing AIP solutions with a mass of PC-based servers

In recent years the ASP field has enjoyed rapid growth and development. Although some argue that it is merely OUTSOURCING in nicer clothes, it seems that progress in the Internet and in broadband communications, as well as the cost of employing technology staff inside the organization, create a big opportunity for it. A relatively new field is the AIP – Application Infrastructure Provider. The AIP is responsible for preparing, planning, maintaining, securing, and managing the hardware and software infrastructure on behalf of the ASP. The AIP’s specialty is operating server farms that run applications at very high availability, while the ASP is supposed to specialize in the applications it serves to its customers. There is also a broader view that extends the AIP’s role to creating a framework of cooperation between the various ASPs that pay for its services; in this way the AIP can provide its customers added value beyond the computing services themselves.

This work sets out to examine the economic feasibility of using a large number of cheap PCs instead of multi-processor servers. It focuses on an application hosting (ASP) environment with a very large number of machines. To check whether there is any economic logic to the proposal, this document focuses on comparing the cost of an 8-processor server with that of 8 single-processor servers.

Motivation: the basic idea of this work comes from three factors. The first is the astonishingly low price of PCs in recent years; thanks to mass production and constant improvements in technology, the cost of an excellent PC has dropped to $1,500. Second, thanks to the freeware revolution, operating systems, file servers, mail servers, and web servers can be obtained at near-zero cost. The third factor is the existence of applications that are easy to parallelize and load-balance, together with the great popularity of application hosting services. The combination of these three factors creates a situation in which, for certain types of applications, the relative advantages of multi-processor servers disappear or shrink dramatically, and because of their high cost a significant economic advantage emerges for using a large number of PCs as an alternative.

I found good support for the feasibility of this strategy in an article published about GOOGLE’s hardware strategy. The article describes a strategy similar to the one described in this proposal, already implemented in the company on roughly 4,000 machines.

Advantages of the different architectures: multi-processor servers have several notable advantages over a large number of single-processor machines. The subject is, of course, broad and deep, and here we will concentrate only on its simplest aspects. In the SMP computing model there is a single server with one bus and shared memory, and anywhere from 1 to 64 processors. For simplicity we will not discuss CLUSTER, CC-NUMA, and MPP servers, which are less common today.

The alternatives:

For an initial comparison I chose to examine machines made by DELL[1]. This company was chosen because it offers precise, easy-to-obtain pricing for all of its machines and because reliable benchmark results exist for them. A sample check of IBM machines yielded similar results.

The application I chose for the comparison is hosting web servers that serve mostly static content[2]. For simplicity we can assume that we host only one site; this assumption is relatively easy to relax without a large effect on the results.

In the first configuration (A), a DELL 8450 server with 8 processors and 16GB of memory runs a single web server. The price of such a server is about $120,000.

In the second configuration (B) we use 8 DELL 2400 servers, each with 1 processor and 2GB of memory. Each server runs one web server. All the servers are connected through a switch to a RADWARE load-balancing appliance connected to the Internet. The RADWARE load balancer is a smart hardware-and-software appliance that distributes HTTP request load across several servers dynamically; it has very sophisticated capabilities, including geographic load balancing and awareness of conditions such as URL and COOKIES. The total price of this configuration is about $103,000.

The benchmark I chose is SPECweb. It measures the performance of HTTP servers and is run by the neutral benchmarking organization SPEC. According to this benchmark, configuration A delivers 3,000 SPECweb and configuration B about 6,000 SPECweb.

The cost-benefit ratio obtained from this comparison is 2.1 in favor of configuration B.

If we relax the requirements a bit and settle for machines with 768MB of memory in configuration B[3], the cost-benefit ratio becomes 4.4 in favor of configuration B.

In configuration B the DELL machines can be replaced with NAMELESS (white-box) machines. The price of such a machine can drop to about $2,000, and the cost-benefit ratio becomes 6.5 in favor of configuration B.

Overall, it appears that an economic saving of between 2.1x and 6.5x can be achieved by choosing configuration B. There are many improvements to be made to the model and many places where simplifying assumptions were used, but I tried to bias the comparison against alternative B in order to obtain lower bounds on the improvement.
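For readers who want to check the arithmetic, here is a minimal sketch of the cost-performance comparison, using only the prices and SPECweb scores quoted above. The exact 2.1x-6.5x figures in the text presumably reflect model details not itemized here, so the simple ratio below only reproduces the order of magnitude; the helper function is mine, for illustration.

```python
# Rough cost-performance comparison of the two configurations described above.
# Prices and SPECweb scores are taken from the proposal text; everything else
# is an illustrative sketch, not part of the original model.

def perf_per_dollar(specweb: float, price_usd: float) -> float:
    """SPECweb points obtained per dollar spent."""
    return specweb / price_usd

config_a = perf_per_dollar(specweb=3000, price_usd=120_000)  # one 8-CPU DELL 8450
config_b = perf_per_dollar(specweb=6000, price_usd=103_000)  # 8 x 1-CPU DELL 2400 + load balancer

print(f"Config A: {config_a:.5f} SPECweb per dollar")
print(f"Config B: {config_b:.5f} SPECweb per dollar")
print(f"Advantage of B over A: {config_b / config_a:.1f}x")  # roughly the ~2x lower bound quoted above
```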

An interesting conclusion already visible from this analysis is that the amount of memory required by the application is the factor that most strongly affects the cost of the configurations and the ratio between them. The reason is the non-linear increase in price as memory grows, especially above 1GB.

It seems that configuration B’s big advantage will be for applications that consume a lot of processing power and little to moderate memory.


[1] This comparison biases the results against the simple-computer approach, since that approach would actually be based on buying NAMELESS (white-box) machines, which are cheaper.

[2] CGI and the like can be allowed, but the assumption is that the machine does not depend on other machines.

[3] The memory in the single large server would be reduced correspondingly.

Are all software products created in just five countries?

August 19, 2011

It seems that software products are created in only a very few countries around the world: the United States, Israel, the United Kingdom, Canada, and Texas :).

There are 196 countries in the world, but most seem to have better things to do than to write software.

This is a critical piece of information because “Software is Eating the World.”

India has plenty of IT project outsourcing companies, Japan has 200 video game companies, and China has many hardware companies.

However, there seem to be very few software product companies in non-English-speaking countries (I count both Canada and Israel as English-speaking countries in the context of this blog).

Germany has SAP and Software AG. France used to have Business Objects, but it now belongs to SAP, so France is left with Dassault. Japan has Trend Micro, but that’s about it. China is not in a much better situation, with a total of 29 companies listed in Wikipedia.

Try to think of a famous Spanish software company (hint: an antivirus that looks like a bear).

I’m not sure why this is the case, but I can suggest a few ideas:

  • There are lots of software companies in other countries, but they are local to their markets and don’t bother to become big and international
  • Since programming languages are in English, English-speaking countries have a huge advantage
  • Software development grew out of universities, and the leading computer science universities are in these same countries
  • Software product companies have consolidated into a very few big companies, most of them American
  • Software products require a unique combination of strong engineering and “immature” first versions

Does anyone else have any explanation or counter data?

And maybe it is less important these days, as our industry is moving to a Software as a Service model in many use cases.

Fresh Look – Dozen Interesting Israeli Start-Ups

April 7, 2011
    Fresh Paint, Balfour Street, Tel Aviv

    I have assembled a pseudo-arbitrary list of interesting Israeli start-ups. These are mostly companies whose product I got to try and whose team I met, with some bias toward companies with real intellectual property in algorithms or products. They may not have much in common, and there are many more around, but these are worth watching.

  1. ToTango – Simple Idea. Wide Appeal. “New Wave” solution.
  2. Cotendo – The speed of light is constant. Akamai is too expensive. DNS is too crucial.
  3. XtremIO – SSD can be a game changer.
  4. Zerto – Smart guys. Track record. Stealth mode.
  5. TakaDu – Smart guys. Strong Algorithms. Strong Need. Out of the box.
  6. Panaya – Strong Algorithms. Pure Israeli. Sales 2.0. Sharing knowledge. Proven Results.
  7. WatchDox – A novel approach to managing documents and security.
  8. WorkLight – Portable Mobile Apps Make great sense. CEO.
  9. PrimeSense – Great Algorithms. Awesome product. Huge Potential.
  10. Plimus – Money has a wide appeal :). Great alignment for SaaS. Good API. Stands out in a confusing world.
  11. Kampyle – Simple product, wide appeal. Responsive to Customers. In the good sense.
  12. Snaptu – They were on the list before their exit 🙂 Same for Sentrigo
Orange and Carrot Juice in Tel Aviv

New Version Every Other Week – Part II

February 21, 2011

In part one I covered some principles that allow us to sustain a rate of a new version every two weeks.

In this post I’ll discuss some of the customer facing challenges and how to overcome them.

Enterprise software customers have grown to fear new product versions. Upgrades are about as joyful as Freddy Krueger appearing in children’s dreams. One would expect such customers to be very hesitant about code changing every two weeks.

In reality, this is not a big issue, for the following reasons:

  • Industry standards – Nobody knows what “version” Google.com, CNN.com, or PayPal.com is running. And frankly, nobody cares. It is a question of accountability, and if the service provider has accountability, the basic understanding is that upgrades are its problem to solve.
  • Trust – if the service performs well for a year, customers trust the update process.
  • Compatibility – obviously, external APIs have to be honored and kept backward compatible. But there is really no reason to change them very often.
  • Visibility – since there is no explicit external “version number”, customers are much less intimidated by changes.
  • Terminology helps – “updates to the platform” sounds much better than “major new release”. But terminology can’t be the only solution. Vendors have tried “HotFix”, “HotFix Accumulator”, “Release Candidate”, “Service Pack”, “Feature Pack”, and “Early Availability”, but customers still hate bugs. Branding alone is about as impressive as renaming the janitor Chief Operations Officer.
  • Industry standard 2 – even though Salesforce has only two releases a year, their SLA allows them four hours of downtime (upgrade time) every month.
  • Industry standard 3 – Chrome updates itself without asking the user for permission. Windows Update, which used to be tightly controlled by IT, has seemed to work very well for quite a few years.

  • Visibility 2 – concerned customers get a deep dive into the multiple safety mechanisms mentioned in the previous post.
  • Communication – as a SaaS provider we know which features are used and by whom. If we want to change a feature, we speak to these users before we commit the changes.
  • Isolation – we built an extremely strong isolation model in which multiple features can run simultaneously, using only a single code base. This capability allows setting a different “virtual release clock” for every customer (a minimal sketch follows this list).
  • The benefits – at the end of the day, these releases are done to answer customers’ business needs, not to refactor code. Customers get a lot of new functionality at an amazing pace, without paying for huge upgrades, software subscriptions, or professional services fees.
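The post does not describe how the “virtual release clock” is implemented, so the following is only a sketch of the general idea: each customer is pinned to a release number, and a feature lights up only once that customer’s clock has reached the release that introduced it. All names and numbers are hypothetical, not the actual mechanism.

```python
# Hypothetical sketch of a per-customer "virtual release clock" on a single code base.
# All feature names, customer names, and release numbers are made up for illustration.

# Release in which each feature first became available.
FEATURE_RELEASE = {
    "new_dashboard": 71,
    "bulk_export": 72,
}

# Each customer advances at its own pace; a cautious customer can lag behind.
CUSTOMER_RELEASE_CLOCK = {
    "acme": 72,      # sees everything in the latest release
    "initech": 70,   # still gets the previous behaviour
}

def is_enabled(feature: str, customer: str) -> bool:
    """A feature is visible only if the customer's virtual clock has reached it."""
    return CUSTOMER_RELEASE_CLOCK.get(customer, 0) >= FEATURE_RELEASE[feature]

if __name__ == "__main__":
    print(is_enabled("bulk_export", "acme"))     # True
    print(is_enabled("bulk_export", "initech"))  # False - old behaviour preserved
```

The point is that the same deployed code serves everyone, while the pace at which new behaviour becomes visible is a per-customer decision.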

To summarize, a mixture of process, technology, and adaptive product management turns frequent versions into Leonardo DiCaprio rather than Freddy Krueger. BTW, it is worthwhile reading how some companies deliver 10 versions per day.

New Version Every Other Week for Three Years?

February 16, 2011

I get up every morning determined to both change the world and have one hell of a good time. Sometimes this makes planning my day difficult.

E. B. White
US author & humorist (1899 – 1985)

Releasing a working version to customers every two weeks is fun.

  • It is fun for customers who use the features instead of watching fictitious “product road maps”.
  • It is fun for developers who see their work is actually used.
  • It is fun for the executives who can change the business priorities quickly.
  • It is fun for product managers who can measure actual usage.
  • It is fun for the R&D manager, as problems cannot be hidden for long.

In my company, we delivered 72 versions to customers in three years.


Here is one way to do it:

  • Hire top talent for development, QA, IT, and operations.
  • Deliver the product as a Service (SaaS). Upgrading one instance is much easier than upgrading 10,000.
  • Twice-weekly synchronization meetings, on Monday and Thursday. Monday is just team leaders; Thursday is all of R&D.
  • Invest early in QA automation. We invested $20,000 in Automation infrastructure at a very early stage.
  • Invest in Unit-Testing as much as possible.
  • Avoid branching. Branches are evil; merges are worse. One branch is good, two is the maximum.
  • Invest in the “Ugly stuff”. Deployment scripts, upgrade scripts, database consistency.
  • Constructive dictatorship. Every code change has a ticket. Every one. No exceptions. Really. (A minimal enforcement sketch follows this list.)
  • The first week is for coding. Then comes feature freeze: three days for QA and bug fixes. Then code freeze: two days for final QA and critical fixes only. Release on Sunday.
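The post does not say how the “every change has a ticket” rule was enforced, but one common way to do it is a Git commit-msg hook that rejects commits without a ticket reference. This is only an illustrative sketch; the JIRA-style ticket pattern is an assumption, not the team’s actual convention.

```python
#!/usr/bin/env python3
# Hypothetical commit-msg hook: save as .git/hooks/commit-msg and make it executable.
# It rejects any commit whose message does not reference a ticket such as PROJ-1234.
# The ticket format is an assumption for illustration only.

import re
import sys

TICKET_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. PROJ-1234

def main() -> int:
    message_path = sys.argv[1]  # Git passes the path of the commit message file
    with open(message_path, encoding="utf-8") as f:
        message = f.read()
    if TICKET_PATTERN.search(message):
        return 0  # ticket found, allow the commit
    sys.stderr.write("Commit rejected: every code change needs a ticket ID (e.g. PROJ-1234).\n")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```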

In the next post I’ll try to answer the tricky questions: what about longer features? How do you avoid scaring the customers? And more.

Commodity Clouds, IAAS and PAAS – Part II

February 2, 2011

In the first post we looked at some common mistakes resulting in premature “Commoditization” declarations.

In this post we will look at IaaS and PaaS in more detail.

In software, Nobel-prize-worthy discoveries are rare. Still, that does not mean all inventions are trivial. From a high-level, analyst point of view, Windows XP, Vista, and Windows 7 share the same technology. In the real world there are many differences: Vista was a complete failure even though it was “a commodity operating system”, while Windows 7 was well accepted.

These days we have people describing IaaS (infrastructure as a service) as a “dying dinosaur” because PaaS (platform as a service) is the new king. They must be kidding. Let’s reconsider the facts.

  • Force.com, the first PaaS, is not working out. I don’t know of any major company that built its entire successful business on top of it. The licensing, the performance, and the “governor rules” caused it to fail. What works nicely inside salesforce.com did not work well for the rest of the world. Maybe that is why they bought Heroku. Did any of their other acquisitions (DimDim, ManyMoon, Jigsaw, Etacts) run on Force.com?
  • VMforce.com does not exist yet, as far as I can tell; at this stage it is just a press release. When I read through the hype, there is no cloud portability at all, and it still looks like running Java on a single server with no scaling or multi-tenant capabilities. The home page seems quite stale.
  • Azure is not much better off. At its current stage, Azure is more similar to COM+ than to .NET. Microsoft has invested so much marketing money in Azure that people think it actually has something that can compete with EC2. In the real world, Microsoft has no solution for running publicly accessible virtual machines in the cloud. Its PaaS solution cannot run any of Microsoft’s own applications – SharePoint, Exchange, Office, SQL Server, and Dynamics all run on an internal IaaS solution, not on Azure PaaS. Wait 3-7 years for this to happen.
  • Has anyone heard of Facebook or Twitter using any PaaS platform? Funny, but they are not keen to run their services on their biggest competitor’s platform. I wonder why.
  • Even Amazon EC2, by far the market leader and innovator, has a long, long road ahead to complete the core feature set. Seriously: they added user management a few weeks ago, only through the API, after four years in production. That is probably the #1 feature any enterprise expects to find in any software service (see the sketch after this list).
  • No one has really solved the problem of WAN-based storage replication (despite bandwidth being “a commodity” 🙂). This is critical for IaaS success in the enterprise.
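For context, the API-only user management mentioned above is the feature that grew into AWS IAM. As an illustration only, and using today’s boto3 SDK (which did not exist when this post was written), creating a user and granting it permissions purely through the API looks roughly like this; the user name and policy choice are arbitrary examples.

```python
# Illustration only: user management purely through the AWS API (today's IAM via boto3).
# Assumes AWS credentials are already configured in the environment.

import boto3

iam = boto3.client("iam")

# Create a new user and give it read-only access through a managed policy.
iam.create_user(UserName="report-reader")
iam.attach_user_policy(
    UserName="report-reader",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Issue API credentials for that user.
keys = iam.create_access_key(UserName="report-reader")
print(keys["AccessKey"]["AccessKeyId"])
```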

The most expensive and longest effort is rewriting existing software. Trillions of dollars have been spent coding existing applications. Why would anyone rewrite the same business logic on a new platform if they don’t need the scale?

VMware succeeded because it delivers great economic benefits without requiring a rewrite. PaaS is probably the right way to go in the long run, but it might stay marginal for quite a long time, IMO. IaaS is off to a great start and will continue to evolve, but it is far from being a commodity once you look beyond the hype.


Does SLA really mean anything?

January 31, 2011

I believe most SLAs (Service Level Agreements) are meaningless.

In the world of Software as a Service and cloud computing, SLAs have become a very popular topic, but the reality is very different from the theory.

In theory, every service provider promises 99.999% availability, which means less than six minutes of downtime per year.

In reality, even the best services (Amazon, Google, Rackspace) have had incidents of eight hours of availability problems, which puts them at 99.9% availability, at best.

High availability downtime table, from Wikipedia
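As a quick back-of-the-envelope check (this is not the Wikipedia table itself, just the arithmetic behind it), the downtime budget for each availability level can be computed directly:

```python
# Back-of-the-envelope downtime budget per year for common availability targets.

MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} availability -> {downtime_minutes:.1f} minutes of downtime per year")

# 99.900% -> ~525.6 minutes (about 8.8 hours)
# 99.990% -> ~52.6 minutes
# 99.999% -> ~5.3 minutes (the "less than six minutes" quoted above)
```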

Moreover, the economics just don’t make any sense. SLAs cannot replace insurance.

Imagine the following scenario.

E-commerce site “MyCatsAndSnakes.com” builds its consumer site at “BestAvailabilityHosting”, which uses networking equipment from “VeryExpensiveMonopoly, Inc.”.

If MyCatsAndSnakes is unavailable, the site owner, “Rich Bastardy”, loses $100,000 per hour of downtime.

Rich pays BAHosting $20,000 per month, and they promise him 99.999% availability.

BAHosting bought two core routers in a high-availability configuration, connected to three different ISPs. Each router costs $50,000, and Platinum support is another 30% per year, so the total cost is $130,000 for the first year.

One horrible day, the core routers hit a software bug and traffic to MyCatsAndSnakes dies.

Since both routers run the same software, high availability does not help resolve the issue, and VeryExpensiveMonopoly’s top developers have to debug the problem on site. After eight hours of brave effort, cats and snakes are being sold online again.

Try to guess the answers to the following questions:

  • How much money did Rich lose? (Hint: $100,000 * 8 = $800,000.)

  • How much money would Rich get from BestAvailabilityHosting? (Hint: (8 / (24 * 30)) * $20,000 ≈ $222 – see the sketch after this list.)
  • How much money would BAHosting get back from VeryExpensiveMonopoly? (Hint: $0.)
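To make the gap concrete, here is a tiny sketch of the numbers in this (entirely fictional) scenario:

```python
# The gap between Rich's actual loss and the pro-rata SLA credit,
# using the fictional numbers from the story above.

outage_hours = 8
loss_per_hour = 100_000          # Rich's revenue loss per hour of downtime
monthly_fee = 20_000             # what Rich pays BAHosting per month
hours_in_month = 24 * 30

actual_loss = outage_hours * loss_per_hour                     # $800,000
pro_rata_credit = monthly_fee * outage_hours / hours_in_month  # ~$222

print(f"Actual loss:     ${actual_loss:,.0f}")
print(f"Pro-rata credit: ${pro_rata_credit:,.0f}")
print(f"The credit covers {pro_rata_credit / actual_loss:.4%} of the loss")
```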

The networking vendor, VeryExpensiveMonopoly, does not give any compensation for equipment failure. This is true for all hardware and software vendors.

They don’t even have an SLA for resolution time. The best you can get with Platinum support is a commitment on “response time”, which is not a great help.

As a result, the hosting provider cannot have a back-to-back guarantee or insurance for networking failures.

The hosting provider limits its liability to the amount of money it receives from Rich ($20,000 per month), which makes sense.

Moreover, the service provider only compensates pro rata, so the sum becomes even more negligible.

But that does not help Rich at all, as his losses are far bigger. He lost $800,000 of cats and snakes deliveries to young teenagers across Ohio.

The real answer, IMO, is insurance. If Rich really wants to mitigate his risk, he can buy an insurance policy for such cases.

The insurance company should be able to assess the risk and apply the right statistical cost model. Asking a service provider to do it is useless.

SLA’s might be a good way to set mutual expectations, but they are certainly not a replacement for a good insurance policy or a DRP.

Here is an interesting review of CRM and Salesforce.com’s (lack of?) SLA. And here are Amazon’s SLA for EC2 and Rackspace’s.

Amazon: “If the Annual Uptime Percentage for a customer drops below 99.95% for the Service Year, that customer is eligible to receive a Service Credit equal to 10% of their bill”

GoGrid promises a 10,000% credit, but “No credit will exceed one hundred percent (100%) of Customer’s fees for the Service feature in question in the Customer’s then-current billing month”.

Rackspace promises 100% availability, but “Rackspace Guaranty: We will credit your account 5% of the monthly fee for each 30 minutes of network downtime, up to 100% of your monthly fee for the affected server.”

Again, I don’t think one can blame these service providers, but the gap between perception and reality seems major.

There are three real answers for customers who want an SLA from a service provider:

1) It would still be better than on-premise

2) How much are you willing to pay for extra availability?

3) We have a great insurance agent 🙂



Commodity Clouds? You must be kidding

January 29, 2011

“A commodity is a good for which there is demand, but which is supplied without qualitative differentiation across a market. Commodities are substances that come out of the earth and maintain roughly a universal price.” (Wikipedia)

I find it hilarious when some people describe clouds or the IaaS market as a “commodity”, or even worse – “legacy”.

It is a common mistake that I see again and again from people who don’t have a clue what they are talking about, or who simply ignore the little details.

These are the little details you might call “reality”.


The first point I want to make is that “commodity” is often misinterpreted as “easy to produce” or “low margin, bad business”.

Take a look at oil production. While the end product has no qualitative differentiation, its production requires some of the most sophisticated technology available. Drilling oil from the bottom of the sea necessitates huge investments, great science, and amazing technology.

Moreover, six of the ten biggest companies in the world are in the oil production sector, so maybe it is not such a bad business to be in.

Another example is x86 chips. The x86 architecture is more or less the same as it was 30 years ago. It is available universally and there is no qualitative differentiation between different items. However, building a new fab costs around $2B, and Intel is one of the most successful companies on earth. No one would argue that there is no intellectual property in chip design.


The second important point is that vision is nice, but reality is nicer. A friend told me that in the late ’90s the technologists at Check Point thought that intrusion detection was the wrong direction to follow. They thought that comparing attack signatures was reactive, and that passively monitoring attacks did not help the customer.


While they were right in their long-term vision, ISS sold hundreds of millions of dollars’ worth of IDS software in the meantime. Moreover, when the market shifted to IPS (intrusion prevention systems), ISS had good, solid technology to start from, which took Check Point five more years to match. As my father, the CFO, used to say, “The markets fix themselves in the long run, but in the long run we all die.” Technology adoption cycles are longer than they seem.

Some analysts look too far ahead. For example, two years ago everyone talked about hypervisors being commoditized: Microsoft and Citrix would give them away for free, KVM is free anyway, and VMware would have to follow. Surprisingly, in the last 12 months VMware sold more than $2B worth of, guess what, hypervisors.

Why are 200,000 customers being so silly, paying so much money when the analysts say otherwise?

For one, because Microsoft Hyper-V does not yet support NFS, which is probably used by 40% of customers. Because Hyper-V cannot handle memory over-commit, which means you’ll get about 30% less capacity from the same hardware. And because VMware Virtual Center is two generations ahead of Microsoft’s management server, and there is not much use for a hypervisor that can’t be managed. See a nice post from 2008 about it.

So are the analysts the stupid ones?

Of course not. But they have not installed a hypervisor in the last five years. Furthermore, they are probably right in the long run: three years from now (five years from 2008 🙂) hypervisors might become a commodity. But the pace is much slower than it seems at first.

Remember how in 2000 broadband Internet was just around the corner? We’re in 2010 and only South Korea has upload and download speeds above 20Mbps. More on the commodity subject, especially in clouds, in my next post.