Posts Tagged ‘Virtual Cloud’

Lady Ga-Ga or: How I Learned to Stop Worrying and Love the Facebook

January 30, 2010

The western world ended quite suddenly.

The news, with pictures, that Lady Ga-Ga was actually a man was first reported by Steve Jobs as he presented Apple's new iPlot gadget at a secret location.

127 journalists immediately tweeted the story, and it was soon re-tweeted by 13,068 followers.

[Image: Lady Ga-Ga performing]

The tweets were automatically converted into 1,675,042 LinkedIn notifications, which turned into 300,000 automatic WordPress updates.

Then Google picked the news up and sent alerts to 1,020,068 Lady Ga-Ga followers and 1,002,903 day traders.

However, the big problem started as the new automatic "Google Alerts" to "Facebook comments" mechanism kicked in.

Since Facebook comments automatically generate Twitter alerts, a vicious positive feedback cycle was created.

Twitter->LinkedIn->WordPress->Google->Facebook->Twitter.

Soon, 95% of the computing power of the western world was targeted at breaking the (false) news to the same people again and again.

When New York lost its electric power due to the high consumption by data centers, Google decided to cancel Google Wave and create a super algorithm to solve the problem.

They took five of their Nobel prize winners, who had been working on JavaScript optimizations, and asked them to solve the problem.

Google's geniuses quickly realized the problem was similar to the "bipartite graphs with no induced cycle of length > 6" problem, but just when they were ready to solve it, the network on their T-Mobile Android phones crashed. The only person to hear about Amazon's EC2 explosion was President Obama, on his secure BlackBerry.

As San Francisco, Tel Aviv, Rome and London lost all electric power, mobs started raiding the food supplies. Unfortunately, they starved after two days because all of the food was organic.

Luckily, China was saved, as Google decided to block them, or vice versa.


Bar Refaeli, DNA Sequencing and Cloud Computing

December 7, 2009

Much like Bar Refaeli and Leonardo DiCaprio, DNA sequencing and cloud computing go hand in hand.

[Image: Bar Refaeli]

I had a very interesting conversation with a friend yesterday about DNA sequencing and cloud computing.

My friend is leading one of the largest cancer genome research projects in the world (and yes, he is extremely bright).

It appears that there is great progress in DNA sequencing technology, based on chemical processes. The pace is much faster than Moore's law. As a result, budgets are shifting from the chemistry side to the computational side.

In the past, the budget would be 90% for biology and 10% for analyzing the data coming out of the DNA.

As sequencing costs have fallen by orders of magnitude, there is more and more data (a single patient's genome data is one terabyte).

The more data, the more computing power is needed to analyze it, and hence the budget split becomes 50-50.

Each computation can take up to 24 hours, running on a 100-core mini grid.

[Image: DNA sequencing at a genome center]

In theory, such tasks are great for cloud computing IaaS (Infrastructure as a Service) platforms, or even PaaS (Platform as a Service) solutions with MapReduce capabilities. This EC2 Bioinformatics post provides interesting examples.
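
To make the MapReduce idea concrete, here is a minimal, self-contained Python sketch (my own illustration, not from that post): it counts k-mer occurrences across sequencing reads, the kind of embarrassingly parallel job that maps well onto such platforms. The reads and the k-mer length are toy values.

```python
from collections import Counter
from functools import reduce

# Toy sequencing reads; a real data set would be around 1 TB per patient.
reads = ["ACGTACGT", "CGTACGTA", "TTACGTAC"]
K = 4  # k-mer length, chosen arbitrarily for the example

def map_read(read):
    """Map step: count every k-mer contained in one read."""
    return Counter(read[i:i + K] for i in range(len(read) - K + 1))

def reduce_counts(a, b):
    """Reduce step: merge partial k-mer counts."""
    return a + b

# On a real MapReduce platform the map calls run on many nodes;
# here we simulate the whole pipeline serially.
total = reduce(reduce_counts, map(map_read, reads), Counter())
print(total.most_common(3))
```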

In practice there are three main challenges:

  1. Since cancer research facilities need this server power every day, it is cheaper for them to build the solution internally.
  2. To make things even more challenging, the highest cost in most clouds is the bandwidth in and out of the cloud. It would cost $150 to store one patient's data on Amazon S3, but $100-$170 to transfer it into S3 (a rough back-of-envelope follows this list).
  3. Even if the cost gap can be mitigated, there can be regulatory problems with the privacy of patient data. After all, it is one person's entire DNA we are speaking about. Encryption would probably be too expensive, but splitting and randomizing the data can probably clear this hurdle.
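
Using the post's numbers, a rough back-of-envelope makes the bandwidth problem concrete. The per-core-hour compute price below is an assumption for illustration only, not a quoted rate.

```python
# Rough per-patient cost split, using the figures from the post.
storage_per_patient = 150.0              # $ to store 1 TB on Amazon S3
transfer_per_patient = (100 + 170) / 2   # midpoint of the $100-$170 range
core_hours_per_run = 100 * 24            # 100-core grid, up to 24 hours
assumed_core_hour_price = 0.10           # $; hypothetical on-demand rate

compute_per_run = core_hours_per_run * assumed_core_hour_price
print(f"storage:  ${storage_per_patient:.0f}")
print(f"transfer: ${transfer_per_patient:.0f}")
print(f"compute:  ${compute_per_run:.0f} per full analysis")
# Transfer alone rivals storage: moving a terabyte into the cloud
# for every new patient is what kills the economics.
```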

So, where do clouds make the most sense for this kind of biological research?

One use case is testing a new, improved algorithm. Then the researchers want to run the algorithm on all the existing data, not just the new data.

They need to compare the results of the new algorithm with the old algorithms on the same data set. They also need to finish the paper in time for the submission deadline :).

In such scenarios there is a huge burst of computation needed on static data in a very short period of time. Moreover, if the data can be stored in a shared cloud and used by researchers from across the world, then data transport would not be so expensive in the overall calculation.
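
To see why the burst scenario is different, here is a hedged sizing sketch. Only the 2,400 core-hours per run figure comes from the post (100 cores for 24 hours); the number of patients, the deadline and the instance size are assumptions for illustration.

```python
import math

patients = 500                      # assumed archive size
core_hours_per_patient = 100 * 24   # from the post: 100 cores x 24 hours
deadline_hours = 7 * 24             # assumed: one week to the deadline
cores_per_instance = 8              # assumed instance size

total_core_hours = patients * core_hours_per_patient
cores_needed = total_core_hours / deadline_hours
instances = math.ceil(cores_needed / cores_per_instance)
print(f"{total_core_hours:,} core-hours -> {instances} instances for a week")
# About 7,143 concurrent cores, i.e. 893 instances: a cluster no lab
# keeps in-house, but one a cloud can rent for exactly one week.
```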

These ideas are fascinating and will hopefully drive new solutions, cures and treatments for cancer.

[Image: genome visualization]

Horses and Monkeys

April 16, 2009

We had some internal debate about whether the following pictures would work well as a marketing campaign. I came up with the concept and our graphic artist created it.

I was giving them out as postcards next to our booth at VMware Partner Exchange.

Some initial conclusions:
1. Animal pictures do attract attention.
2. Monkeys are better than horses.
3. People like humor.
4. If you hand out something to people, they stop and look at it.
5. If people stop to think, you have time to get their attention.

[Image: "Free the sales monkey" postcard, VMware Partner Exchange]

[Image: "Unleash the sales horses" postcard, VMware Partner Exchange 2009]

[Image: IT Structures vCloud booth, VMware Partner Exchange]

VMworld 2009 – Cannes and Paris

February 27, 2009

Some pictures from the IT Structures booth at the show, and views of Paris while preparing for the next conference.

Starting with a cheap trick to get the audience's attention.

[Image: Nice car of the future in Paris, not like my Peugeot 206]

[Image: A girl in Paris in winter]

Moving on to the important keynote we attended.

[Image: Paul Maritz and Zvi Guterman at the VMworld 2009 vCloud keynote]

[Image: IT Structures on the big screen at the VMworld vCloud keynote]

[Image: IT Structures at the VMworld 2009 vCloud pavilion]

[Image: Ophir and Zvi at the IT Structures booth, VMworld 2009 vCloud pavilion]

The unique requirements of cloud-based enterprise applications

February 9, 2009

We just published the white paper I was working on, on the IT Structures web site.

[Image: Not so virtual cloud, but virtually nice]

Here is the abstract; the full paper can be found here. If you want to get straight to the technical part, jump to the requirements section below.

  • The unique requirements of cloud-based on-demand multi-tenant applications
  • Limitations of existing building blocks in virtualization and enterprise software technologies
  • Introducing an intelligent technology layer to provide automation of environment setup & provisioning, elasticity, resource allocation and scalability

The Challenge: Virtual Labs for Sales and Training

The days of “blind” purchasing of enterprise software and hardware solutions based on vendor promises alone are a thing of the past.

Customers have universally adopted a "try before you buy" approach, demanding not only a generic evaluation of the solution prior to purchase, but also a proof-of-concept (POC) implementation using their own data, integrated with their own applications and in their own environment. Equally, customers want to invest the minimum effort in such POCs, whose setup is often more time- and resource-consuming than the actual evaluation process.

Vendors consequently find themselves providing POCs and pilot projects with a significant increase in cost of sales and a lengthened sales cycle: tying up hardware inventory, wasting sales engineers’ time at customer premises and inflating travel costs. The same often applies to post-sales training, where the vendor must provide staff for training and the cost is borne by either the vendor or the buyer, or both.

Thankfully, the convergence of virtualization and cloud computing is making POCs, interactive demos and post-sales training easier and more accessible, at least in theory.

Since any network environment, server or application can run as a VM, and since cloud infrastructure can run such VMs (as well as real hardware) on demand as a service, it is logical that the two can be combined to deliver scalable, multi-tenant, on-demand provisioning and management of virtualized POCs, demos and training. Such a solution would deliver “virtual engagement” of customers during pre- and post-sales stages and reduce the expensive, lengthy real-world sales processes.

Unfortunately, although the base infrastructure and building-block components are available, assembling them to deliver virtual sales engagement and training is not at all straightforward. This is where IT Structures steps in.

This white paper explains the complex requirements for on-demand virtual engagement delivered as a cloud-based service, and how IT Structures developed its ground-breaking orchestration technology in order to provide it in a scalable, flexible model.

The Requirements

Cloud-based solutions must fulfill at least all the requirements expected from traditional data center management tools, software-as-a-service solutions and modern virtualization environments.

The core requirements are:
1. Complexity and Realism – The ability to build and run any enterprise application or appliance in a multi-server environment, with a complex networking topology that can be connected to the internet and to on-premise data centers.

2. Instant Gratification – Trying out a new environment should be fast and easy. As a result, the performance of the system must be excellent and it must not require any dedicated client installation. In an elastic production environment it is critical to have a frictionless solution because of the extremely frequent changes.

3. Multi-Tenant and Tiered – The system must support multiple software vendors working at the same time; it must allow multiple enterprise customers to work at the same time on an identical but separate copy of the environment. The system must ensure complete privacy and security for each user. The service must ensure that failures are confined to a specific environment and do not propagate across the system.

4. Replication – The system must be able to replicate a template of an IT environment and create hundreds of new customized running instances on the fly. This is critical for production, training and demo solutions and is at the core of the cloud concept (see the sketch after this list of requirements).

5. Internet Enabled – All functionality must be available over the internet. The service must allow secure access to environments over the web on the one hand, and simulate private networks on the other hand. All instances should run concurrently and be accessible in the cloud.

6. Self Service – The service is geared towards non-technical as well as technical users. It must abstract complex, composite IT operations into simple, web-based, single-click business operations.

7. Availability – The service must be able to recover from failures automatically, maintain exceptional uptime and provide self-healing and recovery functionality across all its components. Even when certain tasks fail, the service should optimize its resources to provide the highest service levels to the maximal number of customers.
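
As a purely hypothetical illustration of requirements 3 and 4 (multi-tenancy and replication), a template-cloning interface might look like the sketch below. Every name in it is invented; it is not IT Structures' actual API, which the full paper describes.

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class Environment:
    """One tenant's private, isolated copy of a template environment."""
    tenant: str
    template: str
    env_id: str = field(default_factory=lambda: uuid4().hex)

class Orchestrator:
    """Hypothetical sketch: clone a golden template into per-tenant
    instances while keeping tenants fully separated."""

    def __init__(self):
        self._envs = {}

    def provision(self, tenant: str, template: str) -> Environment:
        # Replication (requirement 4): a new running copy on the fly.
        env = Environment(tenant=tenant, template=template)
        # Isolation (requirement 3): each tenant sees only its own list.
        self._envs.setdefault(tenant, []).append(env)
        return env

    def environments_for(self, tenant: str):
        return list(self._envs.get(tenant, []))

orch = Orchestrator()
for customer in ("acme", "globex", "initech"):
    orch.provision(customer, template="crm-demo-v2")
print(len(orch.environments_for("acme")))  # 1 -- tenants stay separate
```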

To read how we achieved the implementation, you can get the full paper or just send me an email.

Uncranking Sales Engineers Training

February 6, 2009

My experience shows that it is quite hard to win over European sales engineers during a sales kick off.

While the American ones tend to be enthusiastic and join the vision-future-roadmap, the Europeans are usually skeptical, technical and knowledgeable.

Two interesting posts made me think that working with the actual product at the sales kick off works well for all parties. The SEs gain trust in the product, and the product manager gets real feedback from tens of experts in real time.

Sales engineers don't have to stare at five-hour-long boring presentations, and the product manager does not have to get them approved by the CMO.

Shameless plug starts here – while it is hard for most companies to get 50 labs up and running for the two days of a sales kick off, those using IT Structures have actually done it recently, on multiple continents, with no hardware needed, based on our elastic virtual cloud.

Maybe this can make everyone less cranky 🙂

Hardware, Software and (Virtual) Appliances Myths – Part Three

December 9, 2008

[Image: San Francisco, virtual]

In Part One I examined some myths about hardware and software appliances and showed that appliances are mainly packaged software components. In Part Two I described why hardware appliances became so successful in recent years, and where.

In this part I'll try to show how virtual appliances combine the best of both worlds: the benefits of both software and hardware appliances, with the extreme flexibility of virtualized computing.

Looking back to 2002, Check Point released SecurePlatform – an appliance on a CD, also known internally by the cool name "Black CD". At the time, Check Point's "real" hardware offering was not very successful, and it relied on Nokia appliances to compete with Cisco and NetScreen.

NetScreen appliances, and appliances in general, became more and more successful. Nokia produced excellent appliances as well, but they were typically sold at a very high premium, chiefly for the brand.

SecurePlatform was invented in order to offer customers a cheaper option. SecurePlatform is basically a bootable CD that one inserts into any x86 server; it formats the hard drive and installs a secure, shrunk-down Linux operating system with all of Check Point's software products pre-installed.

The idea is to get most of the "real" appliance advantages (ease of install, drivers, secure OS, fast boot time, optimized performance) with the advantages of software (flexibility, modularity, familiar shell and interfaces) at a very cheap hardware price (the customer can choose his own box and use existing x86 agreements and discounts). It also allows the customer to grow capacity easily without complex upgrades.

Over time, SecurePlatform became very successful and turned into the customers' favorite deployment choice. While in 2003 it still lacked a lot of appliance features (image management, backup and recovery, a web-based interface), those were added over the years.

It is important to note that SecurePlatform-based appliances, like other CD appliances, still had some gaps compared to "real" appliances.

1. The form factor is still that of a standard PC. With 1U servers becoming the norm this was less of an issue, but the number of network interfaces was still a problem in some cases.

2. Keeping up with driver compatibility across all the x86 vendors was very hard. When Dell/HP/Lenovo release new firmware or drivers they don't bother to update anyone, and back-porting Linux-based device drivers is not fun at all. The implication is that the appliance is not as generic as it would seem.

3. There is no single point of support for hardware+software.

4. There is no "real" hardware acceleration for the cases where it is really needed.

To overcome some of these gaps, in 2005 Check Point started selling hardware appliances based on SecurePlatform as another alternative.

Virtual appliances are the next generation of the same concept.

Because the hypervisor presents a standard "hardware" API to the operating system, most of the compatibility issues are solved by the hypervisor manufacturers. Because the appliance is packaged as a standard virtual machine, there is no need for the reboot/format/install procedure.

[Image: Ducati motorcycle]

Of course, since the appliance is a virtual machine, the customer enjoys flexibility not found in regular appliances or even "CD appliances":

  • High availability and load balancing across physical servers (e.g. VMotion)
  • Full control over memory and CPU allocation in real time (a hypothetical sketch follows this list)
  • Easy provisioning, tracking and backup, all appliance-independent
  • Consolidation of many appliances onto one physical server while maintaining modular design and software independence
  • The appliance can be used "inside" hypervisors, so there is no need to move traffic from the bus to the network
  • Form factor and port density are less of an issue, since the switches and routers are virtual as well
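
To make the second bullet concrete, here is a minimal sketch of what real-time resource reallocation looks like programmatically. The Hypervisor class is invented for illustration; real platforms expose similar calls through their SDKs, but this is not any vendor's actual API.

```python
# Hypothetical hypervisor facade: resize a running virtual appliance
# without touching hardware. Invented for illustration only.
class Hypervisor:
    def __init__(self):
        self._vms = {}

    def register(self, name, vcpus, memory_mb):
        self._vms[name] = {"vcpus": vcpus, "memory_mb": memory_mb}

    def resize(self, name, vcpus=None, memory_mb=None):
        vm = self._vms[name]
        if vcpus is not None:
            vm["vcpus"] = vcpus
        if memory_mb is not None:
            vm["memory_mb"] = memory_mb
        return vm

hv = Hypervisor()
hv.register("firewall-appliance", vcpus=2, memory_mb=2048)
# Traffic spike: give the virtual appliance more headroom on the fly,
# something a fixed hardware appliance simply cannot do.
print(hv.resize("firewall-appliance", vcpus=4, memory_mb=4096))
```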

To make the creation of virtual appliances easier, companies like rPath are providing easy-to-use software that handles a lot of the work Check Point, NetScreen and other vendors had to do to create their own appliances.

Some problems still remain open, mainly the lack of a standard central management system to control appliances from different vendors. I'm guessing one start-up or another is working on the problem. Hardware acceleration is lacking, but it will probably be solved by future developments at the core virtualization companies. And no one needs hardware acceleration anyway 🙂

To summarize, it seems that virtual appliances make software king again. They combine software's advantages and overcome its shortcomings.

In a cloud-based world, there is a good chance they will become the favorite deployment vehicle.

Virtual Clouds – How Gartner’s Top 10 Strategic Technologies for 2009 Consolidate

October 20, 2008

Gartner just published on their blog the Top 10 Strategic Technologies for 2009.

The list is: Virtualization, Business Intelligence, Cloud Computing, Green IT, Unified Communications, Social Software and Social Networking, Web Oriented Architecture, Enterprise Mashups, Specialized Systems, and Servers – Beyond Blades.

The interesting point, in my opinion, is that many of these technologies actually support each other, making the trend even stronger. I'll describe why this is so and then use my company to give a subjective example.

I believe virtualization, cloud computing, Web Oriented Architecture and enterprise mashups have great synergy.

Virtualization's (#1) key strength is abstraction. It removes the coupling of hardware and software.

Cloud computing (#3) takes the abstraction to the next level. Now, no hardware is needed at all.

The problem with most clouds is that they do not allow reuse of existing enterprise applications. However, virtual clouds can run any application from the data center, but do it on the internet, on demand. Basically, if you have a cloud of VMware or Hyper-V servers, you can move applications between the cloud and the enterprise data center on demand.

To make it more interesting, the simple fact that clouds are on the net (#7) makes them ideal for creating enterprise mashups (#8).

With the right security and networking in place, it is possible to create hybrid enterprise applications which have one leg in the enterprise data center and one leg in the virtual cloud.

At IT Structures we have built a virtual cloud to support the business application of virtual sales. Our service offers a collaboration environment (#6) for sales engineers and ISVs to run proofs of concept for enterprise applications in the cloud. We use virtual private networking (VPN) technology to connect clouds and private data centers.

[Image: There are clouds on the horizon – good ones]

The cool thing is that because of virtualization it is much easier to replicate, provision and allocate resources in a multi-tenant environment while keeping the environments separated. Building a service, rather than a product, uses economies of scale to reuse resources during dead hours.

The cloud's location on the web means that proofs of concept can be accessed by vendors, IT, executives and contractors, as opposed to the traditional walled-garden approach. The on-demand nature lets a POC start in five minutes, which is a win-win for both the vendors and the enterprises.

Creating a virtual cloud is not trivial; the security, storage, performance, networking and elasticity are really, really hard to get right. But once it is done, it can offer many revolutionary new services. To wrap up, Gartner is right on target this time. The only thing they got wrong is that they really published just three technologies this year 🙂

Hardware, Software and (Virtual) Appliances Myths – Part Two

October 17, 2008

In Part One I examined some myths about hardware and software appliances. Today I'll try to describe why hardware appliances became so successful in recent years, and where.

The basic ideas come from a great NetApp pitch I heard in 1994, when they were very small. Their example at the time was "routing was done by generic Sun/IBM/HP/Digital computers, and Cisco turned it into an appliance". The analogy was "file serving is done by generic Sun servers, and NetApp is going to be the filer appliance", which they did.

Appliances can be great because:

  • Appliances can be cheaper than a PC – creating a $60 small-office router is just not possible using PC hardware components. Even a $1,000 enterprise branch-office box is better off using a cheap CPU and little memory to achieve a great margin.
  • Appliances are much easier to install – this is probably still true. Having someone else tie together all the software, do the hardening, remove the extra bits, and having no drivers to deal with is a great win. Installing the right RAID driver for a generic Linux system can still be quite challenging.
  • Appliances can have better performance for dedicated tasks – NetApp's favorite example was listing 2,000 files in a big directory. It could take several minutes on a generic Unix file system; since NetApp designed its operating system just for file serving, it was done amazingly fast.
  • Appliances can have a much better form factor – it is quite hard to put 12 network cards in a single PC. To populate it with 40 is just impossible. Moreover, the network cards on x86 servers are on the wrong side! Network equipment makers place the cards in the front, while generic servers have them in the back. Again, it seems like a small thing, but try to get Dell, HP or IBM to change that for your appliance.

    [Image: The right side of the cable]
  • Appliances are not managed by the server group – one of the biggest selling points for network departments is that the server group cannot touch dedicated operating systems. If a firewall admin buys a Linux server, she has to conform to the guidance and dictatorship of the Linux server admins. If it is PimiPimiOS, they have no say about it.
  • Appliances are more secure – this is true to some extent, simply because the functionality is limited and no extra services are installed. However, in many cases it may boil down to security by obscurity. Nobody bothers to update their appliances with the latest security patches, and the proprietary operating systems are not inspected by the community. Furthermore, security applications cannot be run on these unique environments.
  • Appliances boot faster – it seems like a small thing, but waiting ten minutes for Windows to load is not really acceptable for an enterprise-grade router or file server. It is also quite annoying in your home DSL modem. Actually, it is quite annoying on my $2,000 ThinkPad. Anyway, having a very small, optimized OS and no hard disk allows a very fast boot time, along with dedicated thinking about boot and reboot length.
  • Appliances are more reliable because they have no hard disk ("moving parts") – maybe; I'm not so sure about this one. Anyway, in a few years no server will have any moving parts (although it seems fans are moving all the time…).
  • Appliances have a superior, dedicated management console – this is commonly true. Good appliances have a great unified web and command-line management console that bundles all management aspects, from image management to application configuration. The problem starts once you have 30 different appliances from different vendors, each with its own dedicated ChuChuOS. On a side note, it tends to be quite hard to script and program these beasts, for the same reason.

To make the discussion more interactive until I post the third piece, here is a small poll to get your feedback.

Cloud Computing To Save the Economy?

October 17, 2008

Douglas Gourlay from Cisco makes a good point about cloud computing and the economic depression.

It is similar in some aspects to my previous post, Can you make money writing algorithms? Part II.

The basic concept is quite accurate: cloud computing is best suited where capital is scarce and change is frequent. He asks what it will take to turn current hosting providers into cloud computing ones. My guess is that it takes very strong software to turn them in that direction; the type of software a service provider would not be able to write itself, but really needs a very strong start-up or an ISV to develop. It is much more complicated than just integrating open-source and commercial software (trust me, I do support in my spare time :).

I’m actually busy these days writing a detailed technical white paper on what it takes to create a virtual machines cloud to run real enterprise applications. I’ll update when its out and ready. Hints : Scale, Frequency of Change, Networking, And highly flexible optimization system that bridges business, technical and product gaps and can be changed instantly.Stay tuned.