Archive for the ‘Security’ Category

Hardware, Software and (Virtual) Appliances Myths – Part Three

December 9, 2008

San Francisco, Virtual

In Part One I examined some myths about hardware and software appliances and showed that appliances are mainly packaged software components. In Part Two I described why hardware appliances became so successful in recent years, and where.

In this part I'll try to show how virtual appliances combine the best of both worlds: the benefits of both software and hardware appliances, together with the extreme flexibility of virtualized computing.

Looking back to 2002, Check Point released SecurePlatform – an appliance on a CD, also known internally by the cool name "Black CD". At the time, Check Point's "real" hardware offering was not very successful, and the company relied on Nokia appliances to compete with Cisco and NetScreen appliances.

NetScreen appliances, and appliances in general, became more and more successful. Nokia produced excellent appliances as well, but they were typically sold at a very high premium, chiefly for the brand.

SecurePlatform was invented in order to offer customers a cheaper option. SecurePlatform is basically a bootable CD that one inserts into any x86 server; it formats the hard drive and installs a secure, shrunk-down Linux operating system with all of Check Point's software products pre-installed.

The idea is to get most of the advantages of a "real" appliance (ease of install, drivers, secure OS, fast boot time, optimized performance) together with the advantages of software (flexibility, modularity, familiar shell and interfaces) at a very cheap hardware price (the customer can choose his own box and use existing x86 agreements and discounts). It also allows the customer to grow capacity easily without complex upgrades.

Over time SecurePlatform became very successful and turned into the customers' favorite deployment choice. While in 2003 it still lacked a lot of appliance features (image management, backup and recovery, a web-based interface), those were added over the years.

It is important to note that SecurePlatform-based appliances, like other CD appliances, still had some gaps compared to dedicated hardware appliances:

1. The form factor is still that of a standard PC. With 1U servers becoming the norm this was less of an issue, but the number of network interfaces was still a problem in some cases.

2. Keeping up with driver compatibility across all the x86 vendors was very hard. When Dell, HP or Lenovo release a new firmware or driver, they don't bother to notify anyone, and back-porting Linux device drivers is not fun at all. The implication is that the appliance is not as generic as it would seem.

3. There is no single point of support for hardware+software.

4. There is no "real" hardware acceleration, for the cases where it is really needed.

To overcome some of these gaps, in 2005 Check Point started selling hardware appliances based on SecurePlatform as another alternative.

Virtual appliances are the next generation of the same concept.

Because the hypervisor presents a standard "hardware" API to the operating system, most of the compatibility issues are solved by the hypervisor manufacturers. Because the appliance is packaged as a standard virtual machine, there is no need for the reboot/format/install procedure.

Ducati Motorcycle

Of course, since the appliance is a virtual machine, the customer enjoys great flexibility not found in regular appliances or even "CD appliances":

  • High availability and load balancing across physical servers (e.g. VMotion)
  • Full control over memory and CPU allocation in real time (see the sketch after this list)
  • Easy provisioning, tracking and backup, which are appliance-independent
  • Consolidating many appliances onto one physical server while maintaining modular design and software independence
  • The appliance can be used "inside" the hypervisor, so there is no need to move traffic from the bus to the network
  • Form factor and port density are less of an issue, since the switches and routers are virtual as well
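
To make the "control in real time" bullet concrete, here is a minimal sketch of resizing a running virtual appliance. It uses the libvirt Python bindings purely as an illustration (the post itself is hypervisor-agnostic), and the domain name and sizes are made up.

# Illustrative sketch: adjust a running virtual appliance's resources in place.
# Assumes the libvirt Python bindings and a local hypervisor at qemu:///system;
# the domain name "fw-appliance" and the numbers below are hypothetical.
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the hypervisor
dom = conn.lookupByName("fw-appliance")      # the virtual appliance VM

# Grow memory to 2 GiB and vCPUs to 4 on the live domain: no reboot and no
# format-and-reinstall cycle, unlike a physical or CD-based appliance.
dom.setMemoryFlags(2 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)  # value in KiB
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

print(dom.info())                            # [state, maxMem, memory, nrVirtCpu, cpuTime]
conn.close()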

To make the creation of virtual appliances easier, companies like rPath provide easy-to-use software that handles a lot of the work Check Point, NetScreen and other vendors had to do (and redo) to create their own appliances.

Some problems still remain open, mainly the lack of a standard central management to control appliances from different vendors. I'm guessing one start-up or another is working on the problem. Hardware acceleration is also lacking, but that will probably be solved by future developments from the core virtualization companies. And no one needs hardware acceleration anyway 🙂

To summarize, it seems that virtual appliances turn software into the king again. They combine software's advantages and overcome its shortcomings.

In a cloud-based world, there is a good chance they will become the favorite deployment vehicle.

Virtual Clouds – How Gartner’s Top 10 Strategic Technologies for 2009 Consolidate

October 20, 2008

Gartner just published the Top 10 Strategic Technologies for 2009 on their blog.

The list is: Virtualization, Business Intelligence, Cloud Computing, Green IT, Unified Communications, Social Software and Social Networking, Web Oriented Architecture, Enterprise Mashups, Specialized Systems, and Servers – Beyond Blades.

The interesting point, in my opinion, is that many of these technologies actually support each other, making the trend even stronger. I'll describe why this is so and then use my company as a subjective example.

I believe virtualization, cloud computing, Web Oriented Architecture and enterprise mashups have great synergy.

Virtualization's (#1) key strength is abstraction. It removes the coupling of hardware and software.

Cloud computing (#3) takes the abstraction to the next level. Now, no hardware is needed at all.

The problem with most clouds is that they do not allow reuse of existing enterprise applications. Virtual clouds, however, can run any application from the data center, but do it on the Internet, on demand. Basically, if you have a cloud of VMWARE or Hyper-V servers, you can move applications between the cloud and the enterprise data center on demand.

To make it more interesting, the simple fact that clouds are on the net (#7) makes them ideal for creating enterprise mashups (#8).

With the right security and networking in place it is possible to create hybrid enterprise applications which have one leg in the enterprise data center and one leg in the virtual cloud.

At IT Structures we have built a virtual cloud to support the business application of virtual sales. Our service offers a collaboration environment (#6) for sales engineers and ISVs to run proofs of concept for enterprise applications in the cloud. We use virtual private networking (VPN) technology to connect the cloud and private data centers.

There are Clouds in the Horizon - Good Ones

The cool thing is that, because of virtualization, it is much easier to replicate, provision and allocate resources in a multi-tenant environment while keeping the environments separated. Building a service, rather than a product, uses economies of scale to reuse resources during dead hours.

The cloud's location on the web means that proofs of concept can be accessed by vendors, IT, executives and contractors, as opposed to the traditional closed-garden approach. The on-demand nature lets a POC start in five minutes, which is a win-win for both the vendors and the enterprises.

Creating a virtual cloud is not trivial; the security, storage, performance, networking and elasticity are really, really hard to get right. But once it is done, it can offer many revolutionary new services. To wrap up, Gartner is right on target this time. The only thing they got wrong is that they published just three technologies this year 🙂

Can vCloud Do This?

September 16, 2008

VMWARE announced a new cloud-based solution, also covered at http://elasticvapor.com/2008/09/vmwares-vcloud-announcement.html. There are still quite a few missing details on the actual offering. Judging from the past, VMWARE products are impressive and well thought out, but the proof is in the v-pudding.

Virtual Machines Under the Clouds, Sort Of

Here are some questions that are waiting for an answer:

  • Is the product really multi-tenant ready? While Virtual Center has a good separation of permissions and reasonable resource control, it lacks true customer segmentation when failures take place.
  • Is the system scalable? While the VMWARE offering is quite scalable within the enterprise, a cloud-based offering requires a different magnitude of capacity, load and change frequency, which is quite hard to achieve with a legacy architecture.
  • How much operational work is needed? A true SaaS solution requires that very little operational work is needed on a regular basis, on one hand, and that administrators have lots of control, on the other. This implies a new level of automation and self-healing, unlike the one in a typical, tightly controlled data center environment.
  • Can it handle complex networking and storage setups? Typical enterprise environments are rarely made of a single server on a simple LAN attached to a local file server. While such setups can be built with VMWARE, setting them up in a scalable, self-service manner is far from trivial, and it remains to be seen whether vCloud helps here.
  • How does VDC-OS handle security and wide area networking? It is critical for an Internet-based service to handle Internet-based access control and to overcome the performance challenges that exist over a WAN.

While it is possible to solve these challenges, it takes the right mindset and a lot of expertise. Once more information is available, it will be possible to understand the exact capabilities.

Spec Master: The Hidden Product Manager Role, Part II

June 13, 2008

One week later Alex returned to our weekly meeting with his findings:

  • Setting the routing table takes 0.2 seconds
  • Setting the virtual network adapter takes 5 seconds
  • Networking with the server and authenticating take 3-5 seconds
  • The time to set the virtual network adapter is actually a regression from two years ago. We switched to a new adapter, which is easier to maintain, but it increased the connect time by 3 seconds. I think we can use the old adapter and reduce it back to two seconds. It is very hard to change the networking time, since it is part of the protocol and is needed to set up the VPN tunnel.

Ophir: Great, but the overall result would still be around 8 seconds.

Alex: However, we can do a nice trick. We will keep the VPN tunnel always open, even when the client disconnects. Instead of terminating the connection we will just reset the routing table (which is fast).

As a result, the first connect of the day will take 3-5 seconds, but all the other connections will take just 0.2 seconds, which is almost unnoticeable.

Ophir: Amazing. What's the catch?

Alex: There will be extra memory utilization on the server.

Ophir: I think the memory issue is a no-brainer. For most customers the number of tunnels is not that big, and it does not seem it will add more than 10MB, even in the worst case. Excellent work.

Going back to my initial hypothesis, I believe this chronicle holds great lessons.

When goals are measurable, they can be analyzed and improved. Setting high goals allows developers to innovate. When the product manager is ready for a dialog, she will discover new tradeoffs that she could not have come up with without the R&D feedback. Relative goals are dangerous, because the starting point is unknown.

In our case, a few hours of research allowed us to reduce the connect time by 50% in all cases and by 95% in 90% of the cases. We also set a new key parameter for developers and QA to test and validate. The effect will be noticeable to end users and greatly improve their satisfaction.
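
For the curious, here is the connect-time arithmetic as a tiny Python sketch. The individual timings come straight from Alex's measurements above; how they combine in the "keep the tunnel open" design is my reading of the dialogue, so treat it as an illustration rather than the product's actual behavior.

# Rough model of the VPN connect time, using the numbers from the dialogue above.
# Purely illustrative; the before/after breakdown is an assumption, not a spec.
ROUTING = 0.2     # seconds: setting the routing table
ADAPTER = 5.0     # seconds: setting the (new) virtual network adapter
NET_AUTH = 4.0    # seconds: networking with the server + authentication (3-5s, midpoint)

# Today: every connect pays for everything.
before = ROUTING + ADAPTER + NET_AUTH          # ~9s ("around 8 seconds" in the post)

# With the tunnel kept open: the first connect of the day still does the
# networking and authentication; every later connect only resets the routing table.
first_connect = NET_AUTH + ROUTING             # ~4s, roughly the 50% reduction
repeat_connect = ROUTING                       # 0.2s, the ~95% reduction

print(f"before: {before:.1f}s")
print(f"first connect after the change: {first_connect:.1f}s ({100 * (1 - first_connect / before):.0f}% faster)")
print(f"repeat connect: {repeat_connect:.1f}s ({100 * (1 - repeat_connect / before):.0f}% faster)")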

Models Site Hacked – Reported Here First :)

May 30, 2008

As reported in this blog by our reader A.J., the top model agency site had an SQL injection vulnerability.

Turns out that it was actually exploited by slower hackers and made the front headlines: http://www.globes.co.il/news/article.aspx?did=1000347082&fid=594.

What you paid for is officially not supported

February 25, 2008

Some surprising facts I learnt during the last few months:

Shipping companies don't have to ship. Server companies don't have to give alerts. Security companies' products don't have to work with other security companies' products.

1. If your shipping company loses a 10,000$ server it was supposed to ship, you will not get your money back. It has something to do with the Friblich treaty for worldwide shipping. Doesn't it make you happy?
One would assume they would at least notify the customer beforehand; we would have bought the insurance. Yep, this is the same treaty that lets airlines lose luggage on a regular basis.

2. IBM sells high-end servers that do not report when they are overcooked (a do-it-yourself polling sketch appears at the end of this post).

  • Politically correct: "The expected Simple Network Management Protocol (SNMP) CPU and ambient temperature alerts through IBM Director, similar to other systems, do not occur."
  • Politically incorrect: "Your 10,000$ X3650 server is on fire. We are sorry. This is a known limitation."

3. "Symantec Endpoint Protection 11 MR1 client/manager communication does not function if Checkpoint VPN client software is installed on the client."

We actually got our money back for the 3rd one. We're almost at 2010; is it too much to ask QA to check it?
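
Since the alert never arrives on its own, one pragmatic workaround is to poll the sensor yourself. Below is a minimal sketch using the pysnmp library; the management address, community string, threshold and especially the temperature OID are placeholders (look up the real OID in your vendor's MIB), not actual IBM x3650 values.

# Hedged sketch: poll a server's ambient temperature over SNMP instead of
# waiting for alerts that never come. The OID below is a placeholder.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

HOST = "192.0.2.10"                  # hypothetical management address
TEMP_OID = "1.3.6.1.4.1.99999.1.1"   # placeholder OID, not a real IBM one
THRESHOLD_C = 35                     # arbitrary alert threshold

error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),       # SNMP v2c read-only community
    UdpTransportTarget((HOST, 161)),
    ContextData(),
    ObjectType(ObjectIdentity(TEMP_OID)),
))

if error_indication or error_status:
    print(f"SNMP query failed: {error_indication or error_status}")
else:
    temperature = int(var_binds[0][1])
    if temperature > THRESHOLD_C:
        print(f"The server reports {temperature}C ambient -- time to check the cooling.")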

Product Management in the Real World – "The Divider" Case Study, Part II

February 22, 2008

There are scratches all around the coin slot
Like a heartbeat, baby trying to wake up,
But this machine can only swallow money.
You can't lay a patch by computer design.
It's just a lot of stupid, stupid signs.

Two weeks later Ron sent an initial version to QA. The testers vigorously began opening bugs with hilarious titles: "Nothing Works!", "GUI Crashes Every 46 Seconds", "Spelling Mistakes in Non-Existent Help Screens". The coders raged about QA's inability to overcome transitory hiccups, and silently ran to fix the problems.

An improved version with most of the new features was deployed to QA after a two-month delay. Surprisingly, it turned out the old, "existing" features the Divider was supposed to expose were hardly working. Since they had no user interface, the testers "forgot" to check them. It seemed customers were not using the protections either.

The default setting for the innovative protections was off, as they raised too many false alarms. Since there was no visible way to turn them on, only the most advanced, innovative and paranoid customers implemented the protections.

To make things worse, no support tickets were opened either, so the CMO was convinced the product was top notch.

Sigourney shouted at Ron: "How can you provide a product that's not working? Did you ever test it yourself before deploying to QA?" Ron, who wasn't the quiet type, responded: "I own the "SQL Guardian", which works smoothly. The "Data Crusher" was written by the company founder five years ago and you can talk to him about it. I did not join this company to be a code monkey. You are throwing undefined tasks at me, stealing Mark for other projects and then wondering why things break. I will not stand this hypocrisy."

Rumors of the problems reached Oberon, the director. He moved three additional developers in to help the project. Although Ron felt the project was running out of control, bugs were now fixed at a much higher rate. The director ran a daily status meeting to monitor the development and reprioritize trivial bugs. He kept the team confident: "Microsoft ships with many bugs and they still rule the world", "In a 1.0 version customers are forgiving of minor problems".

The marketing department published a passionate press release regarding the innovative new concept TASP Security would present. The stock rose and the sales team was energized. The entire R&D organization helped and people worked around the clock. Following three months of intense work, a Go/No-Go discussion was held with QA, R&D and product management.

QA felt the product was not mature enough, but the rest of the team ignored them; there wasn't a single product they had ever approved, not even the successful "Knowledge Keeper". The exhausted Sigourney felt the product was ready, and people were tired of the repeated delays. Five months later than the original plan, the pressure was mounting to go ahead and release. Ron was the only opposition, and refused to be responsible for the results. Oberon considered all the options and decided to ship. To comfort Ron, all the limitations would be listed in a ten-page release notes document.

Stay tuned for Part III – The Customers.

Product Management in the Real World – "The Divider" Case Study, Part I

February 16, 2008

A fictional story that never happened and probably never will.

TASP Security Software was in serious trouble. Its flagship product for database security, the "Knowledge Keeper", had been replicated by vicious competitors. Thanks to its history of innovation, the company kept its leading role in the market, but the competition gained a growing market share, and all the products were deemed equal.

The cruel analysts, having no technological understanding, were spreading rumors that the "Knowledge Keeper" market was commoditized, a euphemism implying any kid could implement it, and that the price was going to dive soon.

Arnold, the product manager, quickly diagnosed the problem and announced a new concept that would highlight the unique capabilities of "Knowledge Keeper". The concept would be branded as "The Divider" and would bring to light the "Knowledge Keeper" technological supremacy. Since the quarterly financial reports were coming up, he ordered the developers to "get it done" in four months.

Sigourney, the group manager, was furious: "Such a product cannot be shipped in four months. We are in the midst of infrastructure projects that will solve the global warming problem! How am I supposed to create a new product with no headcount? It contradicts the law of energy conservation."

The developers joined the fury: "How can we code "The Divider" with no definition of its capabilities? We cannot develop a product based on a vague, fuzzy management concept."

Still, Sigourney approached the task with faith and agility. Having no programmers available, she assigned Ron to the job. Ron had been recruited as chief internal security officer, to educate the employees to guard internal information and to develop new security guidelines. When he was recruited, he declared he was tired of programming and wanted to focus on research. However, Sigourney remembered that Ron was a wizard coder from the Amiga assembly days. The rumor was that he had made the juggling balls in the famous Amiga demo disappear into the juggler's mouth.

Mark, a developer from a different group, was added to help Ron with the task. He did not report to Ron, but the task's importance was clearly explained to him and his manager.

Since time was short, Sigourney decided to focus on five existing product features that had never been exposed in the user interface. The features were hidden, and it was only possible to activate them through manual changes to obscure INI files. It was also decided to develop a new "SQL Guardian" to validate that all the SQL statements sent to the database are indeed legal.

Sigourney and Ron passionately started working on the task. Ron demanded a requirements document from Arnold, the product manager. Since Arnold was busy handling existing customers' escalations, everyone agreed the development team would create a mock-up of the UI and Arnold would provide feedback on it. Since most of the features had existed for many years, they decided a detailed design for "The Divider" was not needed.

Work progressed quickly. The team realized the importance of the project, but was somewhat frustrated with the minimal resource allocation. Oberon, the director, reassured them: "We are in an initial phase; if the product succeeds, additional people will be added. Right now, you just need to add a few dialogs and text to features we have had for the last three versions."

After six weeks, problems began to rear their ugly head. The mockup was progressing slowly. The GUI developer, coming from another group, was not sure what exactly he was supposed to do. His attempts to get clarifications from Ron got a very slow response, as Ron was busy coding the "SQL Guardian", which was the most interesting part of the project. Coming from an information security background, he made certain that even the smallest vulnerabilities were blocked, even for DB2 and CA-Ingres. Trying to create the perfect SQL parser resulted in a major setback for the project.

To save the day, Arnold's presales tasks were moved to the support department. Arnold worked directly with the UI developer to define the dialogs. The mockup was presented to key customers and sales executives and received great feedback.

Product Manager doing Support

Three months into the development, the QA department started warning: "If we don't get a stable version of the product, there is no way we can complete the testing in time for shipment."

While Ron worked on the new features, Mark was supposed to integrate the old ones into the new UI. Due to urgent problems in his other group's projects, his progress on "The Divider" was quite slow. Although he enjoyed developing new code instead of fixing old bugs written by the company founders, it was hard to get rid of the obnoxious customer tickets.

Sigourney called for an emergency discussion. "We have to give something to QA. Even if it is not perfect, they can start playing with the product and opening bugs. We'll inform them of the current limitations and they can work around them."

Ron responded: "We didn't code the GUI-engine communication layer yet!" Sigourney shouted at him: "They can configure it with INI files as far as I'm concerned; by the end of the week we are delivering a version to QA."

Privacy, Security and Elastic Computing

February 1, 2008

There is an interesting contest going on at the SmugMug image sharing site: you can get 600$ if you can find a security hole in their system.

This is the result of an interesting debate about whether security and privacy are separate things, and how privacy and probability are related.

The core of the issue is that images marked "private" actually live at public URLs which can be easily enumerated. While SmugMug offers stronger mechanisms for access control, I do believe this one creates a false sense of security.

SmugMug is a great site and it seems the people who make it are really innovative and smart. However, in the end, the question is how much it would cost to break it, assuming there is one evil person who wants to abuse it.

The surprising answer is 2535$.

I'll demonstrate by assuming there is one evil person in the world who hates SmugMug for being so cool and successful.

This person decides to spend his hard-earned money to create a publicity nightmare.
Let's assume there are 1 million real pictures out of the 250 million possible URLs (the actual number does not really matter).

He spends 500$ (100 servers for about 50 hours each, i.e. 5,000 server-hours * 0.10$/HR) to get 100 servers from Amazon EC2 and use them for 2.08 days. Each server can send 50,000 HTTP requests per hour.
After 2 days the evil person knows exactly the links to the one million "private" pictures (50 HR * 50,000 requests/HR * 100 servers = 250,000,000 URLs checked).

He needs to pay 10$ for bandwidth for the pictures (1M * 0.1MB * 0.0001$/MB).
The non-existing links would cost 25$ (250,000,000 * 0.001MB * 0.0001$/MB).

Total cost is 535$ to get all the pictures.
(BTW, since SmugMug is using Amazon's S3, the bandwidth cost would probably be 0$, since bandwidth between S3 and EC2 is free.)

In order to find the interesting ones he uses Amazon Mechanical Turk. He pays 0.01$ for classifying 5 images (a HIT), so the total cost would be 2000$ (1M * 0.01$ / 5).

Now the evil hacker can post the top 1000 photos on Flickr and get his evil wish fulfilled, at a total cost of 2535$.

To make matters worse, a cheap evil person could accomplish the same task at zero cost, using JavaScript and open web sites. It is very early in the morning, so I might have missed some of the calculations, but the order of magnitude seems fine.
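
For the skeptical (or sleepy) reader, here is the same back-of-the-envelope arithmetic as a short script. All the prices and volumes are the post's 2008-era assumptions, not measured or current figures.

# Reproduces the post's cost estimate for enumerating the "private" URLs.
# Every number here is an assumption taken from the text above.
URL_SPACE          = 250_000_000   # possible URLs to try
REAL_PICTURES      = 1_000_000     # pictures that actually exist
SERVERS            = 100           # EC2 instances
REQUESTS_PER_HOUR  = 50_000        # HTTP requests one server sends per hour
EC2_PRICE_PER_HOUR = 0.10          # $ per server-hour (2008-era small instance)
PRICE_PER_MB       = 0.0001        # $ per MB of bandwidth
PICTURE_MB, MISS_MB = 0.1, 0.001   # response size for a real picture vs. a miss
TURK_PER_HIT       = 0.01          # $ per Mechanical Turk HIT (classifies 5 images)

hours     = URL_SPACE / (SERVERS * REQUESTS_PER_HOUR)    # 50 hours, about 2.08 days
ec2_cost  = SERVERS * hours * EC2_PRICE_PER_HOUR         # 500$
hit_bw    = REAL_PICTURES * PICTURE_MB * PRICE_PER_MB    # 10$
miss_bw   = URL_SPACE * MISS_MB * PRICE_PER_MB           # 25$
turk_cost = REAL_PICTURES / 5 * TURK_PER_HIT             # 2000$

total = ec2_cost + hit_bw + miss_bw + turk_cost
print(f"enumeration: {ec2_cost + hit_bw + miss_bw:.0f}$, classification: {turk_cost:.0f}$, total: {total:.0f}$")
# -> enumeration: 535$, classification: 2000$, total: 2535$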

So, I suggest SmugMug keep doing the great work they are doing, but also invest the time and effort to fix this issue.

The fact that no one has complained so far is merely because the attack hasn't taken place yet. Security through obscurity does not work in the long run.

It is a shame that one evil person can cause so much work and harm to so many good people, but that's life.