Nearly every software vendor now makes claims about being cloud based, cloud ready, or cloud friendly. Unfortunately, in doing so they are blurring the meaning of being a cloud application. Consequently, this article will apply differently to every cloud vendor. With that disclaimer behind us, one of the distinguishing characteristics of cloud software is the variety of ways it can be integrated. In Part 1 of this article, we discussed the four different layers at which cloud applications can be integrated. Now we look at the other dimension: the integration products and tools that are available for cloud and other Web service integration.

Category 0

The first category of integration products is no product at all: using the cloud application's own platform as the mechanism for integration. This means using the native programming language available in the cloud vendor's business logic or database layer to call out directly to another cloud. It's a simple idea, but it can be tough to pull off, and in my experience it works well only in simple point-to-point integration use cases. Using this approach, you're likely to have to develop snippets of integration code at both ends of the cloud conversation, and likely in different programming languages. The more differences your developers have to contend with (RESTful vs. SOAP, XML dialect vs. JSON, object tree vs. DBMS table), the less likely this is going to be a fun exercise.

Even so, direct integration is a viable option for read-only and other simple use cases. Some integrators swear by it; I only swear at it. If you're trying to do something truly transactional across clouds, dealing with message retries, guaranteed delivery, two-phase commit, and rollback logic can result in a prodigious pain in the pituitary. Did I mention dealing with system maintenance windows?

Category 1

The next category of integration products is the point-to-point connector.
These come in two flavors: an on-premises version for use on your own servers, or integration services that reside in the cloud. Pervasive, Boomi (Dell), and other vendors use this approach to connect to a wide range of cloud and on-premises applications at a very reasonable cost. Typically, these off-the-shelf, point-to-point connectors provide two-way synchronization of a limited number of fields. In some cases the connectors have no configuration options at all, but even the most flexible connectors allow changes only to field mapping. Adding custom fields, mapping to custom objects, or having your internal code push data directly through their pipes is typically off the table.

These connectors are sometimes all you need for linking, say, Salesforce.com to QuickBooks Online. What they provide is really point-to-point data synchronization, not the general case of integration. Trying to use connectors to link a third or fourth cloud is unlikely to work well, particularly if two connectors need to access the same business object. When evaluating connectors, make sure to evaluate their documentation and support, as these vary horrifically across vendors. Definitely check their forums or other discussion areas to get a realistic appraisal of these issues.

Category 2

The next approach to integrating across clouds is to use a connector that isn't targeted at any particular application, but at an industry-standard connection that the other cloud can understand. The most obvious examples are ODBC and JDBC, but there are also language-level connectors that talk to standard libraries. The benefit of this approach is flexibility: your developers can get access to an arbitrary number of remote tables, objects, or methods. In addition, these connectors typically aren't very expensive. But these are still point-to-point connectors, which means they're not appropriate for integrating several clouds together.
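The appeal of the standards-based connector is that your calling code is the same against any conforming driver. Here is a sketch using Python's DB-API: an in-memory SQLite database stands in for what would, with a real ODBC/JDBC-style driver, be a connection to a remote cloud's tables (the table and data are invented for illustration).

```python
import sqlite3

# Stand-in for an ODBC/JDBC-style driver connection to a remote cloud's data.
# With a real driver the connect() call would name a DSN or URL instead, but
# the query code below is identical for any DB-API-conforming connector.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])

# The connector exposes arbitrary tables, but nothing more: no retry logic,
# no translation, no routing, no application context. It is a pipe.
rows = conn.execute("SELECT name FROM accounts ORDER BY id").fetchall()
names = [r[0] for r in rows]
print(names)  # ['Acme', 'Globex']
```

The flexibility is real (any table, any query), but notice that everything above the SQL layer is still your problem, which is the limitation discussed next.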
Further, they don't provide much in the way of high-level services or object/application context. They're pipes.

Category 3

The final approach is to use an integration server, which can itself be a cloud service (hosted at Amazon or elsewhere). Using an integration server still requires the use of connectors at the endpoints, but the connectors are essentially "dumb pipes," with all the intelligence at the hub. Because integration servers provide high-level services (such as message brokering, translation, retry logic, logging, and administration) as well as their own programming (or at least scripting) environments, developers have a lot more to work with. Context-sensitive routing, workflow, and compensating transactions can all be built out (and, more important, evolved) without disturbing any of the other cloud applications.

An integration server, whether it's in the cloud or on premises, is the most flexible and adaptable approach. There's no architectural limit to the number of systems you can integrate. Nearly any kind of system (including hardware controllers or other legacy systems) can be integrated. And in most cases, an integration server yields the best possible performance. The downside, of course, is that it's totally "build it yourself." While there will be templates and code samples (and who knows, maybe some documentation), the whole point of an integration server is bespoke construction.

Next week, we'll look at another dimension of integration: security.

David Taber is the author of the new Prentice Hall book "Salesforce.com Secrets of Success" and is the CEO of SalesLogistix, a certified Salesforce.com consultancy focused on business process improvement through use of CRM systems.
SalesLogistix clients are in North America, Europe, Israel, and India, and David has over 25 years of experience in high tech, including 10 years at the VP level or above.

Follow everything from CIO.com on Twitter @CIOonline.