The ''cloud'' is certainly a buzzword these days.
Consider that a QC/ALM installation comprises three major components:
1. Server Application
2. Repository (project-related files)
3. Database
Most installations are set up with the Server Application and the Repository on the SAME server (in this case, there is NO network latency when accessing the files).
Sometimes, due to disk space concerns, companies will locate the repository on a NAS or other network shared location. This is OK as long as there is little or no latency as QC/ALM accesses the files there.
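If you are weighing a NAS-hosted repository, it is worth measuring the file latency before committing. The sketch below (a minimal Python example; the UNC path in the comment is hypothetical, not part of any real installation) times small write/read cycles against a given directory, so you can compare local disk against the network share:

```python
import os
import tempfile
import time

def measure_file_latency(repo_path, size_bytes=64 * 1024, rounds=20):
    """Average milliseconds per small write+read+delete cycle in repo_path.

    Roughly mimics the many small file operations QC/ALM performs
    against its repository.
    """
    total = 0.0
    for _ in range(rounds):
        payload = os.urandom(size_bytes)
        start = time.perf_counter()
        # Write a temp file and force it to disk (or across the wire).
        with tempfile.NamedTemporaryFile(dir=repo_path, delete=False) as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
            name = f.name
        # Read it back, then clean up.
        with open(name, "rb") as f:
            f.read()
        os.remove(name)
        total += time.perf_counter() - start
    return total / rounds * 1000.0

# Baseline against local disk; then point it at the share, e.g.:
#   measure_file_latency(r"\\nas01\qc_repository")   # hypothetical path
print(round(measure_file_latency(tempfile.gettempdir()), 2), "ms per cycle")
```

A large gap between the local-disk number and the network-share number is the latency your end users will feel on every repository operation.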
Also, most installations have the database back end on a server that is relatively ''close'' in networking terms (i.e., in the same building, or reachable over a fast connection within the same company's network). This way, the DB communications have little or no latency/lag.
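A quick way to sanity-check that network ''closeness'' is to time TCP connections from the QC/ALM application server to the database host. This is only a rough proxy (it measures connection setup, not query performance), and the host name and port below are assumptions for illustration:

```python
import socket
import time

def tcp_round_trip_ms(host, port, attempts=5):
    """Average TCP connect time (ms) to host:port.

    A crude stand-in for the network latency every QC/ALM
    database transaction will pay.
    """
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # Connection established; that is all we time.
        samples.append((time.perf_counter() - start) * 1000.0)
    return sum(samples) / len(samples)

# Hypothetical DB host and SQL Server port:
#   print(tcp_round_trip_ms("qcdb.example.com", 1433))
```

Single-digit milliseconds is what a same-building database typically looks like; a cloud-hosted database across the internet can easily be an order of magnitude slower, and QC/ALM pays that cost on every transaction.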
Now, consider what a ''cloud'' is in simple terms:
Some server or servers are maintained by some other entity or in some cases a different division within the same company. These servers could be located practically anywhere in the world, so network communication speed between the various QC/ALM components is key here.
If communication between the components (such as file read/write speed, or transactions to the DB) is too slow, your QC/ALM end users will suffer greatly. If it is slow enough, the application will fail completely.
Another thing to consider is how much control you would have over server-side maintenance (cycling QC services, upgrading, patching, etc.). Do you want to perform these activities yourself or not? How much delay is there when requesting such tasks from the entity maintaining the cloud? What about getting server logs? Remember, they are stored on the QC Application Server.