Before answering the question, my thought process went something like this –
Well, the cloud uses lots of commodity boxes, so failure is almost a given. The application needs to handle resource failures, meaning it must build in fault tolerance. Besides that, you may want to make use of additional cloud features (for example, if you are using AWS, then besides EC2 there are S3, RDS, CloudFront, CloudWatch, Load Balancing, auto scaling, etc.), but if you are migrating an existing application then you need not do so in the first phase.
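To make the fault-tolerance point concrete, here is a minimal sketch of retrying a transient failure with exponential backoff. The `flaky` operation and all names here are illustrative assumptions, not any particular cloud SDK:

```python
import random
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.1):
    """Retry a flaky zero-argument callable with exponential backoff.

    Transient failures are assumed to raise an exception; permanent
    failure is signalled by re-raising after the last attempt.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            # Back off exponentially, with jitter to avoid synchronized retries
            delay = base_delay * (2 ** (attempt - 1)) * (1 + random.random())
            time.sleep(delay)

# Usage: simulate a call that fails twice, then succeeds
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # prints "ok" after two retries
```

On commodity hardware this kind of wrapper (or a library equivalent) sits around every remote call, since any individual box or network hop may disappear at any time.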
Beyond this one reason, I could not fathom another good solid one: something extra you would do in the application that you would not have done when deploying to a given hardware spec. As an application developer I could think of many things you need to take care of, but you should have done those in any case:
- Application State – Any application that needs to be scaled up cannot store application state on the local machine. The state needs to be stored in a centralized DB or file system instead.
- Resource Independence – When developing applications, the application should be agnostic to the resource location. Resources need to be looked up at run time and should not be hard-coded into application code. This is a common application programming guideline; patterns like ResourceLocator are meant for exactly this, and application servers also provide mechanisms to a similar effect.
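The application-state point can be sketched as follows. A plain dict stands in here for the centralized store (Redis, a database, or a shared file system in a real deployment); the function names are illustrative assumptions:

```python
# Shared store standing in for a centralized DB or file system.
# In a real scaled-up deployment this would live outside any one box.
SHARED_STORE = {}

def save_session(session_id, data):
    # Write state to the central store, never to local memory or disk
    SHARED_STORE[session_id] = dict(data)

def load_session(session_id):
    # Any application instance, on any machine, can load the same session
    return SHARED_STORE.get(session_id, {})

# Instance A writes the session; instance B (a different box) reads it
save_session("user-42", {"cart": ["book"]})
print(load_session("user-42"))  # prints {'cart': ['book']}
```

The moment state lives only on one machine, a load balancer routing the next request to a different instance (or that machine failing) loses the user's session.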
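For resource independence, a minimal sketch is resolving endpoints from the environment at run time instead of baking them into the code. The variable name `DATABASE_URL` and the connection strings are illustrative assumptions:

```python
import os

def database_url():
    """Resolve the database endpoint at run time.

    Nothing about the host is hard-coded into application logic; the
    deployment environment injects the real location, with a local
    default for development.
    """
    return os.environ.get("DATABASE_URL", "postgres://localhost:5432/app")

# The deployment (or an application server / ResourceLocator-style
# registry) supplies the actual endpoint:
os.environ["DATABASE_URL"] = "postgres://db.internal:5432/app"
print(database_url())  # prints the value injected by the environment
```

The same idea applies to queues, caches, and service endpoints: look them up through configuration or a locator so instances can move without code changes.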
So, have you faced or come across patterns that need to be handled specifically when developing applications that get deployed in the cloud (private or public)? If so, do share them.