Recently, a major fire was reported in a building. The building maintenance team informed the fire brigade, which came in, helped douse the fire and tried to salvage whatever it could. The owner of the building demanded an independent assessment of the reasons behind the fire, what triggered it and how to make sure it does not happen again. The building maintenance team called in an independent assessment team and asked them to comb through the wreckage and find out what might have caused it.

The independent assessment team went through the motions of accumulating evidence, looking for telltale signs of what might have gone wrong, and eventually reconstructed how events would have unfolded that day. Subsequently, the team came out with the recommendations that need to be implemented to make sure such an outage does not happen again.

How many times have we seen that one of the easiest ways of cleaning up your mailbox is to delete all the enterprise announcements and group e-mails? Are we being bombarded with too many policy-change mails, group emails and announcements? After a few days of this email bombardment, do our brains get conditioned to just ignore them?

All this leads to a point where we become clueless about
  • Happenings in the Enterprise
  • Policy/Process Changes that might be affecting us
which means we might be working harder and not smarter.
Successful enterprises are all about business agility and the ability to introduce new products and services into the market. This business agility, coupled with the pressure to reduce IT overheads, means enterprises need to find better ways to improve and transform their enterprise systems.

The advent of cloud, social, mobile and the consumerization of IT means enterprise applications need to adapt to a changing environment. Today, every enterprise is looking to fulfill the following demands:
  • Adopt cloud for its enterprise applications (whether private, public or hybrid is a matter of enterprise priorities)
  • Replace or retire in-house enterprise applications (where applicable) in favor of equivalent SaaS applications
  • Expose enterprise data for third-party consumption
  • Make enterprise functionality available over a variety of channels (web, mobile) – consumerization
  • Make systems available 24x7 to meet ever-growing business demands
The established enterprise application patterns – portals, CMS, SOA, centralized databases – are not functionally geared to meet these new business demands. The enterprise needs to adopt newer application patterns coming out of the consumer web world; several patterns emerging from the new generation of consumer web applications can be applied to enterprise applications.
Time and again we witness that when a program goes into the acceptance testing phase, the client and the teams suddenly realize that the application is not meeting the non-functional requirements. Usually the application is very slow, goes down frequently or does not scale as expected. I am not even talking about requirements mismatches here.
The advent of social collaboration, online selling, digital goods and mobile means every enterprise wants to process the transactional and analytical data being collected at multiple customer touch points. All this data needs to be processed so that the enterprise can better understand the customer, his social network, his buying patterns and more.

This has led to an ever-increasing amount of data, which in turn is creating a number of issues within the enterprise.

For an architect, as if distinguishing between functional and non-functional requirements was not enough, DevOps and technical debt have opened up yet another bucket for grouping requirements.

For the uninitiated, DevOps is a movement meant to break down the silos between the Dev, QA and IT operations teams, where code moved in batch mode from Dev -> QA -> IT Operations. This siloed approach meant delay, disruption and duplication of effort, which ultimately delays pushing out business functionality.
An organisation embarking on a cloud journey invariably ends up looking at PaaS solutions. PaaS sounds like something that will take away all your pains. Creating your application using drag-and-drop controls in a browser, with everything else taken care of (from deployment to running to scalability to data backup and what not), sounds exciting and almost too good to be true.

Anyway, who wants to deal with the IT folks and all the tantrums they throw about what is possible and what is not?
Capacity planning is all about managing your resources better. Resources are finite, need to be procured, come at a cost and get consumed; as a result, you need to do some capacity planning.

Capacity planning is an exercise undertaken in every industry, and there are plenty of models for how to perform it. But applying these models in the software industry is often cumbersome, tedious and at times completely useless. These models work best when you have a standardized product and process; in software, every release changes the dynamics of the product. The code base changes, the code's performance changes and the usage pattern might change, invalidating previous capacity planning exercises. When Amazon released a one-day promotion (a Lady Gaga album for 99 cents), the site went down and many customers could not buy the offered album, leaving them angry and costing potential business.
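As a hedged, back-of-the-envelope illustration of such an exercise, Little's Law (L = λW) gives a quick starting estimate; all numbers below are hypothetical, not from any real system:

```python
import math

# Back-of-the-envelope capacity estimate using Little's Law (L = lambda * W).
# Every number here is a hypothetical illustration, not a measurement.
peak_requests_per_sec = 500     # expected peak arrival rate (lambda)
avg_service_time_sec = 0.2      # average time spent per request (W)
requests_per_server = 50        # sustainable req/s per server, from load tests
headroom = 1.5                  # buffer for spikes like a flash promotion

# Average number of requests in flight at peak load
concurrent_requests = peak_requests_per_sec * avg_service_time_sec

# Servers needed to carry peak traffic with headroom
servers_needed = math.ceil(peak_requests_per_sec * headroom / requests_per_server)

print(concurrent_requests, servers_needed)
```

The caveat from the paragraph above still applies: every release can change the measured service time, so these inputs must be re-measured, not reused.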

With the proliferation of devices of various screen sizes running different operating systems, having a coherent mobile web strategy has become something of a nightmare for the enterprise. Gone are the days when the enterprise would just optimize the existing web site for mobile and serve the content. Today's consumer, with an ever more powerful device and increasing bandwidth, is looking for an experience that is equal to, or at times better than, the web.

When trying to create a mobile strategy, the enterprise needs to ponder over the following:
  • Should I build a native application for the mobile device?
  • Should I build a mobile-optimized web site?

Continuing the coverage of Hadoop components, we will go through the MapReduce component. MapReduce is a concept borrowed from the programming model of LISP. But before we jump into MapReduce, let's start with an example to understand how it works.

Given a couple of sentences, write a program that counts the number of times each word occurs.
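The steps Hadoop automates can be sketched in plain Python (no Hadoop involved; the function names are my own) to show the map and reduce phases of the word count:

```python
from collections import defaultdict
from itertools import chain

def map_phase(sentence):
    # Map step: emit a (word, 1) pair for every word in the sentence
    return [(word.lower(), 1) for word in sentence.split()]

def reduce_phase(pairs):
    # Shuffle + reduce step: group the pairs by word and sum the counts
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

sentences = ["the quick brown fox", "The lazy dog"]
mapped = chain.from_iterable(map_phase(s) for s in sentences)
word_counts = reduce_phase(mapped)
print(word_counts["the"])   # 2
```

In real Hadoop, the map calls run in parallel across the cluster and the framework handles the grouping between the two phases; the logic per phase stays this simple.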

Whenever a newbie wants to start learning Hadoop, the number of elements in the Hadoop stack is mind-boggling and at times difficult to comprehend. I am trying to demystify the whole stack and explain the basic pieces in my own way. Before we start talking about the Hadoop stack, let us take a step back and try to understand what led to the origins of Hadoop.

Problem – With the proliferation of the internet, the amount of data being stored keeps growing. Let's take the example of a search engine (like Google) that needs to index the large amount of data being generated. The search engine crawls and indexes the data, and the index is stored on and retrieved from a single storage device. As the data generated grows, the search index keeps growing as well.
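A hedged sketch of the usual way out of this single-device bottleneck, which Hadoop generalizes: partition the index across several storage nodes, for example by hashing each term (the node count and index contents below are made up):

```python
import hashlib

# Hash-partition a toy inverted index across several storage nodes.
# NUM_NODES and the index contents are made-up illustrations.
NUM_NODES = 4

def node_for(term: str) -> int:
    # Stable hash (unlike Python's built-in hash(), which is randomized per run)
    digest = hashlib.md5(term.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_NODES

index = {"hadoop": [1, 3], "mapreduce": [2], "hdfs": [1], "yarn": [2, 3]}

# Each node gets only the slice of the index that hashes to it
partitions = {n: {} for n in range(NUM_NODES)}
for term, postings in index.items():
    partitions[node_for(term)][term] = postings
```

Adding nodes then grows storage and lookup capacity roughly linearly, instead of one device being the ceiling.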

Performance is one word used to describe multiple scenarios when talking about application performance. When someone says, "I need a high-performance application", it might mean any or all of the following:
  • Low web latency application (meaning low page-loading times)
  • Application that can serve an ever-increasing number of users (scalability)
  • Application that does not go down (either highly available or continuously available)
For each of the above, as an architect you need to dig deeper to find out what the user is asking for. With the advent of cloud, every CIO is looking to build applications that meet all of these scenarios, and with elastic compute one tends to think that by throwing hardware at the application we can achieve all of these objectives.
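A quick illustration of why throwing hardware at the problem has limits is Amdahl's Law: if any fraction of the work is serial, speedup is capped no matter how many machines you add. A minimal sketch:

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
# of the work that parallelizes and n is the number of processors.
def speedup(parallel_fraction: float, processors: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# With 90% of the work parallelizable, even 1000 machines cap out near 10x.
print(round(speedup(0.9, 1000), 2))   # 9.91
```

So elastic compute helps scalability, but latency and availability still need architectural work on the serial parts.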

When designing and maintaining a large-scale software architecture, certain rules need to be adhered to religiously. In their absence, the software starts degrading, maintenance becomes difficult, adding features becomes a nightmare, and soon everything comes crashing down like a house of cards.
  • Tiering/layering your logical architecture – a clear logical separation between layers leads to simplified code navigation and comprehension. The layers should also follow a strict and clear naming convention for all packages and types. One can make use of tools to ensure the code follows the logical architecture and all dependencies are in order.
  • Cyclic dependencies – adhere to the principle of a well-defined, cycle-free application. Cyclic dependencies can quickly lead to bloated code, and every package should be validated for cyclic dependencies.
  • NCCD (normalized cumulative component dependency) – another metric that needs to be watched. The NCCD of compilation units should not exceed 7; if it grows beyond this threshold, isolate layers and subsystems by letting them expose only interfaces as entry points. Breaking cyclic dependencies can also shrink this metric considerably.
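As a hedged sketch of the kind of check such tools perform, a depth-first search over a package dependency graph (the package names below are hypothetical) can report whether any cycle exists:

```python
# Detect cycles in a package dependency graph via depth-first search.
# Nodes map to the packages they depend on; names are hypothetical.
def find_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for dep in graph.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                return True           # back edge: dep is on the current path
            if color.get(dep, WHITE) == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in graph if color[n] == WHITE)

clean = {"web": ["service"], "service": ["dao"], "dao": []}
cyclic = {"web": ["service"], "service": ["dao"], "dao": ["web"]}
```

Running such a check in the build pipeline catches a new cycle the moment it is introduced, before it hardens into the code base.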

In today's world, whenever you face a problem, the first impulse is to open Google and see whether other people have already faced a similar problem and how they resolved it. The good part is that you are most likely to find the solution too. Does that mean Google is the new knowledge manager for your enterprise?

Why do employees tend to search on Google/Bing for solutions rather than look at the KM systems?

The healthcare sector in India is poised to grow to $280 billion by 2020. The investment required over the next 10 years is to the tune of $86 billion, the majority of which will be contributed by the private sector. All this investment will require a lot of people to be integrated into the sector, and an IT backbone will be needed to service the ever-growing Indian population.

Recently, I was asked this simple question: do I need to do anything differently when developing applications that will be hosted in the cloud?

Before answering the question, my thought process went something like this –

Well, the cloud uses lots of commodity boxes, and failure is almost a given. The application needs to take care of resource failures, meaning it must build in fault tolerance. Besides, you may want to make use of additional cloud features (for example, if you are using AWS, then beyond EC2 there are S3, RDS, CloudFront, CloudWatch, load balancing, auto scaling and so on), but if you are migrating an existing application you need not do so in the first phase.
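A minimal sketch of that fault-tolerance idea: wrap a flaky remote call in retries with exponential backoff and jitter. The helper and its parameters are my own illustration, not any AWS API:

```python
import random
import time

def with_retries(fn, attempts=5, base_delay=0.1):
    """Call fn(), retrying on transient ConnectionError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                      # out of attempts: surface the failure
            # Back off exponentially, with jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

The same wrapper can be applied to any call that may fail transiently on commodity hardware: a database read, a queue publish, a downstream service call.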

The whole premise behind enterprise portals was to consolidate and bring together the various siloed web sites in an organization. The enterprise portal provides tools to that effect, like user management, a credential vault, personalization and collaboration, to name a few. But in recent times, enterprise portals are being used beyond the organization's intranets to create corporate/transactional sites for end customers. The rigors demanded of internet sites are very different from those of intranet sites, and at times enterprise portals are not able to meet those demands. We will look at some of the functionality expected from Web 2.0 internet sites that needs to be factored in when making that all-important decision of using an enterprise portal product.

On Tuesday 22nd January 2011, Packt Publishing released four brand new IBM books on a range of subjects:
  • IBM Lotus Quickr 8.5 for Domino Administration
  • Getting Started with IBM FileNet P8 Content Manager
  • IBM WebSphere Application Server v7.0 Security
  • IBM Rational Team Concert 2 Essentials

Cloud computing has become something of a buzzword. The availability of cheap computing power allows organizations to start up without a heavy capital cost in infrastructure. But the fundamental change it has brought is in the business models of new organizations starting up.
  • Massive computing – gaming companies require massive computing and generate a lot of data. Cheap computing power means new companies can scale overnight when demand starts going up. Firms like Zynga (remember FarmVille?) are products of this cheap computing power. Companies moving to Facebook encountered massive traffic overnight; it was the availability of cheap computing power from vendors like AWS that allowed them to scale. And today, the valuations these Facebook app companies command are mind-boggling.

A social media presence has become the de facto standard for all companies. Companies trying to create FB applications have multiple options for creating and integrating applications into FB. Facebook provides a rich set of APIs to allow developers to integrate FB functionality.

Recently, I came across Embarcadero RadPHP XE, which allows you to create PHP-based applications, including Facebook apps, in an easy and intuitive style. One need not rack one's brains on the FB developer site to understand the APIs, as RadPHP XE provides Facebook libraries that wrap the whole functionality.

Check out the tutorials