You can cache data in JSR 168 portlets in order to avoid unnecessary backend requests. First, you see how to leverage the IBM WebSphere Application Server dynacache infrastructure to store cached data. Next, you see how to generate cache keys for data which is shared across all components in the Web application and which has a session scope. Then, you look at a second cache key generation technique to address the need for caching data that is private to a portlet window. An example bookmark portlet illustrates both caching techniques.
Check out Caching data in JSR 168 portlets with WebSphere Portal V5.1
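The article's bookmark portlet illustrates both key styles. As a minimal sketch of the idea (not the article's actual code, and assuming a hypothetical BookmarkPortlet class), the shared key can be scoped to the HTTP session, while the window-private key additionally folds in the portlet window's render namespace:

import javax.portlet.GenericPortlet;
import javax.portlet.PortletSession;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class BookmarkPortlet extends GenericPortlet {

    // Key for data shared by all components of the Web application within one session
    private String buildSharedKey(RenderRequest request) {
        PortletSession session = request.getPortletSession();
        return "bookmarks:shared:" + session.getId();
    }

    // Key for data private to this portlet window; the render namespace
    // is unique for each portlet window on a page
    private String buildWindowKey(RenderRequest request, RenderResponse response) {
        return "bookmarks:window:" + request.getPortletSession().getId()
                + ":" + response.getNamespace();
    }
}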

Forrester reported that Web mail usage among North American online households is changing: Compared with last year, users access Web mail more regularly and are 33% more likely to say that Google, MSN Hotmail, or Yahoo! provides the email address they use most frequently. Web mail providers are upgrading the experience too, making Web mail look and feel more like a snappy desktop application than the slow, refresh-laden experience of several years ago. But these innovations aren't aimed at customer acquisition — instead, portals will exploit a better Web mail experience to entice users to spend more time on search and other services that drive revenue.

Learn how to get major security benefits by developing code that lets your J2EE™ applications transparently send identity information to your database. Get the benefits of J2EE, including CMP beans, and still leverage the power of your database security.
Performance considerations for custom portal code
— Stumbled on this while trying to performance-tune the portal application. A very useful article, especially if you are using custom portal code.


IBM's New WebSphere Process Server Gets Its First Business Rule Software
— ILOG today announced it is the first Business Rule Management Systems (BRMS) software vendor to offer integration with the new IBM WebSphere Process Server version 6.0, announced yesterday by IBM.


New to SOA and Web Services
— If you are new to SOA and Web services, this article is a very good place to start. It covers what SOA is, what you can do with it, the different component technologies in SOA, and how to build an SOA system.

How to access user information in JSR portlets
Several API calls are available in WebSphere Portal Server to access user information from JSR portlets. This technote explains which API should be used for which purpose. It also provides troubleshooting information for some common problems with the APIs.
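As a minimal sketch of the standard JSR 168 side of this (the WebSphere-specific APIs are what the technote covers), a portlet can read the authenticated user name and the USER_INFO attribute map; the user attribute name shown is only an assumption about how attributes are configured in the portal:

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Map;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.PortletRequest;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class UserInfoPortlet extends GenericPortlet {

    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();

        // Login name of the authenticated user, or null for anonymous users
        out.println("User: " + request.getRemoteUser());

        // Optional per-user attributes exposed by the portal as a read-only map
        Map userInfo = (Map) request.getAttribute(PortletRequest.USER_INFO);
        if (userInfo != null) {
            // "user.name.given" is a standard P3P-style attribute name; availability
            // depends on the portal's user attribute configuration
            out.println("Given name: " + userInfo.get("user.name.given"));
        }
    }
}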

IBM's Steve Mills: "We're Not Feeling Boxed In By Oracle"
— IBM is embracing SOA...in a bear hug. In a teleconference today fronted by Steve Mills, Senior Vice President and Group Executive of IBM Software Group, and Robert LeBlanc, General Manager of WebSphere, it was manifestly apparent that the company, which considers itself the world's leading provider of technologies that support Business Integration and Business Process Integration, is not looking over its shoulder at Oracle or anyone else. And IBM Global Services turns out to be the secret ingredient of Big Blue's software portfolio 'realignment.'

Figure 1: WebSphere Application Server Dynamic Cache overview


The Dynamic Cache is part of the IBM solution for improving performance of Java 2 Platform, Enterprise Edition (J2EE) applications running within WebSphere Application Server. It supports caching of Java servlets, JavaServer Pages (JSP), WebSphere command objects, Web services objects, and Java objects.

Figure 1 presents an overview of the Dynamic Cache engine. The Dynamic Cache stores its content in a memory-based Java object store. These objects can be accessed and manipulated by APIs that are provided with the service. The WebSphere Application Server also offers a Web application called Cache Monitor, which plugs into the cache to provide a view of its contents and statistics.
When its assigned space is full, the Dynamic Cache makes room for incoming entries by evicting existing content according to a replacement policy such as the least recently used (LRU) algorithm. It can also be configured to push evicted data to a disk cache, from which it can reclaim the data if needed in the future. Entries can also be removed from the cache by data invalidations, based on cache policies defined by the administrator.
Dynamic caching requires either that cache policies be configured for an application or that the application explicitly use the available caching APIs. The Dynamic Cache stores a caching policy for each cacheable object in the cachespec.xml file. This policy defines a set of rules specifying when and how to cache an object (for example, based on certain parameters and arguments) and how to set up dependency relationships for individual or group removal of entries from the cache.
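If an application takes the explicit API route, WebSphere exposes the object cache as a DistributedMap that can be looked up through JNDI. The following is a minimal sketch, assuming the default object cache instance at services/cache/distributedmap and a hypothetical loadQuoteFromBackend helper:

import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.ibm.websphere.cache.DistributedMap;

public class QuoteCacheHelper {

    public Object getQuote(String ticker) throws NamingException {
        InitialContext ctx = new InitialContext();
        DistributedMap cache =
                (DistributedMap) ctx.lookup("services/cache/distributedmap");

        String key = "quote:" + ticker;
        Object quote = cache.get(key);            // null on a cache miss
        if (quote == null) {
            quote = loadQuoteFromBackend(ticker); // hypothetical backend call
            cache.put(key, quote);                // cache it for subsequent requests
        }
        return quote;
    }

    private Object loadQuoteFromBackend(String ticker) {
        // placeholder for the real backend lookup
        return "quote-for-" + ticker;
    }
}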

Web Page/Fragment Caching

WAS Dynamic Cache provides caching for static and dynamic content of servlets and JavaServer Pages (JSP), including the page fragments that make up Web pages. You cache servlets and JSPs in WAS declaratively, configuring the caching with a cache policy defined as an XML deployment descriptor included in your Web application. The cache policy file, named cachespec.xml, is located in your Web module's WEB-INF directory along with the standard web.xml file. When configuring the cache policy for Dynamic Cache, you must consider which servlets or JSPs to cache, what makes an invocation of that servlet or JSP unique, and when to remove content from the cache.

Identifying Servlets or JSPs to Cache

WAS Dynamic Cache parses the cachespec.xml deployment descriptor on application startup and extracts a set of configuration parameters from each <cache-entry> element. Then, every time a new servlet or JSP is initialized (for example, when the servlet is first accessed), the cache attempts to match that servlet or JSP against each <cache-entry> element to find its configuration information.
One way to identify a cacheable servlet is by its fully qualified class name in the <name> element; whenever a servlet of the specified class is initialized, WAS Dynamic Cache matches that servlet with the configuration for this element. The <class> tag identifies the type of object being specified in the <name> tag. For servlets and JSPs, the <class> tag always has a value of servlet. Other types of cacheable entities (for example, commands) use different values for the <class> tag.
You can also identify a servlet or JSP by its Web path. For example, the following specification defines a cache policy for a servlet with a mapping in web.xml of /action/view:
<cache-entry>
<name>/action/view</name>
<class>servlet</class>
</cache-entry>
The specification of the Web path is relative to the Web application's context root; therefore, the context root isn't included in the servlet's name.

Identifying What Makes a Cache Entry Unique

Most servlets and JSPs make use of a variety of inputs to generate unique dynamic content (otherwise, static HTML is used). This variable input usually comes in the form of request parameters, browser cookies, request headers, session attributes, path information, and request attributes from parent or peer servlets. To correctly cache a servlet or JSP, the cache policy author must identify the minimum set of input variables that makes a servlet's or JSP's output unique. Dynamic Cache supports declaring these inputs through the use of <cache-id> and <component> tags.


<cache-entry>
<name>/viewQuote</name>
<class>servlet</class>
<cache-id>
<component id="ticker" type="parameter" />
<component id="style" type="session" />
</cache-id>
</cache-entry>


Figure 2: Displaying the View Quote servlet's dependencies

For example, Figure 2 shows that the View Quote servlet's output depends on a request parameter named "ticker" and on a style attribute that was previously stored in the user's HTTP session. When a user with an HTTP session attribute of style=frames requests the URL http://myhost/quoteapp/viewQuote?ticker=IBM, the following cache ID is generated:
/quoteapp/viewQuote:ticker=IBM:style=frames
Each request to the View Quote servlet with unique ticker and style attribute values produces a unique cache entry instance. WAS supports many different component types (such as request parameters, session attributes, cookies, request headers, and path information) that you can use to uniquely identify requests.
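The following is only an illustration of how the declared components map onto that key string, not the Dynamic Cache engine's actual internals:

import javax.servlet.http.HttpServletRequest;

public final class CacheIdIllustration {

    // Approximates an ID like "/quoteapp/viewQuote:ticker=IBM:style=frames"
    static String buildId(HttpServletRequest request) {
        String ticker = request.getParameter("ticker");            // <component type="parameter">
        Object style = request.getSession().getAttribute("style"); // <component type="session">
        return request.getContextPath() + request.getServletPath()
                + ":ticker=" + ticker + ":style=" + style;
    }

    private CacheIdIllustration() { }
}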

Removing Entries from the Cache

After populating the cache with servlet and JSP content, the next important consideration is how to remove that content. Dynamic Cache removes content in three circumstances: when an explicit timeout expires, when an entry is explicitly invalidated by cache ID or dependency ID, and when an entry is selected for eviction by the cache's replacement algorithm. Dynamic Cache manages the replacement algorithm, which uses a cache entry's priority and frequency of access to determine which entry to evict when the cache has exceeded its capacity. A cache policy can also give each cache entry a timeout value, for example five minutes (a <timeout> of 300 seconds).
There are three ways to explicitly remove entries from the cache: programmatically by cache ID, programmatically by dependency ID, and declaratively by dependency ID.
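For Java objects cached through DistributedMap, the two programmatic forms look roughly like the sketch below (the key and dependency ID values are illustrative); for servlet and JSP entries, the declarative form is an invalidation rule in cachespec.xml:

import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.ibm.websphere.cache.DistributedMap;
import com.ibm.websphere.cache.EntryInfo;

public class QuoteInvalidator {

    public void cacheAndInvalidate() throws NamingException {
        DistributedMap cache = (DistributedMap)
                new InitialContext().lookup("services/cache/distributedmap");

        // Put with priority 1, a 300-second timeout, and a "quotes" dependency ID
        cache.put("quote:IBM", "ibm-quote", 1, 300,
                  EntryInfo.NOT_SHARED, new Object[] { "quotes" });

        // 1. Programmatically by cache ID: removes just this entry
        cache.invalidate("quote:IBM");

        // 2. Programmatically by dependency ID: removes every entry tagged "quotes"
        cache.invalidate("quotes");
    }
}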

Enabling and Analyzing Runtime Dynamic Caching

After you've created Dynamic Cache policies to cache and control the appropriate page fragments in cachespec.xml, the next step, as the WAS administrator, is to enable servlet caching for the application. The WAS Web container configuration in the Administration Console provides a check box property for this. You'll find specific details for enabling caching by searching for "Dynamic Cache" in the WAS Info Center at http://publib.boulder.ibm.com/infocenter/wasinfo/index.jsp.
Once you've enabled the Dynamic Cache, your next step is to monitor and optimize application caching at runtime. WAS 5.0 provides two methods for runtime analysis: the Dynamic Cache runtime monitor application and the Tivoli Performance Viewer (formerly the WAS Resource Analyzer). Both methods detail runtime cache behavior by showing cached servlets, JSPs, and objects, as well as cache invalidations, Least Recently Used (LRU) evictions, and other key runtime data.
The Dynamic Cache monitor is an installable Web application provided as part of WAS 5.0. To install the cache monitor application, follow these steps:
  1. Install the cachemonitor.ear application from the <was_home>/installableApps directory.
  2. Access the cache monitor via a Web browser at http://<host>:<port>/cachemonitor. For example, http://localhost:9080/cachemonitor works for a WAS instance installed on the localhost node using the default HTTP port 9080.

Optimizing Dynamic Cache Runtime

You optimize caching performance by managing the WAS Dynamic Cache cached entries and memory usage. The number of distinct cached entries in a Web-based application can quickly grow. Each cache entry instance takes up memory in the WAS instance JVM. The amount of memory used varies based on the size of the object in the cache. A large JSP page or deeply nested Java object can require significant memory for each cache entry. You must consider memory constraints when setting the cache entry size. Setting it too large can result in system paging (drastically reducing performance) or even the dreaded Java out of memory error.
Memory limitations often restrict the Dynamic Cache from keeping all cache entries active, so you must take the following actions to keep performance high:
  • Adjust cache entry size for optimal cache size and memory requirements. The Dynamic Cache removes entries when the number of cached objects exceeds the WAS setting for cache entries. The default cache entry size in WAS 5.0 is 1,000 entries. If the number of cached objects exceeds the cache entry size, objects are evicted and must be re-created when they're accessed again, which reduces the performance efficiency of caching. The cache monitor application details the runtime statistics for cache entry size and number of active entries. If the number of active entries frequently reaches the cache entry size, you should increase the size setting on the Dynamic Cache configuration page to reduce overflow evictions.
  • Set cache entry priorities to control which entries get evicted first. When all cacheable objects can't be cached due to memory constraints, the Dynamic Cache offers a priority mechanism to better control which objects to evict. The cache uses an LRU algorithm when selecting candidates for eviction, which ensures that frequently hit cache entries remain in the cache. You can further tune this behavior by increasing the priority of cache entries that are expensive to compute. Priorities range from 1 to 10 and determine a cache entry's relative importance. In a stock-quote application, for example, you could raise the priority of user account cache entries so that rarely accessed, less expensive stock quotes are evicted from the cache first. You set a cache entry priority in the cachespec.xml file, as Figure 3 shows.


<cache-entry>
<name>/viewQuote</name>
<class>servlet</class>
<cache-id>
<component id="ticker" type="parameter" />
<component id="style" type="session" />
<priority>3</priority>
</cache-id>
</cache-entry>


Figure 3: Setting cache entry priority

  • Monitor for entries that have excessive invalidations and stop caching them. The Dynamic Cache also removes cached entries based on explicit timeouts and invalidations, as defined by the caching policies in the application's cachespec.xml. Invalidating cached objects is expensive and can reduce performance. You can monitor cache invalidations using the Tivoli Performance Viewer, and you should disable caching for entries that have a low cache-hit ratio or a high invalidation rate.

Distributed Caching

The WAS 5.0 Dynamic Cache can improve performance even more by providing caching in a cluster. A cluster is a cooperating group of WAS servers that are all running the same application code. WebSphere can provide dynamic replication to share cache data across a WAS cluster. You can replicate both application cache entries and invalidations.
There are three primary modes of operation for data replication:
  • None — Data is not replicated in the cluster.
  • Push — Data is immediately pushed to other cluster members as it's updated on one node.
  • Push Pull — The cache ID is immediately pushed to other cluster members so they know that an updated copy of the data is available. If a cluster member receives a request for that data in the future, it retrieves the data then (see the sketch after this list).
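For objects cached through DistributedMap, the replication mode can be chosen per entry. The sketch below assumes the EntryInfo sharing-policy constants correspond to the Push and Push Pull modes described above; the keys, values, and timeouts are illustrative:

import com.ibm.websphere.cache.DistributedMap;
import com.ibm.websphere.cache.EntryInfo;

public class ReplicatedCachePut {

    public void putShared(DistributedMap cache) {
        // Push: the value itself is replicated to the other cluster members
        cache.put("quote:IBM", "ibm-quote", 1, 300,
                  EntryInfo.SHARED_PUSH, null);

        // Push Pull: only the cache ID is pushed; peers fetch the value on demand
        cache.put("quote:SUN", "sun-quote", 1, 300,
                  EntryInfo.SHARED_PUSH_PULL, null);
    }
}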

Disk Offload

When the memory-based cache is insufficient to hold all the cached items your application requires, as described previously, you can enable the Dynamic Cache to overflow LRU-evicted cache items onto disk. WebSphere implements this overflow-to-disk capability using a technology called Hashtable On Disk (HTOD).
HTOD manages a virtual hash table on the disk in 1 GB file chunks. Individual cache entries are then hashed and placed inside the virtual storage in a manner similar to a memory-based heap. This methodology provides high performance and low latency for disk storage and retrieval. To enable the overflow-to-disk capability, simply select the "Enable disk offload" option on the Dynamic Cache configuration page and specify a directory path for the cache in the "Offload location" field.

1. WebSphere Dynamic Cache: Improving J2EE application performance, by R. Bakalova, A. Chow, C. Fricano, P. Jain, N. Kodali, D. Poirier, S. Sankaran, and D. Shupp
2. IBM WebSphere Portal for Multiplatforms V5 Handbook (IBM Redbook)
3. WebSphere Information Center
WebSphere Application Server Java Dumps
— This article is meant to bring you up to speed quickly on Java dumps and their debugging purposes. It assumes that you're familiar with basic Java, the Java Virtual Machine (JVM), and threading concepts. Some information about Java dumps and their contents is intentionally omitted to simplify things, since it isn't relevant to the type of problem determination discussed here.
IBM has just signed an agreement to acquire PureEdge Solutions, a pure-play e-forms vendor known for its focus on secure, XML-based e-forms. IBM's acquisition underscores e-forms' role as a core technology for automating business processes and allows IBM Workplace customers to seamlessly integrate e-forms into back-office systems like ERP and CRM. This acquisition strengthens IBM Workplace and puts IBM in direct competition with Microsoft InfoPath and Adobe LiveCycle. It also gives current Lotus Domino e-forms customers a migration path from proprietary e-forms to XML-based forms. IBM now has a strong e-forms solution that can be leveraged across IBM's collaboration, enterprise content management (ECM), and business process management (BPM) offerings, and is primed to be a strong contender in the growing e-forms market.

Jinwoo Hwang, an IBM software engineer on the WebSphere Application Server Technical Support Team in Research Triangle Park, has released HeapAnalyzer 1.3.5, which finds possible Java heap leak areas through its heuristic search engine and its analysis of Java heap dumps.

HeapAnalyzer analyzes Java heap dumps by parsing the Java heap dump, creating directional graphs, transforming them into directional trees, and executing the heuristic search engine.

The Java heap is the area of storage that holds objects, arrays, and classes. The Garbage Collector allocates areas of storage in the heap, and an object remains live as long as a reference to it exists somewhere in the active state of the JVM; in other words, the object is reachable. When an object ceases to be referenced from the active state, it becomes garbage and can be reclaimed for reuse. When this reclamation occurs, the Garbage Collector must run any finalizer and ensure that internal JVM resources associated with the object are returned to the pool of such resources. A Java heap dump is a snapshot of the Java heap at a specific point in time.
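As a generic illustration of the kind of leak such a tool surfaces (not taken from HeapAnalyzer itself), a long-lived static collection that keeps accumulating references prevents its objects from ever becoming unreachable, so they survive every collection and show up as a steadily growing subtree in the heap dump:

import java.util.ArrayList;
import java.util.List;

public class LeakyRegistry {

    // Reachable from a GC root (the class itself), so everything referenced here stays live
    private static final List RETAINED = new ArrayList();

    public void handleRequest() {
        byte[] buffer = new byte[64 * 1024]; // per-request scratch buffer
        RETAINED.add(buffer);                // bug: the reference is never removed,
                                             // so the heap grows with every call
    }
}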

The HeapAnalyzer tool can be downloaded here.

If you plan to build a scalable and highly available Website, you need to understand clustering. In this article, Abraham Kang introduces J2EE clustering, shows how to implement clusters, and examines how Bluestone Total-e-server, Sybase Enterprise Application Server, SilverStream Application Server, and WebLogic Application Server differ in their approaches. With this knowledge you will be able to design and implement effective and efficient J2EE applications.
Putting WAS on Unix
— In the last part of this three-part series, we look at some advanced topics and further considerations for running WebSphere effectively under UNIX, including monitoring, security, and resilience.
Inter-Portlet Communications
— This article demonstrates the steps required to implement JSR 168-compliant cooperative portlets using IBM Rational Application Developer V6.0 and WebSphere Portal Server V5.1. It illustrates passing multiple values from a source portlet to a target portlet without defining a complex data type inside the WSDL file.


Your Guide to Portal Clustering in WebSphere Portal Server 5.1
— Some things in WebSphere PortalServer work well and are well documented. Other things are well documented and work well in theory. Still other things have okay documentation and will work well when all of the WebSphere stars are aligned. Depending on your implementation, Portal Clustering can fit into all three categories.


The Portal Scripting Interface
— One of the great advantages of the WebSphere software platform is that it's been built with a great deal of flexibility. A product simply wouldn't bear the WebSphere name if there weren't several different ways to do things, and WebSphere Portal Server is no exception. With the release of version 5.1, IBM has added another way to administer the configuration of the Portal. This is sure to delight the poor, overworked Portal administrator who doesn't want to learn the art of XMLAccess and wants to avoid the use of a Web-based administration interface at all costs.

Authentication in WebSphere Portal
— Enterprise application integration (EAI) is a prime objective driving the decision to implement a portal. Portals are often used to integrate data and applications from remote systems and present them in a unified manner to users through a Web-based workspace. Because these back-end systems can contain sensitive business information and functionality (for example, a company's order control system) or private data (e-mail or employee records), access should be well controlled.

