5 Key Considerations for Implementing Web Analytics with IBM WebSphere Portal

If you’re implementing a portal site, chances are that you are also implementing (or at least planning to implement) Web Analytics in order to understand user behavior, measure what works and what doesn’t on your site, perform A/B testing, and improve your site’s performance and conversion rates.  The information gained by successfully implementing Web Analytics can be invaluable in improving your user experience, assisting with online marketing strategies, and maximizing sales lead generation, all key aspects of successfully running your online business.

IBM WebSphere Portal has a built-in framework that integrates with various market leaders in the Web Analytics space, such as IBM Digital Analytics (aka Coremetrics), Adobe Analytics (aka Omniture), and WebTrends.  Portal’s Active Site Analytics (ASA) framework generates page/application metadata, which can then be aggregated and sent to your web analytics solution.  You can read more about the framework and its technical implementation in IBM’s Knowledge Center.

Here are 5 considerations that can help you successfully implement Web Analytics on IBM’s WebSphere Portal.  The list itself is agnostic to the different Web Analytics solutions; however, the “Guidance” section for each step is based on an IBM Digital Analytics implementation.  Please be aware that Web Analytics solutions differ in functionality and may have a different method to achieve the same thing.  Additionally, IBM WebSphere Portal is known to have better integration with IBM Digital Analytics, an example being their Analytics Overlay Reports.

  1. Understand who your users really are
  2. Differentiate between a Portal page and a functional page
  3. Support business driven categorization of pages and applications
  4. Allow for applications integrated on the glass to report analytics data
  5. Hidden applications should not be reported unless used

Pre-requisites:

  • Your Web Analytics solution has already been installed and configured
  • Basic knowledge of JavaScript and HTML
  • IBM WebSphere Portal version 8.0.0.1 or later

1. Understand who your users really are

A key aspect of a Web Analytics solution is to be able to do customer segmentation / profiling.  Customer segmentation is the practice of grouping users based on similar characteristics (such as location, age, gender, etc).  By default IBM WebSphere Portal doesn’t share much user information when it sends registration/visitor data, so this is likely to get missed.  As part of your implementation process you should first understand how your Web Analytics solution leverages user data, then extend your Portal solution to populate the data needed to segment your customers.  If you’re using the content targeting feature and have already created segments in your Portal solution, ideally you should have the same data to recreate the same segments in your analytics solution.

Guidance for IBM Digital Analytics

IBM Digital Analytics tracks user information using “Registration Tags”.  The aggregator provided by IBM uses this type of tag and sends a visitor ID along with the userName.  My recommendation is to modify the aggregator and add your own user data there via Registration Attributes.  You can retrieve user data using EL beans, for example, and then pass it to the aggregator.

Each registration attribute specified in the aggregator will also need to be specified in the IBM Digital Analytics Admin section (in order for it to become available for segmentation / Explorer reporting).  Given that you can only use up to 50 registration attributes, and the first 15 are special attributes for creating segments, you should pay close attention to which attributes belong in the first 15.  As an example, you wouldn’t create a segment based on first name or last name, so those are good candidates for an attribute number between 16 and 50.
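A quick note on the mechanics: the aggregator sends the registration attributes as a single string joined with the “-_-” delimiter, and because JavaScript’s Array.prototype.join renders unassigned slots as empty strings, you can assign attributes by position in a sparse array and the unused positions stay empty.  A minimal sketch (the attribute values below are made up for illustration):

```javascript
// Sparse array: each index maps to a registration attribute position
// (index 0 = attribute 1, index 15 = attribute 16, and so on).
var attr = [];
attr[0] = "F";       // attribute 1 (e.g. gender)
attr[1] = "en";      // attribute 2 (e.g. preferred language)
attr[15] = "Jane";   // attribute 16 (e.g. first name)
attr[16] = "Doe";    // attribute 17 (e.g. last name)

// join() renders the unassigned slots (indices 2-14) as empty strings,
// so every attribute keeps its position in the delimited string.
var regAttr = attr.join("-_-");
console.log(regAttr.split("-_-").length); // 17 positions, indices 0..16
```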

Below is a brief sample of how to update CoremetricsAggregator.js to accomplish this.  It assumes that you have implemented a JavaScript function that retrieves user data.  Also note that I’ve changed what the cmReg cookie stores; instead of a simple “Y” value, I’ve specified the actual visitor ID.  This really helps during testing, as we often use the same browser with different test accounts.

var processRegistrationTag = function(/*JSON*/ d) {
    var id = single(d["asa.visitor"]);
    if (id && cI("cmReg") != id) {
        // retrieve the user profile once instead of on every field access
        var user = getUserData();
        var attr = [];
        attr[0] = user.Gender;            // registration attribute 1
        attr[1] = user.PreferredLanguage; // registration attribute 2
        attr[15] = user.FirstName;        // registration attribute 16
        attr[16] = user.LastName;         // registration attribute 17
        var regAttr = attr.join("-_-");
        cmCreateRegistrationTag(id, user.Email, user.City, user.State, user.PostalCode, user.Country, regAttr);
        document.cookie = "cmReg=" + id + "; path=/";
    }
};

Within the IBM Digital Analytics “Admin” UI, you would then need to set these registration attributes in the Explorer Attributes section.  Please note that Email/City/State/Postal Code/Country are built-in attributes, so there’s nothing to set for them in the Admin UI.

Figure 1: Setting Explorer registration attributes


Registration attributes 1-15 should also be specified in the “Extra Fields” section.  This would allow that data to be exported via Standard Data Export.  See IBM Digital Analytics Implementation Guide for more details.

2. Differentiate between a Portal page and a functional page

Depending on how your Portal site was designed, a portal page might not necessarily equate to a single functional page.  Say, for example, you have a “Product Detail” page, which takes a “Product ID” as a parameter and renders the product details and associated content based on the ID.  To IBM WebSphere Portal this is a single portal page, and that’s how it’ll likely be reported; however, to your business users it could be interpreted as multiple functional pages.  From a web analytics perspective, you care more about the actual functional pages, since that’s what visitors see and interact with.

Guidance for IBM Digital Analytics

All portal page titles are reported to IBM Digital Analytics out of the box.  If those page titles match all your functional pages, you’ll just need to ensure that the title you see in the browser is what’s being reported to IBM Digital Analytics.  There are other situations, though, in which you should override the page title with a portlet-generated page title.  For these situations, I recommend implementing a page title override system.

In order to implement a page title override system, you have a couple of options:

  1. Use two-phase rendering on the necessary portlets to override the page title.  This might be the cleanest approach, but it’s a bit complex and offers little flexibility.  For more information, please read “Modifying the HTML head section of a JSR 286 portlet” in the IBM Knowledge Center.
  2. Output override metadata from the necessary portlets.  This approach consists of updating the portlets to write metadata that overrides the page metadata, and then processing the new tags in CoremetricsAggregator.js.  This approach is very robust, as you can override any page metadata (not just the title), and it is simple to implement.
    Sample override metadata:

    <div id="asa.page.override" style="display:none;">
        <span class="asa.page.title">Details for Scooter M3 Model</span>
        <span class="asa.page.breadcrumb">Products/Scooter/M3</span>
    </div>

    We would then update the parsePage function in CoremetricsAggregator.js to apply the override:

    var parsePage = function(/*DOMNode[]*/ ns, /*String*/pageID) {
        var pRoot = byId("asa.page");
        var d = {};
        if (pRoot) {
            parse(pRoot, d);
        } else if (console) {
            console.log("WARNING: Root element not found.");
        }
        // check for page override
        var pRootOverride = byId("asa.page.override");
        var dOverride = {};
        if (pRootOverride) {
            parse(pRootOverride, dOverride);
            if (dOverride['asa.page.title']) {
                d['asa.page.title'] = dOverride['asa.page.title'];
            }
            if (dOverride['asa.page.breadcrumb']) {
                d['asa.page.breadcrumb'] = dOverride['asa.page.breadcrumb'];
            }
            if (dOverride['asa.search.query']) {
                d['asa.search.query'] = dOverride['asa.search.query'];
            }
            if (dOverride['asa.search.results']) {
                d['asa.search.results'] = dOverride['asa.search.results'];
            }
        }
        // update our cache
        pTags = d;
        // communicate data to Coremetrics
        if (!isEmpty(d)) {
            processRegistrationTag(d);
            processPageTags(d);
        }
    };

    In the example above, we allowed four tags to be overridden by a portlet: asa.page.title, asa.page.breadcrumb, asa.search.query, and asa.search.results.  Also note that any tag that is not overridden falls back to its original page value.
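If you later need to let portlets override additional tags, the four repeated if-blocks in parsePage can be collapsed into a loop over a whitelist.  This is a sketch of that design choice, not part of the shipped aggregator; the helper name applyOverrides is my own:

```javascript
// Page tags a portlet is allowed to override; extend this list as needed.
var OVERRIDABLE_TAGS = [
    "asa.page.title",
    "asa.page.breadcrumb",
    "asa.search.query",
    "asa.search.results"
];

// Copy any overridden values from dOverride onto d and return d.
// Tags not in the whitelist are ignored, so portlets cannot clobber
// arbitrary page metadata.
function applyOverrides(d, dOverride) {
    for (var i = 0; i < OVERRIDABLE_TAGS.length; i++) {
        var tag = OVERRIDABLE_TAGS[i];
        if (dOverride[tag]) {
            d[tag] = dOverride[tag];
        }
    }
    return d;
}
```

Within parsePage you would then call applyOverrides(d, dOverride) in place of the four if-blocks.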

3. Support business driven categorization of pages and applications

Categorizing pages (or creating Content Groups) is essential to understanding user behavior on your site.  This is especially true when your site has a large number of pages with similar functional areas.  The default categorization provided by IBM WebSphere Portal is a single generic value that doesn’t contribute much to understanding user behavior.  You should empower your business users or content authors to easily categorize pages and applications, thus improving your analysis of functional areas on your site, improving speed to market, and minimizing the impact on the I/T organization.

Guidance for IBM Digital Analytics

The only category reported by the CoremetricsAggregator is “asa.page”, and it gets reported by every page.  Ideally, content authors are able to change categories without I/T involvement, since categorization is driven by functionality or business use cases and can be volatile.  To accomplish this you can leverage Portal’s Page Properties, which get syndicated as part of your WCM syndication (assuming you’re leveraging Portal’s Managed Pages).  This information is specified in the IBM Digital Analytics Implementation Guide (page 106); however, the aggregator sample provided does not implement it.

To edit/add a category, a content author would need to go into Edit Mode -> Page -> Details -> Page Properties -> Advanced tab -> enter/update asa_js_PageCatID.

Update CoremetricsAggregator.js to report the category (processPageTags function):

var processPageTags = function(/*JSON*/ d) {
    var pgTitle = single(d["asa.page.title"]) || single(d["asa.page.id"]);
    var query = single(d["asa.search.query"]);
    var res = single(d["asa.search.results"]);
    var cat = "asa.page";
    if (typeof ibm_page_metadata != 'undefined') {
        var catPage = single(ibm_page_metadata["PageCatID"]);
        if (catPage && catPage.length > 0) {
            cat = catPage;
        }
    }
 
    if (res) res += "";
    var attr = [];
    setPageAttributes(attr, d);
    var pgAttr = attr.join("-_-");
    // create pageview tag
    cmCreatePageviewTag(pgTitle, cat, query, res, pgAttr);
    // process analytics tags
    processAnalyticsTags(d, pgTitle, pgAttr);
};

Note that the cmCreatePageviewTag has been updated to send the new category variable.

4. Allow for applications integrated on the glass to report analytics data

Arguably some of the most powerful features of IBM WebSphere Portal are its integration capabilities.  One of my favorites is the Web Application Bridge (WAB), which allows you to surface existing web applications into your site rapidly and seamlessly.  The ASA framework, however, does not offer APIs or built-in mechanisms for these existing applications to take part in your analytics reporting.  If you have analytics requirements involving these types of on-the-glass applications, you should plan on some customization to enable reporting of analytics data.  WAB leverages iframe technology, and each web analytics vendor might provide a different way to handle these scenarios.

Guidance for IBM Digital Analytics

The steps needed to implement this are well documented in the IBM Digital Analytics Implementation Guide.  Please refer to section 2.8, “Tagging Frames”, and follow the documented instructions.  The key aspect of the solution is that each application being rendered within an iframe must include the eluminate.js library and the cmSetClientID script blocks.  Do keep in mind that you might have multiple environments with corresponding domains/client IDs; the embedded applications would need to keep those in sync with your Portal site.
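One way to keep the client IDs in sync across environments is to derive them from the hostname inside the framed application, rather than hard-coding a single value.  A minimal sketch, where the hostnames and client IDs are placeholders you would replace with your own:

```javascript
// Hypothetical map of environment hostnames to IBM Digital Analytics
// client IDs -- substitute your real domains and IDs.
var CLIENT_IDS = {
    "www.example.com": "90000000",    // production
    "stage.example.com": "60000000",  // staging
    "dev.example.com": "60000001"     // development
};

// Resolve the client ID for the current environment, falling back to
// the development ID for unrecognized hosts (e.g. local workstations).
function resolveClientId(hostname) {
    return CLIENT_IDS[hostname] || "60000001";
}

// Inside the framed application, after eluminate.js has loaded, the
// resolved ID would be passed to your cmSetClientID script block
// (see section 2.8 of the Implementation Guide for the exact call).
```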

5. Hidden applications should not be reported unless used

IBM WebSphere Portal provides a hidden portlet layout container that can host portlets that are meant to stay hidden until explicitly invoked by a user.  Your developers might also have developed portlets that are essentially containers for modal dialogs.  These hidden portlets automatically populate the analytics metadata and are reported.  In my opinion, you should not report these portlets until the user starts interacting with them.  From a web analytics standpoint there’s very little value in knowing that the application was loaded on the page; however, there’s significant value in understanding how your users interacted with it.

Guidance for IBM Digital Analytics

This is actually a bit of a challenge, mostly due to how the microformats work today (i.e., they’re specified in the HTML DOM and parsed/processed at page load time).  There are a few ways to solve this problem; the one described below is the simplest, but it does require tinkering with your applications a bit.

  1. Generate an “asa.portlet.hidden” analytics tag from the hidden portlet
    Sample tag:
<span class="asa.portlet.hidden">true</span>

  2. Update CoremetricsAggregator.js to ignore portlets that have the asa.portlet.hidden tag
    Updated parsePortlet JS function:


var parsePortlet = function(/*DOMNode[]*/ ns, /*String*/ portletID){
    if (!ns) {
        if (console) console.log("WARNING: DOM root node for portlet " + portletID + " not found.");
        return;
    }
    if (ns.length > 0) {
        var d = {};
        copy(pTags, d);
        for (var i = 0, l = ns.length; i < l; ++i) {
            parse(ns[i], d);
        }
        if (!isEmpty(d)) {
            var ptHidden = single(d["asa.portlet.hidden"]);
            if (!ptHidden || ptHidden.length <= 0) {
                processPortletTags(d);
            }
        }
    }
};


  3. Manually report the portlet element tag directly using the Coremetrics JavaScript API
    From your portlet, call the following JavaScript function when a user opens your hidden portlet or interacts with it.


cmCreateElementTag("Portlet Title", "Portlet Category", attributes);
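To tie this last step together, the hidden portlet can arrange to fire the element tag only on the user’s first interaction.  This is a sketch under assumptions: the trigger node, helper name, and tag values below are all hypothetical placeholders for your own portlet’s wiring:

```javascript
// Fire the Coremetrics element tag once, on the first interaction only.
// "triggerNode" is whatever DOM element opens the hidden portlet
// (a button, a link, etc.); the tag values are placeholders.
function reportOnFirstUse(triggerNode, portletTitle, portletCategory) {
    var reported = false;
    triggerNode.addEventListener("click", function () {
        if (!reported) {
            reported = true; // guard so repeat interactions are not re-reported
            cmCreateElementTag(portletTitle, portletCategory);
        }
    });
}

// Example wiring (element ID is hypothetical):
// reportOnFirstUse(document.getElementById("openDialogButton"),
//                  "Portlet Title", "Portlet Category");
```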

Summary

I have just provided 5 implementation considerations for integrating IBM WebSphere Portal with your analytics solution.  In a future post, I will discuss additional considerations when specifically using IBM Digital Analytics as your solution, along with some more code samples.  The topics will include: configuration management, Virtual Portals and IBM Digital Analytics Multisite, testing tools, video tracking, and search.


Provisioning Vagrant Windows Environments with PowerShell Desired State Configuration

In this article, I’ll describe an approach to provisioning using PowerShell Desired State Configuration (DSC) in Vagrant.  I’ll deploy a static website to IIS on Windows Server 2012 to showcase this approach.  At the end of the article, I’ve included the final Vagrantfile as well as a DSC configuration script, but I recommend reading through the steps to get a sense of why things were done this way.

On another note, I’m currently employing this setup to develop and test custom DSC modules.  Using Vagrant for this purpose has saved me quite a bit of time and given me and fellow developers a nice development workflow.

Pre-requisites:

Create Windows Server 2012 Vagrant Box

The focus of this article is on provisioning.  However, I’m including a summary of the steps taken to create a WS2012 box for VirtualBox using Packer (for information on manually creating Vagrant Windows boxes, see my previous post).

  1. Clone packer-windows Git Repository and copy your WS2012 ISO to “iso” folder in packer-windows (If you’re not using Git, you can just download the latest release of packer-windows)
  2. Update each builder in packer-windows\windows_2012_r2.json with a relative path to your iso and your respective checksum (I used the SHA1 from MSDN)
    "iso_url": "./iso/en_windows_server_2012_r2_essentials_with_update_x64_dvd_4119207.iso", 
    "iso_checksum_type": "sha1", 
    "iso_checksum": "316A2...",
  3. Optional Config:
    1. Disabled Windows Update in packer-windows\answer_files\2012_r2\Autounattend.xml
    2. If not using the Windows Evaluation ISO, Add your product key to packer-windows\answer_files\2012_r2\Autounattend.xml. 
    3. If installing a Windows Server 2012 edition other than Standard or Standard Core, you may want to set headless to false in the json file and use the UI to finish the installation (in my case I did this with WS2012 R2 Essentials)
    4. Remove the VMWare builder in the json file if you’re just interested in the Virtual Box VM (or vice versa)
  4. Run command:
    packer build windows_2012_r2.json
  5. Add newly created box to vagrant:
    vagrant box add ws2012e_r2_base windows_2012_r2_virtualbox.box

Setup WMF 5.0 and PowerShellGet

In order to use the new package manager “PowerShellGet”, we need to install the latest WMF 5.0 Preview and configure a PowerShellGet repository.  For the purpose of this exercise, we’ll configure the current central repository, called the “PowerShell Gallery”.  These steps assume you have followed the steps in the previous section, or that you have a Windows vagrant box with WinRM communications already configured and working.

  1. Create a new Vagrant project based on your WS2012 vagrant box and enable WinRM
    C:\VMs\>mkdir WMF5VM
    C:\VMs\>cd WMF5VM
    C:\VMs\WMF5VM\>vagrant init ws2012e_r2_base
  2. Edit generated Vagrantfile: Set the communicator to winrm and if using VirtualBox you can give it a name to make repackaging a bit easier.
    config.vm.communicator = "winrm"
    config.vm.provider "virtualbox" do |v|
     v.name = "ws2012evm"
    end
  3. Bring up the VM and RDP into it
    C:\VMs\WMF5VM\>vagrant up
    C:\VMs\WMF5VM\>vagrant rdp
  4. Install WMF 5.0 November Preview on the Guest VM via RDP or the GUI
  5. Configure the guest VM to be part of the same Windows domain as the host, add your host computer as a trusted host on WinRM, or use a wildcard (please note that using a wildcard is not recommended for security reasons):
    PS C:\>winrm set winrm/config/client @{TrustedHosts="*"}
    
  6. Install NuGet (See Getting Started with the PowerShell Gallery).
    PS C:\>Get-PackageProvider -Name NuGet -ForceBootstrap
    
  7. Trust the Microsoft PowerShell Gallery
    PS C:\>Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
    
  8. Shutdown VM, repackage as a new vagrant box, and add it to the index
    vagrant package --base <your vm as listed in VirtualBox> --output <location of your new box file>

    C:\VMs\WMF5VM\>vagrant halt
    C:\VMs\WMF5VM\>vagrant package --base ws2012evm --output C:\boxes\ws2012e_r2_wmf5.box
    C:\VMs\WMF5VM\>vagrant box add ws2012e_r2_wmf5 C:\boxes\ws2012e_r2_wmf5.box

Create a new DSC/WebSite Project and Initialize Vagrant

  1. Create a new directory for the vagrant project and initialize vagrant:
    C:\VMs\>mkdir DSCVM
    C:\VMs\>cd DSCVM
    C:\VMs\DSCVM\>vagrant init ws2012e_r2_wmf5
  2. Create folder structure for storing your website files and DSC config/mof/custom module files.  The structure below is a good starting point, but is only meant to be an example.  You may want to do things a bit differently based on your needs.
    MyProject
    ├── Vagrantfile
    ├── DSC
    │   ├── Config
    │   │   ├── <DSC Configuration Files>.ps1
    │   ├── MOF
    ├── MySite
    │   ├── index.html
  3. Create a new DSC configuration file
    1. Add parameters to the top of the file which may need to be passed from the Vagrantfile.  Parameters I’d recommend to start with:
      param (
          [string]$nodeName = "localhost",
          [string]$mofFolder = "C:\tmp\MOF\"
      )
    2. We’re “pushing” the DSC configuration, so I suggest adding code to clean up your MOF folder on every provision
      if (Test-Path($mofFolder)) {
          Remove-Item $mofFolder -Recurse -Force
      }
      New-Item -ItemType directory -Path $mofFolder | Out-Null
      Set-Location $mofFolder | Out-Null
    3. Create a simple DSC Configuration section.  I recommend starting with simple, out-of-the-box DSC modules to test your workflow first.
      Configuration MySite {
          param (
              [Parameter(Mandatory)]
              [ValidateNotNullOrEmpty()]
              [string]
              $NodeName
          )
      
          Node $NodeName
          {
              WindowsFeature IIS
              {
                  Ensure = "Present"
                  Name = "Web-Server"
              }
          }
      }
    4. Add a function call to the DSC Configuration function to generate the MOF file
      MySite -NodeName $nodeName
    5. Save as MySiteConfig.ps1 in the DSC/Config folder created earlier
  4. Update Vagrantfile
    1. Configure winrm as the communications mechanism in the Vagrantfile, and forward any ports necessary to test your configuration.  In my case I forwarded the ports for RDP (3389) and IIS (80, 443)
      config.vm.communicator = "winrm"
      config.vm.network "forwarded_port", host: 33389, guest: 3389
      config.vm.network "forwarded_port", host: 8080, guest: 80
      config.vm.network "forwarded_port", host: 4443, guest: 443
      
    2. Create a shell provision script that will call your DSC configuration script and generate the MOF file
      config.vm.provision "shell" do |s|
          s.path = "DSC/Config/MySiteConfig.ps1"
          s.args = ["localhost", "C:\\vagrant\\DSC\\MOF"]
      end
    3. Add an inline-shell provision script that starts the DSC configuration based on the newly created MOF file
      config.vm.provision "shell" do |s|
          s.inline = "Start-DSCConfiguration -Path C:\\vagrant\\DSC\\MOF\\MySite\\ -Force -Wait -Verbose"
      end

      Note: I highly recommend using -Verbose when calling Start-DSCConfiguration, as it gives you a lot of great information in case things don’t go as expected.

  5. Run vagrant up and verify the configuration worked by launching RDP and checking that the default IIS website is up and running from a browser on the host machine
    C:\VMs\DSCVM\>vagrant up --provision
    C:\VMs\DSCVM\>vagrant rdp

    After your VM is running, go to http://localhost:8080/. You should see the default IIS page.

Using Third-Party DSC Modules

The true power of DSC comes from the large number of DSC modules being published and shared every day by Microsoft and the PowerShell community.  Many of these modules are uploaded to the central repository, the PowerShell Gallery.  In this section, we’ll leverage the package manager PowerShellGet to install the xWebAdministration DSC module, which allows us to manage IIS websites, app pools, etc.

  1. Install the xWebAdministration module.  We’ll create another provisioner script to take care of any DSC module dependencies for us (similar in spirit to how Berkshelf works with Chef, although here it’s just a simple script).
    In the Vagrantfile, define the script prior to Vagrant.configure

    $dscModDepScript = <<SCRIPT
        Install-Module -Name xWebAdministration -Version 1.3.2.2
        Get-DscResource
    SCRIPT
    

    Note: Get-DscResource is a nice way to confirm you’ve loaded all the necessary modules.  It’ll print all modules loaded on the Vagrant output screen. You only have to call it once.
    Call the script (prior to any DSC provisioner)

    config.vm.provision "shell" do |s|
        s.inline = $dscModDepScript
    end
    
  2. Import the DSC module.  Now that we have a DSC module installed, we can start using its resources in our DSC configuration.
    Import it prior to the Node script block

    Import-DscResource -Module xWebAdministration
    Node $NodeName {...}
    

    Use any DSC resource from that module. In my case I’ll use the xWebsite DSC Resource.  In addition to this resource I’ll use the File DSC resource to copy any files/folders in the MySite folder to the default IIS website.

    File WebProject {
      Ensure = "Present"
      SourcePath = "C:\vagrant\MySite\"
      DestinationPath = "C:\inetpub\wwwroot"
      Recurse = $true
      Type = "Directory"
    }
    
    xWebsite DefaultSite { 
      Ensure = "Present" 
      Name = "Default Web Site" 
      State = "Started" 
      PhysicalPath = "C:\inetpub\wwwroot" 
      DependsOn = @("[WindowsFeature]IIS", "[File]WebProject")
    }
    

    Note: I’ve hard-coded the SourcePath above.  Ideally you’d pass this as an argument to your configuration, which I’ve done in the final script at the end of the article.

  3. Develop an index.html or copy your favorite static html site to the MySite folder
    <html>
    <head><title>DSC Site</title></head>
    <body><h1>Hello DSC!</h1></body>
    </html>
    
  4. Run vagrant provision again (or vagrant up if your vagrant VM is down/destroyed)
  5. Test your new site (If using my sample, you should see Hello DSC! by launching a browser on your host machine and going to http://localhost:8080)

Food for thought and final files

  • As I stated earlier, I use this setup to develop/test custom DSC modules.  The dscModDepScript in the Vagrantfile below shows how to install your custom DSC modules (Get-DscResource would take care of loading them as well).
  • The DSC configuration file might need a bit of tinkering if you were to use it with something like Microsoft Release Management (this file is optimized to work with Vagrant).
  • The latest version of Vagrant addressed some issues around passing arguments to PowerShell scripts, as well as RDP.  I highly recommend installing the latest version, as the sample Vagrantfile will not work with older versions.  In addition, I’ve used all 3 types of shell provisioning (inline, inline+variable, path) and passed arguments to them (as an example of how this works with Vagrant).

Vagrantfile:

DSC Config File: DSC/Config/MySiteConfig.ps1

Up and Running with IBM Script Portlet

In this article, I’ll describe how to install the IBM Script Portlet on IBM WebSphere Portal 8.5 and create scripts that leverage some of its key features.  The Script Portlet allows developers with primarily client-side development skills (HTML/CSS/JavaScript) to develop portlets rapidly, both offline in their favorite editors/IDEs and within WebSphere Portal itself with a JSFiddle-like experience.  To find out more about it, visit the IBM Greenhouse Solution Catalog and the IBM Knowledge Center.

Pre-requisites:

Install IBM Script Portlet

The instructions on how to install are well documented by IBM in the Knowledge Center.  Below are my notes on how to install.

  1. Start WebSphere_Portal server
  2. Unzip “IBM Script Portlet for WebSphere Portal V1.1.zip” to a temp location
  3. Install PAA: From your shell, navigate to <wp_profile>/ConfigEngine and run the following command
    C:\IBM\WebSphere\wp_profile\ConfigEngine> ConfigEngine.bat install-paa -DPAALocation=C:\temp\ibm\scriptportlet-app-1.0-SNAPSHOT.paa -DWasPassword=<password> -DPortalAdminPwd=<password>

    Depending on how WebSphere Portal was installed you may or may not have different accounts for WAS administration and Portal administration.  Make sure to have the correct passwords.

  4. Deploy PAA: From the same location, run the deploy-paa task
    C:\IBM\WebSphere\wp_profile\ConfigEngine> ConfigEngine.bat deploy-paa -DappName=scriptportlet-app -DWasPassword=<password> -DPortalAdminPwd=<password>
    

    Note: On IBM WebSphere Portal 8.5 there’s no need to modify the theme per the readme.  That step only applies to IBM WebSphere Portal 8.0.0.1 with CF11.

  5. Verify installation and setup a test portlet:
    1. Open up a browser and navigate to your portal server.  In my case it’s http://localhost:10039/wps/portal
    2. Log in as the Portal administrator
    3. Enable Edit Mode
    4. Click Create -> Page -> Choose a template -> Enter page details -> Click Create Page
    5. Click Create -> Applications -> ‘Web Content’ tab -> Select Script Portlet -> Click Add to Page or drag it to the desired location
    6. Click on the Edit link inside the Script Portlet and Verify it comes up

Notable WCM Tags in Script Portlet Development

The IBM Script Portlet components (HTML/JavaScript/CSS) are all stored within WCM, which means script portlets can be included as part of projects and deployed to other environments via syndication, and developers can leverage many of the available WCM tags.  I used a few tags in combination with the AngularJS TodoMVC sample application to showcase how they can be used for common portlet development tasks, as well as other interesting use cases.  To see a list of all the tags available, please visit the IBM WebSphere Portal product documentation.

Data Management

Private/Public Render Parameters

Plugin to retrieve portlet render parameters. In the example below, the private portlet render parameter allTodos is retrieved.

[Plugin:RenderParam key="allTodos"]

Request Attributes

Retrieves/sets request attributes. Useful for temporary variables.

[Plugin:RequestAttribute key="key1" defaultValue="value1"]

Session Attributes

Plugin useful for managing data in the portlet session. The example below gets a render parameter and sets it as the value for the session attribute “todosInSession”.

[Plugin:SessionAttribute scope="servlet" key="todosInSession" defaultValue="" mode="set" value="[Plugin:RenderParam key="allTodos"]"]

Portlet

Retrieves portlet information including portlet preferences.

[Plugin:Portlet key="preferences" preference="favoriteColors" separator=";"]

URL Generation

Render URLs

Plugin to generate portlet render URLs. See the documentation on how you can set parameters within the plugin itself. In the example below I use a form GET to add the parameter as a query string in the URL, which sets it as a private render parameter by default.  In general this approach works well for parameters with small values, since there’s a limit on how large URLs can get.  The example below is for illustration purposes only; you probably don’t want a large number of Todos as part of the URL, or to store Todos in the session 😉

<form id="form1" action="[Plugin:RenderUrl copyCurrentParams="true"]" method="get">
<input name="allTodos" type="hidden" value="{{todos}}" /> 
<button id="persist-todos" type="submit">Save</button>
</form>

Please note that the ActionURL rendering plugin is not available within the Script Portlet.

Resource URLs

This plugin allows you to construct URLs with query parameters as well as proxy the resources through WebSphere Portal’s AJAX proxy. This is very useful when consuming external REST services while complying with the browser’s same origin policy.

[Plugin:ResourceURL url="http://todomvc.com/architecture-examples/angularjs/bower_components/todomvc-common/base.js" proxy="true"]

Others

There are many other tags available, so I highly recommend reading the IBM documentation to find out all you can do.  It’s also possible to create your own custom rendering plugin, which can then be used by the Script Portlet.   Other personal favorites are the conditional plugins: Equals, Not Equals, Matches, and Otherwise.

TodoMVC Code in JSFIDDLE

Click here to see the modifications made to the TodoMVC app.  You should be able to copy/paste into your own Script Portlet and test out some of the features.

Creating a Windows Box with Vagrant 1.6

The latest Vagrant release added out-of-the-box support for Windows VMs.  Below are my notes on creating a new Vagrant box, and then using WinRM and RDP to connect to a Windows 2008 Server VM.

Pre-requisites:

  • VirtualBox 4.3 or above installed
    Add the installation path (“C:\Program Files\Oracle\VirtualBox”) to your PATH environment variable
  • Windows ISO file
  • Vagrant 1.6.3 or above installed

Create a VirtualBox Windows VM

The instructions use the command-line interface. You are welcome to use the VirtualBox GUI instead, as there’s really no difference in the end result.  You can also skip this section if you already have an operational Windows VM in VirtualBox.  See this post on “Create/Manage VirtualBox VMs from the Command Line” and the VirtualBox VBoxManage documentation.

Alternatively you could also get a base box from packer-windows (it only configures SSH currently, but they are working on adding WinRM support).

  1. Create/Register the VM
    C:\>VBoxManage createvm --name "w2k8_r2_base" --ostype Windows2008_64 --basefolder C:\VMs\ --register

    Use VBoxManage list ostypes to get the list of OS types you can use.  Also note that 64-bit OS types may not show up in this list if virtualization technology (Intel VT-x, AMD-V, etc.) is not enabled in the BIOS.

  2. Configure VM according to your needs
    C:\>VBoxManage modifyvm "w2k8_r2_base" --memory 2048 --cpus 2 --vram 128 --acpi on --boot1 dvd --nic1 nat

    Vagrant requires the first network interface to be a NAT adapter.

  3. Configure Hard Drive
    C:\>cd VMs\w2k8_r2_base
    C:\VMs\w2k8_r2_base>VBoxManage createhd --filename ./w2k8_r2_base.vdi --size 30000
    C:\VMs\w2k8_r2_base>VBoxManage storagectl "w2k8_r2_base" --name "SATA" --add sata
    C:\VMs\w2k8_r2_base>VBoxManage storageattach "w2k8_r2_base" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ./w2k8_r2_base.vdi
    
  4. Configure DVD Drive and point to your installation ISO file
    C:\VMs\w2k8_r2_base>VBoxManage storagectl "w2k8_r2_base" --name "IDE" --add ide
    C:\VMs\w2k8_r2_base>VBoxManage storageattach "w2k8_r2_base" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium C:\ISOs\Windows2008\W2K8_R2_64bit.iso
    
  5. Install Windows OS
    Using the VirtualBox GUI, start the newly created VM and install/set up Windows according to your needs.
  6. Detach the Installation ISO file
    C:\VMs\w2k8_r2_base>VBoxManage storageattach "w2k8_r2_base" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium none
  7. Install Guest Additions
    C:\VMs\w2k8_r2_base>VBoxManage storageattach "w2k8_r2_base" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium "C:\Program Files\Oracle\VirtualBox\VBoxGuestAdditions.iso"

    Using the VirtualBox GUI, open the DVD drive on the guest VM and install VirtualBox Guest Additions. After you’re finished, detach the ISO using the command from the previous step.
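If you rebuild the base VM often, steps 1–4 above can be bundled into one script. The sketch below simply replays the same VBoxManage calls shown above (from Git Bash or a similar shell); the VM name, folder, and ISO path are placeholders to adjust, and for safety it only prints the commands by default (set DRY_RUN=0 to actually execute them):

```shell
#!/bin/sh
# Sketch of steps 1-4 above as one script. Prints the VBoxManage
# commands by default; set DRY_RUN=0 to actually run them.
VM_NAME="${VM_NAME:-w2k8_r2_base}"                     # placeholder VM name
BASE_DIR="${BASE_DIR:-C:/VMs}"                         # placeholder base folder
ISO="${ISO:-C:/ISOs/Windows2008/W2K8_R2_64bit.iso}"    # placeholder install ISO

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "VBoxManage $*"       # dry run: show the command only
  else
    VBoxManage "$@"            # real run: requires VBoxManage on PATH
  fi
}

# Step 1: create/register the VM
run createvm --name "$VM_NAME" --ostype Windows2008_64 --basefolder "$BASE_DIR" --register
# Step 2: basic hardware settings (first NIC must be NAT for Vagrant)
run modifyvm "$VM_NAME" --memory 2048 --cpus 2 --vram 128 --acpi on --boot1 dvd --nic1 nat
# Step 3: hard drive on a SATA controller
run createhd --filename "$BASE_DIR/$VM_NAME/$VM_NAME.vdi" --size 30000
run storagectl "$VM_NAME" --name "SATA" --add sata
run storageattach "$VM_NAME" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "$BASE_DIR/$VM_NAME/$VM_NAME.vdi"
# Step 4: DVD drive pointing at the installation ISO
run storagectl "$VM_NAME" --name "IDE" --add ide
run storageattach "$VM_NAME" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium "$ISO"
```

The dry-run default makes it easy to eyeball the generated commands before committing to them.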

Configure WinRM & RDP

  1. Setup WinRM for remote management on Guest VM
    By default, WinRM might not be configured for remote management on your newly created Windows VM; the following command enables it.

    C:\>winrm quickconfig -q
  2. Update WinRM config on Guest VM
    I used the following settings (documented for the knife-windows Chef plugin):

    C:\>winrm set winrm/config/winrs @{MaxMemoryPerShellMB="300"}
    C:\>winrm set winrm/config @{MaxTimeoutms="1800000"}
    C:\>winrm set winrm/config/service @{AllowUnencrypted="true"}
    C:\>winrm set winrm/config/service/auth @{Basic="true"}
    
  3. Enable Remote Desktop
    On the guest VM, Remote Desktop can be enabled by opening “My Computer” properties, clicking “Remote Settings”, and selecting “Allow connections from computers running any version of Remote Desktop”.
  4. Ensure Windows Firewall Allows WinRM/RDP traffic
    On the guest VM, open “Windows Firewall with Advanced Security” and make sure that the inbound rules for “Windows Remote Management (HTTP-In)” and “Remote Desktop (TCP-In)” are enabled.
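To avoid typing the guest-side commands from steps 1–2 by hand on every rebuild, this small sketch just prints them in order, ready to paste into an elevated cmd.exe on the Windows guest (the settings are exactly the ones listed above; nothing is executed here):

```shell
# Sketch: print the guest-side WinRM setup commands from steps 1-2,
# ready to paste into an elevated cmd.exe on the Windows guest.
emit() { echo "winrm $*"; }

emit 'quickconfig -q'
emit 'set winrm/config/winrs @{MaxMemoryPerShellMB="300"}'
emit 'set winrm/config @{MaxTimeoutms="1800000"}'
emit 'set winrm/config/service @{AllowUnencrypted="true"}'
emit 'set winrm/config/service/auth @{Basic="true"}'
```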

Create a Vagrant Box

  1. Shutdown the Guest Windows VM
  2. Package the Windows Box
    C:\>cd VMs\VagrantBoxes
    C:\VMs\VagrantBoxes>vagrant package --base w2k8_r2_base --output w2k8_r2_base.box
  3. Add the box to Vagrant
    C:\>vagrant box add w2k8_r2_base C:\VMs\VagrantBoxes\w2k8_r2_base.box

Vagrant Up

  1. Initialize Box
    C:\>cd VMs\VagrantVMs\w2k8_r2_vm
    C:\VMs\VagrantVMs\w2k8_r2_vm>vagrant init w2k8_r2_base
  2. Edit Vagrantfile
    By default the communicator is SSH, not WinRM, so you need to set it explicitly.  In addition, I didn’t use the default vagrant user/password, so I set those as well.  Lastly, make sure to forward the RDP port.

    # -*- mode: ruby -*-
    # vi: set ft=ruby :
    
    VAGRANTFILE_API_VERSION = "2"
    
    Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
     config.vm.box = "w2k8_r2_base"
     config.vm.communicator = "winrm"
     config.winrm.username = "Administrator"
     config.winrm.password = "yourpassword"
     config.vm.network "forwarded_port", host: 33389, guest: 3389
    end
  3. Vagrant Up!
    C:\VMs\VagrantVMs\w2k8_r2_vm>vagrant up
  4. Remote-in to your new Vagrant-managed VM
    C:\VMs\VagrantVMs\w2k8_r2_vm>vagrant rdp

    * In Vagrant 1.6.3 there was a regression introduced around the rdp command (see issue #3973).  I used my regular Remote Desktop client manually and all worked fine.
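As a workaround sketch for that regression, you can point the stock Windows Remote Desktop client at the forwarded port yourself (this assumes the forwarded_port mapping of 33389 from the Vagrantfile above; the snippet only prints the command so you can verify the port first):

```shell
# Workaround for the vagrant rdp regression: connect an RDP client
# directly to the host port that Vagrant forwards to the guest's 3389.
RDP_TARGET="127.0.0.1:33389"   # host port from the Vagrantfile's forwarded_port
echo "mstsc /v:$RDP_TARGET"    # paste the printed command into cmd.exe
```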

At this point you should have a Windows Box all set with Vagrant.  In a future post, I intend to discuss provisioning software with Chef.  If you ran into any issues following these instructions, please add those as comments below.  Thanks for reading!

Be the witness to your failures

Success, after all, loves a witness, but failure can’t exist without one.

– Junot Diaz, The Brief Wondrous Life of Oscar Wao

Kicking off with a great quote from the book that inspired the title of this blog, “The Brief Wondrous Life of Oscar Wao“, written by my friend, Junot.  If you haven’t already, go buy yourself a copy; it’s a terrific read that you won’t regret.

As a software architect, this quote really resonates with me. It’s inevitable that software projects will have issues; there are just too many factors that influence this. Yes, there are issues that architects have little control over, but let’s put those aside for now. What matters is to recognize the issues we do cause, being the witness to our failures.

Today’s technology gives us almost endless choices, and it is constantly evolving. There’s always room for improvement. There are unexpected changes. This is what we have to deal with. You could say failure is always around the block, waiting for us. But when we turn onto that street, which will happen, what differentiates us is how quickly we realize we made the wrong turn, understand the path that got us there, and avoid that street later. If we’re lucky, we are the first witness to our failures; when we aren’t, we need to take ownership of them. Understanding this plays a critical role in our success as software architects.