Moving Forward by Thinking Backward
Security is very hard to get right and even one mistake can be disastrous. Therefore it is essential for any enterprise to master security.
THE PROBLEM
We delivered GUIDE (Graphical User Interface for Department Employees), our first composite application framework, in 1996. It was the enterprise's first attempt to build "tools". GUIDE was meant to make things easier by guiding a user to a successful outcome. The solution followed the industry best practices of the time. GUIDE had a very complicated platform-based security model. All security checking was done on components in the app server tier, and every component that was developed or updated required a security assessment. The organization wanted a universal security model for external customers as well as internal customers. This involved configuring a customer registry (similar to a meta-directory) integrated within the app server.
The security implementation of GUIDE showed that following current processes and best practices will not always be the best way for an enterprise to achieve its potential.
THE OPPORTUNITY
GUIDE was a business success, but an operational failure. The business loved it, but IT could not leverage it for more success.
When the next large project (e-MPIRE, a $120M transformation project) was proposed in 2002, the organization focused on learning the lessons of GUIDE. The goal was still to build tools, but in a simplified and repeatable way. Security simplification was a priority.
At the same time, the project vendor wanted to write their own security module at the application level.
THE SOLUTION
As with any project the leadership team set certain technical parameters. One was using an external directory of all departmental, state, and external users. If we used the same implementation as GUIDE this directory could serve as a metadirectory.
This is where we decided to imagine an end-state security model. It seemed that all security should be externalized and loosely coupled from resources (platforms). This implied using the directory as a credential provider. But how would the resources be defined, and how would they be linked? This is where my mentor (Bill Carr) said something profound: if everyone solving the same problem gets the same tool, isn't the tool the only resource (from the user's perspective)? In other words, the tool is the security proxy to all of the resources in the organization. The impacts of this are game changing. Security sits at the edge (policy server, now API gateway), the app server has no users, zero trust can be easily implemented, and developers don't have to implement security (in fact, they never know how the resources they develop will be used until the business configures them).
Users were given roles as attributes; tools (functions) were then allocated to certain roles. There was only a single primary policy: is this user in an appropriate role to access this tool (there may be other attributes required in the signature)? The whole solution (thousands of tools) has only six policies: five exception policies allocated to everyone (e.g., the home page) and the one policy based on tool signature.
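A minimal sketch of that primary policy, assuming a simple role-to-tool configuration (the tool names, role names, and lookup structure are all illustrative; the real evaluation happened in the policy server at the edge):

```python
# Hypothetical configuration: which roles may open which tools.
# Maintained by the business, never by developers.
TOOL_ROLES = {
    "audit-workbench": {"auditor", "audit-supervisor"},
    "refund-review":   {"refund-clerk"},
}
# The five exception policies allocated to everyone (names illustrative).
EXCEPTION_TOOLS = {"home", "help", "profile", "search", "logout"}

def may_access(user_roles: set[str], tool: str) -> bool:
    """The one primary policy, evaluated at the edge (policy server / gateway)."""
    if tool in EXCEPTION_TOOLS:          # exception policies: open to all users
        return True
    # Primary policy: is this user in an appropriate role for this tool?
    return bool(user_roles & TOOL_ROLES.get(tool, set()))
```

Because the check is configuration plus set intersection, adding a tool or reassigning roles never touches application code.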
This implementation reduced security help desk calls by 82% and helped reduce development time by 80%.
METHODOLOGY IMPACT
If the team had not considered what an end-state security model would look like, they would have implemented along the industry practice of platform/application based security. Instead the application took a trajectory towards the desired end state and facilitated continuous innovation.
Architecture is the quest for elegance
Occasionally mandates are handed down from on high. The Governor decided he wanted a single email system for the state. He wanted one address book, and he wanted all agencies converted as quickly as possible.
THE PROBLEM
The agency I represented had a very mature and highly efficient email system; it was not a Microsoft Outlook/O365 shop. In addition to having email on the platform, the agency had another couple of hundred applications written on the platform and integrated with email. These applications were out of scope for the conversion.
THE OPPORTUNITY
One of the goals of the conversion was to save money. A conversion "expert" was brought in and did an assessment of our email system. The conversion would cost $400K and take several months just to move the emails. All email received after the start of the conversion would then have to be revisited at final conversion time. It was also going to take a lot of my staff's time to validate the conversion when they already had plenty of other conversion-related work. We needed to keep the server and some clients (10-15) to support the applications that were not converted (on launch a user would get a pool license, which could be tracked).
Our solution was simpler. Configure the Outlook client to point to O365 for new emails while also pointing to the old repository. The old repository had to stay to support the applications, so this was essentially no cost (actually $20K). The upside: we saved the $400K and the conversion resources, we retired $1.2M of clients (a year earlier), the users got the end state they wanted (and could move emails around), the Governor got his single email, and it went to production six months earlier and without the problems of a conversion.
This streamlined approach freed resources to investigate, standardize, and leverage new capabilities that came with the new product.
THE SOLUTION
During the discussions with the vendor on conversion, they told us of their experiences. We had 20 years of emails to convert, and the potential conversion problems were daunting. In addition, we were going to find ourselves in the middle of a political issue. By keeping it simple, I had happier customers and stakeholders, and I saved money and resources.
Points of Agility
Database changes are a primary hindrance to agility.
THE PROBLEM
I represented a Tax Agency. We collected revenue in 18 Tax Types. There were more than 1,500 forms across all taxes. A pure relational model would have required over 2,000 very volatile tables (tax law changes constantly).
THE OPPORTUNITY
The goal of our system was agility. How do you support a changing business environment when the data is so dynamic? Historically, we processed very little of the data; we only validated and used what was needed to support calculation. All other data were passengers used for reporting and analytics (cleansed as needed). The new system was going to process all of the data. In the legacy system it took months to get the data structures ready for the new tax year. Adding all the "informational" data would make those processes untenable.
What we noticed was that the data was volatile, but the information was consistent (see the Data vs. Information tirade). The core business "object" was the filing: all the forms filed together. A user can't look at one form and say the filing is correct; they need all the forms.
THE SOLUTION
It was decided that instead of building forms tables that we would build one filing table. The table would contain the filing object in XML (compressed, not tags) format. One observation was that even though the filing usually contained about 7 forms (the return, income forms, etc.), the long forms were sparsely populated. A descriptive format (like XML) allowed us to store only populated information. The processing was then simplified because the business could write all the computational rules (every piece of the filing DOM was addressable). The user could test rules "off line", all they needed was a filing object.
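The sparse-storage idea can be sketched as follows, with hypothetical form and line names; the point is that only populated lines are stored, and every stored line remains addressable so the business can write rules against any piece of the filing DOM:

```python
import xml.etree.ElementTree as ET

def filing_to_xml(filing: dict) -> str:
    """Build a filing document with one <form> element per filed form and
    one child element per POPULATED line item (names are illustrative)."""
    root = ET.Element("filing")
    for form_id, lines in filing.items():
        form = ET.SubElement(root, "form", id=form_id)
        for line, value in lines.items():
            if value not in (None, "", 0):   # sparse: skip unpopulated lines
                ET.SubElement(form, line).text = str(value)
    return ET.tostring(root, encoding="unicode")

def read_line(xml_doc: str, form_id: str, line: str):
    """Every piece of the filing DOM is addressable, so rules can reference it."""
    node = ET.fromstring(xml_doc).find(f"./form[@id='{form_id}']/{line}")
    return node.text if node is not None else None
```

A rule author could then address, say, the wages line of a hypothetical return form without knowing anything about table layouts.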
There was only ever one filing table for all tax types. It had some relational key data (taxpayer, year, period, tax type) that served as the metadata for the filing. A metadata field was added for search, allowing us to map multiple taxpayers to the filing (joint filers, business partners). It supported a Google-like search capability.
The filing and rules engine were coordinated to keep all changes to the filing with the object. This allowed us to see the filing at any point in time. The filing became a digital incarnation of the old filing folder that kept all paperwork for a filing.
This new database was not only more efficient, it reduced the size of the filing DB by 40%. This is because there is only one set of indices (no dependent data tables) and the data is sparse. Processing required only one write of the data to only one table.
We created a second update (work) table to support check-out/check-in. When a user (an auditor, say) needed to update a filing, we would copy the filing out of the operational DB into the work table. If another user wanted to update the same filing, it could not be stored in the work table a second time; they would be informed of who was working on the filing and for what purpose. This eliminated the backdoor update issues of the past.
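The check-out/check-in discipline can be sketched like this (class and field names are illustrative, and the real implementation used a database work table rather than in-memory state):

```python
class WorkTable:
    """Minimal sketch of check-out/check-in: a filing copied into the work
    table blocks a second checkout until it is checked back in."""
    def __init__(self):
        self._checked_out = {}   # filing_id -> (user, purpose)

    def check_out(self, filing_id, user, purpose):
        if filing_id in self._checked_out:
            holder, why = self._checked_out[filing_id]
            # Tell the second user who holds the filing and why.
            raise RuntimeError(f"{filing_id} is being worked by {holder} ({why})")
        self._checked_out[filing_id] = (user, purpose)

    def check_in(self, filing_id):
        # Write the updated filing back and release the lock.
        self._checked_out.pop(filing_id, None)
```

The operational table stays read-mostly; all updates funnel through the one work table, which is what closes the backdoor-update hole.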
This simple implementation simplified annual cycle changes and allowed us to reduce that project time by 50%. Complete new tax filing systems can be written in weeks.
The clear value of this approach was then leveraged in our web filing design, where an XML document supported all of the pages of an application as segments. This allowed one table to support all web apps, allowed customers to pick up where they left off (the XML knew what was populated), and allowed us to track customer drop-out rates. It then allowed us to publish an API for partners to file returns.
Agile data structures, and a move to informational rather than data-driven design, will give organizations the adaptability they desire.
NOTE: In the product-driven EA model, there are enterprise-specific domains. Each of these may have a domain object that fits this type of solution. For example, the tax returns processing domain's object is the filing. What is the domain object for financial accounts? Maybe a ledger or spreadsheet?
Points of Agility
The highest-volume system in our modernization was the daily processing of NYS Personal Income Tax filings.
THE PROBLEM
We receive over 10 million returns in a 3-month period. Daily peak volumes were over 600K returns per day. About 10% of these returns fell into categories of exceptions, meaning manual intervention must occur before the return can be processed. There are many types of exceptions: taxpayers who can't be validated, math errors, missing forms, unmatched entered data (your entered W2s don't match your employer's, etc.). New exception issues occur all the time, and being able to adapt is essential.
Refunds out of this system are an economic engine for the state ($4.5B). By law refunds must be paid within 3 weeks of return receipt or interest must be paid. It is the one system that is immediately news ready because it impacts so much of the state.
THE OPPORTUNITY
There were multiple goals of the new system. It was crucial that the system support modern high volume transparent processing with an ability to quickly adapt. The system had to optimize the agency's ability to work and adapt to new exception types.
This was Release 3 of our modernization so we had recently introduced several new technologies. It was decided that we would use a newer version of our BPM engine to support the needed modernization.
THE SOLUTION
At the time of this release, all returns processing systems were batch. The daily received files would be processed together through a batch schedule. If anything systemic happened, the run would have to be able to pick up where it went wrong. The batch schedule for personal income tax often took multiple days, and up until that time it didn't even process all the data the organization wanted.
We decided to use our BPM as a process engine. It essentially linked all our services as a single automation (an in-memory micro-flow). Each return would flow through 31 services on 4 platforms as a single unit of work. If there were issues, the return would be passed on to exceptions, where it would be corrected and then returned to the process. Each return takes about 2.5s to process. As we added concurrent threads, this slowed down significantly, so at peak we ran 40 concurrent threads, resulting in 75K returns/hour (later I was told they found the contention point and it runs at about 125K). The elasticity of adding and decreasing processing power was another benefit: as volumes increased we could add threads (processors). This resolved the historical problem of the agency having to pay for fixed processing power built around peaks. It also made the application truly "cloud" ready.
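The concurrency model can be sketched with a thread pool, using a stand-in for the 31-service micro-flow; the thread count is the elastic knob described above (all names and the exception condition are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def process_return(ret: dict):
    """Stand-in for the full micro-flow: each return is one in-memory unit
    of work that either completes or is diverted to exceptions."""
    if ret.get("math_error"):
        return ("exception", ret["id"])   # diverted for manual correction
    return ("processed", ret["id"])

def run_daily_file(returns, threads=40):
    """Elasticity: raise or lower `threads` to match intake volume."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(process_return, returns))
```

Because each return is independent and nothing is persisted mid-flow, scaling is just a parameter change, which is what made the design cloud-ready.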
Delivering an MPP (massively parallel processing) application requires reducing points of contention. This includes data hotspots (reduced here by the XML requiring only one write), pipes (connections), and even processing power. The approach not only increased throughput; it became more transparent and traceable. It could easily change because it was a micro-flow. There was no persistence: it ran and was done. This means any change (to a service or the flow) can happen without impacting any ongoing work (in some cases work was rerun through the new rules). It allowed for future online, immediate income tax processing.
BPM and process patterns were used to bring agility to exceptions. Initially the business established an instantiated list for each exception and created a tool tied to each. This was to simplify monitoring and list management. What we soon found out was that this did not give us all the agility we needed. The users needed to create new lists and manage them, but IT had to be involved to physically create each new list before it could be configured with a tool. The business fact was that there would always be new things going wrong that they had to adapt to. What we ended up doing was creating one list of exceptions that included an attribute for exception type. This exception type was then configured to the tool. Exception types could be added on the fly and immediately routed. It became even more efficient when the product vendor had us use query tables for list efficiency. Monitoring and management were just as efficient. An example of this power was communicated to me by the acting commissioner years later. She said that a new exception was causing such problems that it started getting reported in the news. She asked for a report on the scope of the problem. When she got the report the next day, she called the director for all exceptions and asked what he was going to do about it. He said they had fixed it yesterday: created a new list, moved the exceptions, and assigned staff to that particular problem. The business could react to a problem as fast as it could be reported.
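The single-list design reduces to something like this sketch, where a "list" is just a query on the type attribute, so the business can invent a new exception type without any physical table creation (names are illustrative):

```python
# One exception list for everything; the type attribute, not a physical
# list, drives routing. New types need no IT involvement.
exceptions = []   # each item: {"return_id": ..., "type": ...}

def add_exception(return_id, exc_type):
    """Any new exception type is valid the moment it is first used."""
    exceptions.append({"return_id": return_id, "type": exc_type})

def worklist(exc_type):
    """A 'list' is just a query by type, so creating one is instantaneous."""
    return [e for e in exceptions if e["type"] == exc_type]
```

The commissioner's story above is exactly this pattern: defining a new type and querying it stood up a staffed worklist overnight.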
The combination of these two decisions resolved what was determined as the two biggest hurdles to agility - database changes (the filing) and process change (BPM and exceptions). The process engine simplified another dimension of change. The annual cycle development cost was reduced by $1.5M, the annual system development time was reduced by 50%, and exception processing time was cut by 35%.
Before any implementation, determine how to simplify change. Business transformation is based on the ability to adapt and innovate.
Capabilities Focus
Every year or two a technology comes along that captures the industry imagination (AI, Blockchain, etc.). Blockchain was one such technology and every organization raced to do a pilot.
Government was no exception. Almost all of the pilots were (by The Right Strategy) ill conceived. They almost all centered on the value of an unbreakable, non-repudiable ledger. This could have been delivered by ledger technology on its own, yet they wanted to use Blockchain.
Blockchain is based on the premise that the parties are equal negotiators: their mutual agreement validates the transaction. That is not how government (and most private entities) work. Government is the authority of the transaction. If a person buys a car from another person, the transaction's ownership is "questionable" until the state says so (there may be taxes involved).
Using a plain ledger would provide the same capability at lower cost (70% less), with less processing time and higher performance.
Would the government ever use Blockchain? Maybe as an external IdP (identity provider, like Sovereign ID) or, more likely, as part of a government digital ecosystem that allows negotiations between equal partners (federal, state, and local).
From the Right Strategy perspective, the focus should stay only on the capability (a non-repudiable ledger) in the most independent and flexible implementation.
Capabilities Focus
BPM and Business Events are the critical components in the Right Time leg of the Right Strategy. It is essential for any organization's responsiveness to be able to leverage these capabilities in a consistent way.
At Tax, we took a BPM product and determined from a business viewpoint what services we needed: assign a workitem, complete a workitem, escalate, route, etc. We determined there were only seven we needed. Within the BPM product, some of these capabilities required the combination of multiple product services (assign might be made up of claim, add employee and org, etc.). Creating high-level services had many benefits: it was easier to change engines, we could integrate them as icons on our UI (for complete, etc.), programmers were insulated from BPM specifics, it was easier to tune and debug, and it simplified stakeholder conversation. These services were then delivered in consumable implementations. Some were buttons configured on the tabsets; others were pages and direct services. The power of this can be seen in the dozens of process automations completed, handling tens of millions of transactions and workitems.
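The layering can be sketched as a thin business facade over the product engine; the class and method names here are illustrative stand-ins for the vendor services described above:

```python
class BpmProduct:
    """Stand-in for the vendor engine's low-level services (names assumed)."""
    def claim(self, item, user):
        item["claimed_by"] = user
    def add_assignee(self, item, user, org):
        item["assignee"] = (user, org)

class WorkflowServices:
    """The handful of business-level services. 'Assign' composes several
    product calls, so every programmer gets identical behavior."""
    def __init__(self, engine):
        self.engine = engine   # swapping engines touches only this layer

    def assign(self, item, user, org):
        self.engine.claim(item, user)
        self.engine.add_assignee(item, user, org)
        return item
```

The sister agency's 67-service approach effectively exposed `BpmProduct` directly, pushing the composition burden onto every programmer; the facade makes the composition a single, shared decision.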
At the same time another "sister" agency was attempting to transform and was using the same product. They were convinced that their SOA approach was superior, and to that end they created 67 BPM services. These services encapsulated all the services the BPM product supported. What they didn't realize is that programmers now had to put these services together to meet their business needs, and each programmer would need to do it exactly the same way from a business perspective. They got one process coded, with hundreds of transactions a day, but it was impossible to support and was dropped. I think they dropped the product.
The problem is common. One agency focused on technology; the other on business capability. Business capability is the only way to achieve agility. In the end nobody really cares what technology is used as long as it works (not only functionally, but operationally and strategically).
Capabilities Focus
Document management is a component of almost every organization's architecture. Even in a totally digital business, documents often have to be produced for client consumption. An eCommerce order is often presented as a document to a customer.
Document Management systems have evolved over time. It is not just storing or retrieving documents, they have included BPM and other services.
There are a billion documents in our document management system. Personal income tax alone accounts for 10M filings a year, averaging 7 pages, over 20 years, and that is only one tax type. We also kept copies of every piece of correspondence produced for a taxpayer (bills, notices, etc.).
There were three repositories: one for physical documents (a standard DM), one for XML filings (a database), and one for outgoing correspondence (needed because the print stream was for high-volume print).
From a consumer perspective, they didn't need to know the specific implementation of the asset they wanted. We created a single service that, given the metadata for an asset, would return it in PDF format. The XML filing would look as if it had been filed on paper. We then created an icon that could be configured on any page to bring back the asset.
The business capability focused on one thing: show me the asset.
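That single capability could be sketched as a dispatch on asset metadata that always hands back a PDF; the repository names, methods, and metadata fields are all hypothetical:

```python
def get_asset(metadata: dict, repos: dict):
    """Single consumer-facing service: route on metadata to the right
    repository, always return a PDF. Consumers never see the three stores."""
    kind = metadata["kind"]
    if kind == "paper":            # scanned documents in the standard DM
        return repos["dm"].fetch_pdf(metadata)
    if kind == "filing":           # XML filings rendered to look like paper
        return repos["filings"].render_pdf(metadata)
    if kind == "correspondence":   # copies from the outgoing print stream
        return repos["letters"].fetch_pdf(metadata)
    raise ValueError(f"unknown asset kind: {kind}")
```

Because the routing lives in one place, repositories can be replaced (say, moved to cheap cloud storage) without any consumer noticing, which is the lesson of the section.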
The future may be to store these assets in read-only storage on the cloud and access them by metadata. They can be moved to very cheap storage, as most documents are only accessed early in their "life".
The lesson is to focus on capability and allow implementations to evolve over time.
Top Down Guys Won
Business components have been the architectural foundation for all systems since 1996, but the story started in 1990.
As with all enterprises, government is consumer driven. Every piece of work (a filing, a payment, a refund, account resolution, audit, collection, etc.) involves getting the customer right. This may seem easy, but in our situation getting the right name and address for a customer was a critical challenge. Our customers were individuals, businesses, and partners (accountants, etc.), and we handled 18 different taxes. A business could have different DBA names by tax type and different physical and mailing addresses. Alternate IDs had to allow access despite typos on filings: if a taxpayer incorrectly keyed their tax ID, we would correct it, but anyone with the paper in hand would have to access the filing by the miskeyed ID. There were many complications in finding the right customer based on business context.
To solve this problem we initially wrote common code that could be inserted in any program. This was over 2,000 lines of COBOL with more than 50 database calls. Every program using the module had to compile the shared code in. By our first release in 1989 it was in over 500 COBOL programs, and every change to the module required recompiling everything.
To resolve this we moved the code to its own module. All programs just had to replace the copy code with a call to the module that looked exactly like the internal call. The executable module could be called both online and batch. Within a year it was called 500K/day online and had over 10M calls in batch. But there were other benefits. It was more efficient (online it was cached). More significantly we found that we could change it faster and without major impact. The language it was written in was changed, the database it was using changed twice, new capabilities were added and none of the consumers knew it. The database was optimized to handle these requests. This component handled 90% of the customer domain requests and the database was tuned to support it. The interface rarely changed because the underlying business question remained - based on this context (metadata on the request) give me the right name and address for this customer.
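The stable business question behind that interface can be sketched like this (the registry layout and field names are illustrative; the real component was a COBOL module with a fixed call signature):

```python
def resolve_customer(registry: dict, tax_id: str, context: dict):
    """The question that never changed: given this context, return the
    right name and address for this customer (data layout is illustrative)."""
    customer = registry[tax_id]
    # A business may have a different DBA name and address per tax type.
    profile = customer["by_tax_type"].get(context["tax_type"],
                                          customer["default"])
    return profile["name"], profile["address"]
```

Everything behind this signature (language, database, caching) changed repeatedly; because consumers only ever asked the business question, none of them noticed.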
It was so successful, that in 1996 we standardized all functional accesses to components. We standardized every interface - request, reply, error (same as EJB and most interfaces today). Error handling was the same for every call. By the time we came to modernization in 2004 we already had over 1,000 highly used business services.
As we moved to the internet, all the same services were immediately available for consumption (we did have to make a change to make data formats platform independent: EBCDIC to ASCII). At the same time we took other platform data and made it available to all platforms (Windows, Unix, IVR). The initial implementation was queue based, but it was point to point. Since everything was standardized (interface, transport, capability, return), there was no need for mediation or an ESB. This meant that all services could be implemented on any platform, in any language, or with any technique. The important thing is that the primary interface was based on business consumption.
As we brought other products in (document management, security, correspondence, analytics, etc.), interfaces for consumption based on our standards were used, giving us product independence, faster time to deployment, easier problem resolution, and an exit strategy.
When we got to our modernization, leveraging these assets rather than starting over was a big issue. The consultant wanted to start from scratch so they had something to sell to the next client; we wanted the system developed quickly and knew we had assets that could be leveraged. We decided to use the current modules and, at the same time, align all new development on new platforms to the internal standard. Gateways, new ESBs (the BPM solution), and queues were added to our stack. This was done to add better transactional capabilities to the system. Our services then became what we called SmartServices. We would generate the implementation class needed to call the service (could be REST, API, etc.): write once, use everywhere. The transactional requirement drove some of the implementations, and the consumer could determine which implementation they desired.
Consumption will always be more important than implementation.
The organization has thousands of cataloged services that are invoked 10s of millions of times per day. They all get consumed the same way and are similarly tracked and monitored.
Note: Some will say this is SOA, and it is. It does remove the middleware of SOA and extends the options. It resolves some of the issues with microservices in that it establishes the correct consumable breakdown as business based (it can be microservices, or be made up of microservices). AI might resolve some of the microservice sprawl by creating its own capabilities out of them (something the business hadn't thought of), but still there will only ever be two consumers: a person or a process/agent/thing.
Building isolated consumable capabilities is key to agility.
Investigate - Standardize - Optimize
Our first foray into creating a unified enterprise UI was called GUIDE (Graphical User Interface for Department Employees), as mentioned in ATLaaS Security. GUIDE was a technical achievement that took considerable heroic programming. The customers loved the tooling concept and wanted more. The downfall of GUIDE was that it was hard to maintain, difficult to leverage for new problems, had a very complex security solution, and provided very few leverageable assets.
With the e-MPIRE transformation the intent was to get the same value in an extensible way. We needed a way to create reusable UI objects that could be easily composed into tools. The key concept that facilitated this was separating the duties of the UI between context and UI business capability.
Context is established in most systems by performing a search: the customer gets established and then can be served. The context is then tightly integrated into the functionality of the system. This internal dependence, while efficient, raises the cost of maintenance and innovation. The solution implemented instead separated context into its own object. The context object maintained all the parameters that were then passed to pages. This allowed for the standardization of the pages. Pages were written with a defined signature that could accept context parameters. Pages could be written independently and without specific knowledge of usage, and could then be assembled into tools (tabsets) designed to resolve specific business needs. This standardization resulted in an 80% improvement in delivery time, simplified maintenance, and allowed for massive concurrent development. There was never a need for refactoring, and it minimized development outages. Testing was simplified by allowing for individual page testing (a tabset of one page), since cross-page interactions were eliminated (there were some subpage linkages, which accepted a combined context).
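The context/page separation can be sketched like this, with illustrative page content; the key is that every page takes the same signature, so any page can be dropped into any tabset:

```python
class Context:
    """Context separated into its own object; pages never do their own search."""
    def __init__(self, **params):
        self.params = params
    def get(self, key):
        return self.params.get(key)

def balance_page(ctx: Context) -> str:
    """Every page has the same signature: one context in, content out.
    The page body here is purely illustrative."""
    return f"Balance for taxpayer {ctx.get('taxpayer_id')}, year {ctx.get('year')}"

def run_tabset(pages, ctx: Context):
    """A tool (tabset) is just an ordered assembly of independent pages."""
    return [page(ctx) for page in pages]
```

Because pages only depend on the context signature, a configuration tool (or later, BABE) can assemble tabsets with no code changes, and a page can be tested alone as a tabset of one.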
Since all the pages were standardized, optimization could be achieved by creating a configuration tool to create the tabsets. A user could create a tool by dragging and dropping pages into a tabset. Security was a matter of assigning those tools to business roles. The configuration was left in the hands of business experts with little need for IT engagement. Click-stream analysis is always within the context of the function performed (and by whom).
The standard page and tabset navigation resulted in many benefits. The training of new staff was reduced by 80%. Call center staff expertise saw the same outcomes. The system was intuitive, helpful, and consistent.
BABE (Business Activity Builder Extension) was added to the configuration to tie the tool to workflow. This was accomplished by adding new features to the configuration tool. The standardization of the pages was leveraged to allow the business to add completion criteria to the tool. All page actions were registered and could be evaluated to see if the proper activities had occurred before the workitem could be completed. BABE context replaced the need for search. Today the agency has over 1,200 BABE tabsets. The elimination of search has the additional benefit of vastly reducing improper access: the user only sees the data they need to perform an assigned business function.
If the pages had been written to different standards, the tabset and BABE optimization would have been increasingly difficult and hard to maintain.
Division of Work/Separation of Duties
Digital transformation requires aligning an organization's vast capabilities (people, process, and technology) across all business units. The potential is incredible.
During our transformation, we knew that we needed to optimize the way we adapted. One focus was on better communication. Understanding tax law and administration demands business expertise. How can business users be empowered to make their needed contribution?
At first we looked at industry trends to see what type of technology could help. The existing 30-year old system had major deficiencies in the way return calculation rules were maintained and changed. The rules were tightly embedded in code. This meant the changes and the testing of the changes took many months. There were so many test cases it took months to get the system ready for the next Tax year.
The technology determined to be most useful was a rules engine. The commercial rule engines of the day had hurdles to acceptance. They were designed to handle all types of rules; the user interface was complex, and they were bloated with features (scientific rules, etc.) that weren't needed. Rules engines also had an optimization feature that reordered the rules for efficiency, but our major users were auditors, and they wanted to see a consistent path to the answer. To avoid these issues, we wrote our own rules engine. It had two parts: rules writing and rules execution.
The rules writing engine handled the simplest of rules. The if-then-else construct was the basis. The goal was to allow an IT assigned business analyst to write the rules. When the business saw the UI, they decided that they wanted to write the rules. The division of work had changed. Business now took ownership of a major portion of the system.
The separation of rules writing allowed us to change the rule execution engine multiple times. First we generated mainframe rules, then Java rules. At any point in time we can generate rules for the most modern of execution engines. The rules can be written once and support any number of execution engines. The goal is to make the business rules writers as successful as possible.
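The write-once, generate-many idea can be sketched by holding a rule as data and emitting it per target engine (the rule content and the generator targets are illustrative, not the agency's actual formats):

```python
# A rule is captured once as data, in the simple if-then-else form the
# business writes; generators then emit it for whichever engine is current.
rule = {"if": "wages > 50000", "then": "rate = 0.065", "else": "rate = 0.055"}

def to_python(r: dict) -> str:
    """Emit the rule as Python source for a Python execution engine."""
    return f"if {r['if']}:\n    {r['then']}\nelse:\n    {r['else']}"

def to_java(r: dict) -> str:
    """Emit the same rule as Java source; the rule itself never changes."""
    return f"if ({r['if']}) {{ {r['then']}; }} else {{ {r['else']}; }}"
```

Swapping execution engines (mainframe to Java, or anything newer) means writing one new generator, not rewriting the business's rules.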
Rules were implemented in a way that created application domain services (EA). Rules were built specifically against the filing object. As we moved other domains to objects, the goal was to allow the business to create their own business services (again, AI could assist). This way nearly all programming can be done by business experts.
The development team became focused on empowerment. Tabsets, BABE, and BPM were all conceptualized for business expert usage.
The division of work/separation of duties is based on finding the experts and empowering them. IT should be charged with building new capabilities. These capabilities can cross business units and organizations. These capabilities can be implemented in a Center of Excellence/Communities of Practice paradigm. IT brings in the technologies and establishes standard approaches for collaboration and integration. They become the initial Center of Excellence. They then work with a number of business units (Communities of Practice) to help them leverage desired capabilities.
This is an AI roadmap. IT analyzes the products and figures out the integration points, from ingestion to deployment, delivering either a tool (an augmentation service) or a direct service. The assets that business units create can then easily be tested and migrated to production.
The same approach was used when NYS established its file sharing and API gateway. NYS IT became the COE and agencies/partners became the COPs. The COPs define the needs for product enhancement, which are implemented by the COE.
The more the business can configure, the more the organization can use essential IT resources on innovation while simultaneously reducing maintenance and support costs.
Unreasonable Man
As e-MPIRE was succeeding, management decided to take on the next major inhibitor to digitization: external customers. Our previous attempts were not very successful, and the universe we were handling was not very uniform. Individuals could self-register, but their account could actually have been created by a representative (an accountant), because that was the only way the accountant could access the client's account. Businesses were either part of a pilot with the department or had to call to register; they could only register through the department. Accounting firms had to go through their clients for access, and these accesses could not be distinguished from employee accesses. Every new application had to be evaluated for how it would integrate with current systems without overwhelming support staff (during sign-on and registration).
The project was named SWAN and it wasn't an acronym. SWAN was meant to replace the ugly duckling systems of the past. As we gathered requirements and evaluated best practices it became evident that this was going to be a difficult project. One fixed requirement was that our customer repository would be the same directory as e-MPIRE. External credentialing was our strategic direction. There seemed to be no good answer. That is until we inadvertently saw an eCommerce presentation.
The product incorporated many of our desired capabilities. It was highly configurable and had many CRM features our business wanted. The issue was that the government doesn't sell products. Or does it? What if each application was a product? The purchase contract processes became the vetting processes for the applications. The result of the "purchase" was to put metadata (taxpayer ID) into the order (the credential store). The user (who was in the directory) owned the order that contained the credentials to launch applications. Web apps were written to accept credentials. If you didn't have the right signature, vetting would be triggered for the additional metadata. This was essentially step-up authorization, and the process included out-of-band verification of authorization. Step-up authorization can also be used for cross-organizational applications, where a partner's application metadata can be acquired through their vetting process. (In NYS, selling a vehicle may involve both Tax and DMV; in this scenario vetting could cross both.)
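The application-as-product flow above can be sketched as a small data model: each application declares the metadata it requires, a "purchase" places vetted metadata into the user's order, and launching an application with missing metadata triggers step-up vetting. This is a hypothetical sketch; the application names, metadata keys, and function names are illustrative, not the actual SWAN implementation.

```python
# Sketch of the "application as product" credential model (hypothetical names).
# A "purchase" of an application places vetted metadata into the user's order,
# and the order then acts as the credential to launch the application.

REQUIRED_METADATA = {            # each app (product) declares what it needs
    "file-sales-tax": {"taxpayer_id", "sales_tax_id"},
    "view-account":   {"taxpayer_id"},
}

def launch(order: dict, app: str):
    """Launch an app from a credential order; when required metadata is
    missing, signal step-up vetting (done out of band) instead."""
    missing = REQUIRED_METADATA[app] - order["metadata"].keys()
    if missing:
        return ("vetting-required", sorted(missing))
    return ("launched", app)

# A user in the directory owns an order holding their vetted metadata.
order = {"owner": "user123", "metadata": {"taxpayer_id": "NY-0001"}}

print(launch(order, "view-account"))    # ('launched', 'view-account')
print(launch(order, "file-sales-tax"))  # ('vetting-required', ['sales_tax_id'])
```

The key design point is that applications never manage users; they only accept a credential order, and the vetting ("purchase") process is what enriches that order with the metadata each application demands.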
This worked for business users as well. The only difference was that the creator of the account became the Master Business Administrator. Once they completed the complex vetting of a business (verifying data off of multiple types of returns: corporation, withholding, sales), an administrator could create other users who could perform actions for the business. Applications took the same credential, but the user was different.
Tax professionals and accountants were registered using their filing credentials. A taxpayer (business or individual) could then delegate their accesses to their representative (the same credential, i.e., the same order). Cross-account delegation is a very common pattern (Doctor/Patient, etc.). In eCommerce parlance, these partners are treated like subcontractors.
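The delegation pattern above amounts to attaching delegates to the owner's credential order: the application always receives the same credential, while the acting user may be the owner or any delegate. A minimal sketch, with all names hypothetical:

```python
# Sketch of cross-account delegation (hypothetical names): a taxpayer grants
# a representative access to the same credential order, so applications take
# the same credential while the acting user differs.

def delegate(order: dict, representative: str) -> None:
    """Grant a representative access to the owner's credential order."""
    order.setdefault("delegates", set()).add(representative)

def can_use(order: dict, user: str) -> bool:
    """The owner or any delegate may present the order as a credential."""
    return user == order["owner"] or user in order.get("delegates", set())

order = {"owner": "business-42", "metadata": {"taxpayer_id": "NY-0042"}}
delegate(order, "accountant-7")

print(can_use(order, "accountant-7"))  # True
print(can_use(order, "stranger-9"))    # False
```

The same structure covers the other delegation pairs mentioned (Doctor/Patient, accountant/client): access flows through the owner's order rather than through copies of credentials.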
Using the commerce product with a simplified page-launching pattern (the credential) allowed us to deliver this solution in 8 months. Within 18 months there were 48 integrated applications and a million registered users. The UI landing page had to change by user type: some accountants were managing more than 1,000 clients (client search had to be integrated), and the application list ran off the page, so product (application) bundling was used.
The CRM features allowed the business to package the applications, do outreach based on usage, and leverage other marketing features (hot spots, etc.). The eCommerce solution (with application standardization) allowed us to track usage and meet demand.
SWAN became the model for ATLaaS.
If we had just approached SWAN as another application, it would have taken considerable customization, and the CRM features would never have been implemented. Instead, it came up faster and cheaper, and with more functionality. It followed the true underlying principle that government constituents are customers.
There is nothing more unreasonable than approaching a problem in a new way.