Moving Towards An Agile Approach
When a traditional SDLC approach does not help in reducing Application Lifecycle Management (ALM) costs and risks, IT leaders are forced to look to alternative approaches. One approach that has become popular is the Agile approach. There is a lot of literature available on the different agile approaches, so I will not delve into them in depth. This article will primarily concentrate on an approach that I have used with good success in transitioning traditional SDLC-based IT organizations to more agile approaches.
The diagram above shows an approach that, in my opinion, incorporates the best of both worlds. It allows traditional firms to retain control in the form of traditional project management tools and processes while incorporating the iterative aspects of agile approaches. This middle ground allows a more controlled transition, which helps ameliorate the troubles organizations face in making the change. These difficulties include:
- Lack of adequately trained personnel or training budgets to bring all team members up to speed quickly
- Existing infrastructure, templates, software and tools that support the traditional approach
- Lack of understanding from upper management, and therefore an increased perception of risk
- The inability to change elements of enterprise control, such as Change Control and enterprise architectures/policies, without excessive audit overhead
The reader should note that the business owner is part of the agile team and that the time needed to get approval from a Change Control Board is considered part of the agile time box.
To be continued.
Business Process Analysis
Oftentimes a software architect is called on to fill a role that does not fit his strengths. This could be driven by a number of reasons. I list a few that have forced me to sharpen my business process analysis skills. I am sure you can add to this list.
- The business analyst walked out in a huff/to new pastures at a critical time and someone had to fill the spot.
- You wanted to learn and experience something new and your supervisor knew of a vacant position that could use a "smart guy".
- There was no one else around who had a clue as to what business process meant and your little knowledge resulted in you being crowned as the liege of that domain.
However, with time I have actually begun to like the business analysis part of a project as much as any other activity I perform. In the role of a business analyst, a software architect is able to obtain a coherent and extensive view of the client's needs directly. This single step, in my opinion, reduces a lot of issues by eliminating what I like to call communication "hear-tells". My increased capability in this arena has been helped along by authors, mentors and kind souls to whom I owe so much. Below is a synopsis of what I have learned.
The Steps
The first step in any business process analysis is to map out the flow. This step comes intuitively to an information technology professional as it closely resembles the flow charts used in the software world. In this step, each activity and decision point is mapped out as a unit of work performed by the business. This gives you an overall idea of how the business works. It usually involves facilitated working sessions in which the business process experts contribute knowledge and experience to help map out the business. Unfortunately, a lot of business process work stops at this stage. I have had clients who felt that mapping out the process was all they really needed from a process mapping effort and that time would be better spent gathering requirements to automate many of the activities performed. This can be a short-sighted approach because it contains a hidden assumption: that the business process as it exists does not warrant any improvement.
A more complete approach, in my opinion, would be to go to the next step and perform a business process analysis. In this step, the analyst brings to bear powerful tools that identify weaknesses in the process and narrow down the root cause of those problems. There are many techniques that can be used to perform business process analysis. Two of the techniques that I have grown to prefer over others are:
- Value-added analysis and
- Constraints analysis
Value-Added Analysis
In this technique, the analyst goes through each of the process flows mapped out in step one and identifies which of the activities in that flow add value to the business as a whole and which do not. As a refinement to this technique, it helps to increase the granularity with which a process activity is graded. For example, an analyst may use five grades of very high, high, medium, low and redundant to discriminate between activities that provide varying levels of utility to the business. I find color coding to be very useful at this stage; it helps me visualize the areas I need to focus on.
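As a minimal sketch of how such grading might be recorded, the Python snippet below tags each mapped activity with a value grade and groups the activities so that the low-value and redundant steps stand out. The activity names and grades are purely hypothetical examples, not from any real engagement.

from collections import defaultdict

GRADES = ["very high", "high", "medium", "low", "redundant"]

# Hypothetical activities from a mapped process flow, each with a value grade.
activities = {
    "Receive customer order": "very high",
    "Re-key order into legacy system": "redundant",
    "Credit check": "high",
    "Print and file paper copy": "low",
    "Schedule delivery": "medium",
}

# Group activities by grade so low-value and redundant steps stand out.
by_grade = defaultdict(list)
for activity, grade in activities.items():
    by_grade[grade].append(activity)

for grade in GRADES:
    for activity in by_grade.get(grade, []):
        print(f"{grade:>10}: {activity}")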
Constraints Analysis
In this technique, we first identify where all the pain points in the business process are. Next, we look for the places where work in progress piles up. Each of these points is an indicator of a constraint the process faces. These constraints are commonly referred to as bottlenecks, a term that seems to be more easily understood. Having identified the bottlenecks, the analyst seeks to ensure that the number of work units processed by each such activity is maximized, so that the rate of units being processed (also called throughput) is maximized. There are a number of options available to the experienced analyst, including reducing the cycle time of that activity and consolidating it with others.
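To make the bottleneck idea concrete, the sketch below treats a process as a chain of activities with made-up cycle times and flags the one with the lowest hourly capacity; that activity caps the throughput of the whole chain, and it is also where work in progress tends to pile up.

# A sketch of constraints analysis on a chain of activities.
# Cycle times (minutes per work unit) are hypothetical.
activities = {
    "Log request": 2,
    "Review and approve": 15,   # work in progress tends to pile up here
    "Fulfil request": 6,
    "Notify requester": 1,
}

# Hourly capacity of each activity; the minimum caps the whole process.
capacity = {name: 60 / minutes for name, minutes in activities.items()}
bottleneck = min(capacity, key=capacity.get)

for name, rate in capacity.items():
    marker = "  <-- bottleneck" if name == bottleneck else ""
    print(f"{name:20s} {rate:5.1f} units/hour{marker}")

print(f"Process throughput is capped at {capacity[bottleneck]:.1f} units/hour")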
Besides these sophisticated techniques, however, I have found that the greatest value I can bring to the client in the role of a business analyst is to keep my thinking hat on and to use common sense with a fresh perspective. Some of the questions I ask that keep me moving forward are:
- What will happen if this step goes away?
- Is this the right place to do this work?
- How does the business benefit from it?
- Can this be done elsewhere?
- Is something missing (typically information) that should be included?
- What is the history behind this?
Conclusion
While the list of questions is extensive and never ending, the goal of the activity should never be. A software architect's strength in understanding detail can get him enmeshed in analysis paralysis. The key symptom of this stage appears when the goal of the quest turns into understanding all the detail rather than figuring out a solution. This can be a difficult slope to manage and is sometimes what differentiates a successful effort from a failed one.
Project estimates - understanding the developer's reticence
Perhaps there is nothing more frustrating for a developer than to be asked to implement functionality that was architected in a context that he or she has very little knowledge about. What compounds the frustration is the fact that technology evolves rapidly, so that knowledge and skills gained on an older version of an application are no longer relevant or meaningful in a newer version. When the rubber hits the road and it is time to implement, timelines on a project plan that were estimates suddenly become deadlines. Quite often the developer would have furnished those estimates based on experience with the older version.
There are good reasons for architects and infrastructure personnel to acquire the latest version of a software product despite the angst of their developers. These include maximizing the features available in the purchased version, improved licensing agreements, better price-performance ratios and reduced support overhead. This situation played out once again at a recent client where the specific technology involved was Microsoft's SQL Server database. In this article I seek to traverse the roadmap of the SQL Server product and Microsoft's database interfaces to elucidate the complexities that a software architect must keep in mind while setting expectations amongst his/her stakeholders.
A Brief History of SQL Server
The first SQL Server, version 1.0, was introduced to the market via an alliance between Ashton-Tate, Sybase and Microsoft. At its core it was built on Sybase relational technologies. At that time the only way a developer could interface with the database was through a library called DBLib. This library used a protocol called the Tabular Data Stream to communicate with the database. Like many applications of its time, it suffered from poor error checking and unexplained crashes.
To further its market penetration, Microsoft decided to couple SQL Server tightly with its NT operating system. To facilitate this, it began to rewrite the database engine core. By 1995, SQL Server version 6.0 had been built and the partnership with Sybase had come to an end. Subsequently, Microsoft went on to release additional versions. Version 6.0 centralized the administrative capabilities of SQL Server and therefore found new converts amongst the support and administrative staff of many of its customers. By version 7.0, however, it had introduced its own architectural framework and eliminated all dependency on the initial database engine core. The 2000 version came with additional features, including data warehousing and Data Transformation Services (DTS); the latter enabled SQL Server to support the large-scale data movement that its BulkCopy feature had previously attempted to support. The latest version adds a reporting service, which eliminates the need for third-party reporting tools, and an integration service, which improves performance.
A Brief History of SQL Server Connectivity
To improve on the value delivered by its front-end technologies, a number of more mature interface technologies were built. These included the well-known Open Database Connectivity (ODBC), the Joint Engine Technology (JET) and Object Linking and Embedding (OLE). These interfaces allowed the developer to connect to multiple databases. The ODBC interface provided a layer that connected to the native APIs of each database. It received a favorable reception in the developer community and is still widely used today as an integral part of the Windows operating system. The performance of these interfaces was poor, however, and they were never able to gain entry into the enterprise market. Several additional interfaces were built, including DAO, COM and ADO. Each of these also suffered from increasingly complex layered designs and hierarchies that impacted performance and hindered comprehension of the technology.
By the mid-to-late nineties, it became obvious that a whole new approach to connecting to the database engine was needed. The spread of the Internet, the advent and adoption of XML as the descriptive language of choice, and the need to create the next generation of COM resulted in Microsoft creating a new database connection layer as part of its initiative to build the .NET environment. It eliminated the complexity and obscure component object models of its predecessors in favor of speed and simpler programming for the developer. This interface was named ADO.NET. While the nomenclature was chosen to provide continuity of branding, it confused developers as much as it helped.
The Consequences of Change
Each of these evolutions improved performance and eliminated shortcomings of the prior products. Accordingly, they were aggressively marketed by the vendor and adopted by its customers. However, developers were faced with the task of learning a new feature set while delivering to estimates they had made using their experience on prior projects. Inevitably, most products go through major overhauls at some point in their life cycle. At each of these stages, the underlying architecture, user interface and semantics change markedly, which means most development efforts require at least some research. An architect must take these overhauls into consideration and communicate the implications to the project manager, who might be unaware of the magnitude of the risk a project faces on account of the use of a newer version.
Selecting right! - Part One
It is often said that business is all about people. That statement is true more often than not. People form a critical part of the value chain that is an organization's lifeblood, yet too often they are given the least importance. Even poor processes and systems can perform better when staffed with competent and knowledgeable workers. Yet it is often this area that gets the least attention.
In the information technology arena the malady displays the same basic symptoms: high turnover, high absenteeism and work-related injuries, underperformance and a penchant for gaming 'the system'. When one considers the time, effort and resources an organization spends recruiting, training and compensating an employee, one quickly realizes that a software architect or technical project manager cannot relegate the hiring process to the human resources department alone. Indeed, even business managers more often than not lack the knowledge to effectively evaluate candidates via the blunt tool of an interview.
In the next three essays I will attempt to enumerate what I believe to be the essentials for selecting right.
1 - Understand the role
Perhaps the most important ingredient of success in any endeavor is preparation. Towards this end, before one even looks at the resume of a potential hire, one should read and re-read the job description. This helps you get a good grasp of what the job entails. Next, understand the role. Some of the questions that will help you understand the role include:
- What will that employee be doing on a day-to-day basis?
- What are the characteristics of successful individuals in that role?
- What are the demands and rewards it offers?
The answers to these questions will act as a reality check. They should be used to vet the job description you have been furnished. A manager I knew said, "I do not have the time to research what goes on in her department, I am too busy with my own. She should just give me the A members of her team." Essentially, this person is saying that he prefers to spend more time training, being frustrated with and firing the wrong hire, and then asking for a new person on the team. Indeed, selecting right is not just about doing yourself a favor; it is about doing the potential recruit a favor. An employee invests as much time in a firm as the firm does in him/her.
2 - Define the goals
When recruiting for a position, an effective manager must ensure that the goals the individual must meet are clearly understood right at the outset. This is where a lot of body shops and recruiting firms fail. Many of these smaller firms are very resume-focused. Accordingly, they strive to present resumes that are an "exact match". Some of these attempts lead to hilarious results while others border on being fraudulent. Indeed, an individual with a resume that exactly matches the job description should be regarded with a grain of salt. More often than not these resumes are the byproduct of a "keyword search" driven business environment rather than a diligent and extensive search.
To clarify my point, let me furnish an example. If one of the goals of the firm is to have individuals who are able to adapt to and learn newer technologies, then limiting the recruiting effort to individuals who have focused and deep knowledge of a single technology and skill will be counterproductive. In fact, I know of more than one case in which temporary and full-time staff with a broad body of knowledge and skills were more productive than those with only a few.
3 - Set expectations
I find this phase to be key in the preparatory stage of the interviewing process. While pre-set expectations can lead to bias, they are a double-edged sword: a savvy architect must still go in with at least a minimal set of expectations because of the benefit that comes with them. When you go into the interview process with a clear idea of what you want from a potential recruit, you will find the courage to reject an entire set of candidates if they do not meet your needs. This can be a very tough thing to do. The pressure from the human resources department can be intense; indeed, the pressure from stakeholders can be worse. Your project's stakeholders will want to see results, and at times nothing worries them more than seeing empty chairs.
In fact, as a client, you should be especially wary of a few ploys your consulting firm will use to induce an acceptance from you. For example, you may be presented with an initial set of sub-standard consultants followed by a decent one. In such an event, the relative qualifications of the last candidate may tempt you to ease the pain of having to reject yet another candidate. Indeed, some human resource managers will screen you out of certain selection steps if they believe that you are a "high bar" or, less euphemistically, a "difficult" interviewer.
To be continued.
Automation!! Really????
With the incessant drive to demonstrate value and a tangible return on investment, decision makers are increasingly being presented with business cases that purport to deliver better value through automation. There are a number of factors that drive this thought process. Most of them are well meaning and lucidly articulated. However, it behooves an executive to look at the argument being presented in a little more detail.
Some of the factors presented to a decision maker take the form of broad, sweeping statements. For example, I recently heard a consultant say, "It is a best practice to automate the termination of accounts." While this statement may not in itself be erroneous, the automation argument has an inherent weakness buried within it. That weakness is, like most things, a part of its strength.
Automation results in quantifiable cost savings in terms of head count and its associated overhead. Accordingly, it is very easy for a consultant to put together and make the case for automation. The investment is well known and measurable, the benefits can be calculated from documented and referenced assumptions, and the case itself plays to a need that is tangible to every manager. Furthermore, automation provides the additional benefit of control. Unlike humans, a computer program or machine will deliver predictable, consistent and reliable service with little maintenance.
However, automation suffers from two major weaknesses. The first is that any automation effort requires an upfront fixed cost, and typically this cost is significant. The second is that automation reduces flexibility. While human resources can be retrained and redeployed, it requires significantly greater effort to reprogram, retool or recalibrate an automated solution. What compounds the difficulty is that technology training and integration costs lock the implementer in to a vendor for the long term, which in turn increases the cost of further enhancements. Of course, it can be argued that retraining a person can be just as daunting a task. This is a valid point, but it does not detract from the essence of the argument. Indeed, automated solutions also require training and education, as they modify existing processes, introduce newer interfaces and render data in new forms.
Furthermore, technology suffers from increasingly short lifecycles, so the benefits that are expected to accrue over a longer duration do not materialize. Typically this is because automated solutions deliver reductions in incremental cost over a period of time; the shorter that period, the smaller the benefit gained.
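A back-of-the-envelope calculation makes the point. The figures in the sketch below are purely illustrative, not drawn from any real business case: if the payback period implied by the upfront cost and the incremental savings is longer than the realistic lifetime of the technology, the projected benefits never materialize.

# Illustrative break-even arithmetic for an automation business case.
# All figures are hypothetical.
upfront_cost = 250_000          # initial fixed cost of the automation effort
annual_savings = 60_000         # incremental savings per year (headcount, overhead)
expected_lifetime_years = 3     # realistic lifetime before the technology is overhauled

payback_years = upfront_cost / annual_savings
net_benefit = annual_savings * expected_lifetime_years - upfront_cost

print(f"Payback period: {payback_years:.1f} years")
print(f"Net benefit over {expected_lifetime_years} years: {net_benefit:,.0f}")
# Here the payback period (about 4.2 years) exceeds the expected lifetime,
# so the expected benefits are never realized.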
I believe that executives must carefully weigh three key factors within their organizational context:
- The rate of change of the legal framework within which their processes exist, and the ease with which the context of those processes can be codified into rules. This will be a key determinant of the flexibility and longevity of any automated solution.
- The marginal cost of redeploying or enhancing the productivity of existing resources via training and motivation. This will provide a good metric with which to weigh the alternatives.
- Any existing factors, such as integration, sunset, data migration or other operational concerns, that make automation risky.
Indeed, the above factors may, when analyzed thoroughly, prod you to respond to a solution emblazoned "Automation!!" with a skeptical "Really???". A healthy dose of skepticism will ensure that the emperor continues to wear his clothes.
On Optimizing Software Code
Probably the most educational aspect of being a software architect is optimizing code that has already been built. It helps one improve one's knowledge of programming and processes. Over the years, I have come to realize that a few notions can be crystallized into general rules of thumb. I enumerate them below.
1 - Delay optimizations as late in the process as possible
Optimizing code prematurely can result in sub-optimal choices. By delaying optimizations as late in the process as possible, one keeps open the possibility of making the most effective choices. This is because a developer typically has more information later in the development, deployment or support phases than earlier. Consequently, by delaying optimization choices as late as possible, one gets to bring this additional information to bear on the optimization task at hand.
2 - Build for correctness first
Optimizing code that is incorrect often results in rework. A useful analogy is to make sure the path you are taking is the right one before driving faster; failure to do so may get you to the wrong destination very quickly.
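As a small, hypothetical illustration of this rule, the sketch below keeps a straightforward version as the reference and checks an optimized variant against it before any further tuning. The function names and data are made up for the example.

# Build for correctness first: keep a straightforward reference implementation
# and verify the optimized variant against it before tuning further.

def total_owed_reference(invoices):
    # Obvious, easy-to-verify version: sum the open balances one by one.
    total = 0.0
    for invoice in invoices:
        if not invoice["paid"]:
            total += invoice["amount"]
    return total

def total_owed_optimized(invoices):
    # A terser variant we might later tune further.
    return sum(i["amount"] for i in invoices if not i["paid"])

invoices = [
    {"amount": 100.0, "paid": True},
    {"amount": 250.0, "paid": False},
    {"amount": 75.0, "paid": False},
]

# Verify correctness before driving faster.
assert total_owed_optimized(invoices) == total_owed_reference(invoices)
print(total_owed_optimized(invoices))  # 325.0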
3 - Understand what you are optimizing for
This is something most developers need to keep in mind at all times during the optimization process. Additional knowledge of the system can actually be debilitating if one does not have a good understanding of what the real need is. Too often developers focus on the raw performance of the system. That is important, and quite often the primary driver of an optimization effort, but ignoring the other dimensions of the optimization need can result in inferior outcomes.
Some of these dimensions include:
Performance - This is a measure of the raw speed of the application in completing its tasks. Raw speed by itself, however, may not fully satisfy the requirements set before the designer.
Availability - This involves optimizing the time an application is up and running over a given period. Increasing the availability of an application can result in poorer performance or a decrease in flexibility, depending on the design and implementation choices made.
Reliability - This involves increasing the ability of the application to provide correct, error-free results. A number of hidden defects become apparent only after an application has been used many times, especially in applications that propagate their state over an extended period. Increasing reliability can result in a decrease in flexibility and performance.
Flexibility - This allows an application to respond to changing circumstances and inputs. For example, an application that is able to tailor its user interface to its user's preferences is more flexible. Flexibility inherently introduces a larger number of moving parts within an application; consequently, reliability, performance, availability and space needs can all suffer.
Space (memory and bandwidth consumption) - Both memory and bandwidth considerations play an important role in determining the amount of resources an application will need, and they can have significant secondary effects, ranging from whether the application is usable on a range of devices to whether it can be deployed across a section of an enterprise. Additional performance may be obtained by moving server-side processing into local memory, but like everything else this is subject to the law of diminishing marginal returns.
4 - Start with a global perspective
This step is harder than it may seem, precisely because we do not always know what the future needs will be. A structured, creative approach will help harness previous experiences and current thinking. Towards this end, the optimization expert must consider architectural choices as well as business choices.
For example, consider remote server processing as an alternative to in-memory processing.
5 - Profile, profile, profile
Profiling refers to measuring the characteristics of your application. As part of this you might:
- Profile for processing time
- Profile for memory consumption
- Profile for repeated activities
Investing time in profiling provides additional information on the detailed workings of the application without getting into the weeds. Any time spent here will help create better heuristics and better optimizations.
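As a minimal sketch, Python's standard library cProfile and timeit modules cover the first and third measurements; memory profiling usually requires a separate tool. The function below is only a stand-in workload, not part of any particular application.

# A minimal profiling sketch using only the standard library.
# report_totals is a stand-in workload; substitute your own code.
import cProfile
import timeit

def report_totals(n=100_000):
    return sum(i * i for i in range(n))

# Profile for processing time and repeated activities: which calls dominate,
# and how often they are invoked.
cProfile.run("report_totals()", sort="cumulative")

# Time a single hot spot in isolation.
print(timeit.timeit("report_totals()", globals=globals(), number=20))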
6 - Focus on local optimizations last
The strength of being hands-on and a code guru comes with its disadvantages, one of which is the proclivity to delve as deeply as possible very quickly. This results in a programmer using time and resources inefficiently. A few maxims have been borne out in the field of programming:
- Efficient code does not always result in efficient applications
- Simplicity should override elegance and brevity
Indeed if anything, a programmer should resist the temptation to show how good a program he/she can write.
7 - The competing needs of space and time
Because a computer has a finite amount of memory and hard disk space (usually referred to simply as space), every program that deals with large volumes of data or processing has to operate within those limits. When the program comes up against one of these limits, it has to perform additional processing to accomplish its task within the limited amount of space available to it.
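A common illustration of this trade-off is choosing between reading a large file into memory in one go and processing it in fixed-size chunks. The sketch below (the file name is a hypothetical placeholder) trades a little extra per-chunk processing time for a bounded memory footprint.

# Trading time for space: process a large file in fixed-size chunks instead of
# loading it all into memory. The file name below is a hypothetical placeholder.

def count_bytes(path, chunk_size=64 * 1024):
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)   # only chunk_size bytes live in memory
            if not chunk:
                break
            total += len(chunk)          # extra bookkeeping per chunk costs time
    return total

# Usage (assumes the file exists):
# print(count_bytes("transactions.log"))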
Orphaned Systems
In most large enterprises, as in societies at large, one will on occasion find a system that seems to have no owners. It exists by itself, at times in its own little world. The users of this system are few and far between. Yet an assessment of the system indicates that all is well with it. Indeed, you might open the door to its room and find neat stacks of voluminous documentation sitting next to an array of playfully sparkling LEDs. Even more intriguing is the fact that its users seem perfectly happy with it, save a few minor complaints.
"Its printer sometimes does not work, but if I reboot it all works well again" says one. "I remember it taking a long time to print the cash ledger report", says another. Walking around to a graying technician who seems to find enormous gratification in greasing a flange, you might even hear the systems pedigree. The technician might furnish you with its history, in a voice that conveys the pleasure of one willing to impart information to any willing to listen. "The system is more than seven years old", he will probably tell you. "When it was first installed I remember they had a small party at the shop, in bay three. They don't have those anymore ... too much cost cutting." He then mentions the name of a fancy consulting firm and a equally well known vendor that were responsible for installing the system. "They charged more than a million bucks to put it in" he said. "Indeed everyone at the shop used to use it. The administrator used to be a Russian chap...very smart." He then shrugged.
Your architect hat goes on, and every circuit in your consulting brain lights up. History, you remind yourself, provides the chronological context of a system; indeed, it can be as valuable as your current state assessment. You warm up to an enlightening conversation and poke and prod with questions that are open-ended enough to elicit a response from the gentleman. He looks at you and cocks his head quizzically. He inquires about your origins and, without waiting for an answer, as if to disabuse you of any notion that his query implied negative connotations, he continues, "Smart guys! Are you going to fix the system?"
You hastily correct his assumption, not wanting to raise any expectations. "No", you say, "I am here to understand what it does. Do you know if there is anything wrong with it?"
He shrugs. "No one uses it nowadays", he says tentatively. "Till about four years ago, there used to be a lot of people running around that room. Now hardly anyone seems to use it. The end of the quarter and the end of the year used to be the worst times."
"Who repairs the system?" you ask eagerly. He pauses, thinks and then says "I do not know. I don't think it has needed any repairs". His demeanor becomes more distant as he reaches the limit of his knowledge.
You then head over to the head of the department, who has a large sign above his door saying Manager. The sign lays greater emphasis on the title than on the name of the person, and seeks to lend gravity to it in this manner. You glance at your watch; it is time for your meeting. A brief, sharp knock followed by the usual formalities sees you seated in front of a beaming middle-aged man multitasking between his phone, brunch and email. The sandwich disappears rapidly, and once he has dispatched the email and hung up the phone you get his undivided attention.
Half an hour later you leave with the knowledge that the system still works, that the administrator left a few years ago and that, due to budget constraints, no system maintenance or upgrades have been performed for the last three years. So why, you wonder, does no one use the system?
(c) 2006 Vivek Pinto. For more details please visit us at Wonomi Technologies.