Saturday, April 01, 2006

On Optimizing Software Code

Probably the most educational aspect of being a software architect is optimizing code that has already been built. It sharpens one's knowledge of both programming and process. Over the years, I have come to realize that a few notions can be crystallized into general rules of thumb. I enumerate them below.

1 - Delay optimizations as late in the process as possible
Optimizing code prematurely can result in sub-optimal choices. By delaying optimizations as late in the process as possible, one retains the ability to make the most effective choices. Typically, more information is available to a developer later in the development, deployment or support phases than earlier. Consequently, delaying optimization decisions lets one bring this additional information to bear on the task at hand.

2 - Build for correctness first
Optimizing code that is incorrect often results in rework. An analogy: make sure the path you are taking is the right one before driving faster. Failure to do so may get you to the wrong destination very quickly.
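One practical way to follow this rule is to keep a slow but obviously correct reference implementation around, and check any optimized replacement against it before trusting it. A minimal Python sketch (the function names and the sum-of-squares example are illustrative, not from the post):

```python
# A slow but obviously correct reference implementation, kept
# around so any optimized replacement can be checked against it.
def sum_of_squares_reference(n):
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

# An optimized version using the closed-form formula n(n+1)(2n+1)/6.
def sum_of_squares_fast(n):
    return n * (n + 1) * (2 * n + 1) // 6

# Verify correctness first; only then is the speed worth anything.
for n in (0, 1, 10, 1000):
    assert sum_of_squares_fast(n) == sum_of_squares_reference(n)
```

The reference version can live on in the test suite, so future optimizations are checked the same way.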

3 - Understand what you are optimizing for
This is something most developers need to keep in mind at all times during the optimization process. Additional knowledge of the system can actually be debilitating if one does not have a good understanding of what the real need is. Too often developers focus on the raw performance of the system. This is important, and quite often the primary driver of an optimization effort. However, ignoring the other dimensions of the optimization need can result in inferior outcomes.

Some of these dimensions include:
Performance - This is a measure of the raw speed of the application in completing its tasks. Raw performance, by itself, may not fully satisfy the requirements set before the designer.

Availability - This involves optimizing the time an application is up and running over a period of time. Increasing the availability of an application can result in poor performance or a decrease in flexibility based upon the design and implementation choices made.

Reliability - This involves increasing the ability of the application to provide correct, error-free results. A number of hidden defects become apparent only after an application has been used many times. This is especially so for applications that propagate their state over an extended period of time. Increasing reliability can result in a decrease in flexibility and performance.

Flexibility - Allows an application to respond to changing circumstances and inputs. For example, an application that can tailor its user interface to its user's preferences is more flexible. Flexibility inherently introduces a larger number of moving parts within an application. Consequently, reliability, performance, availability and space needs can all suffer.

Space (memory consumption, bandwidth consumption) - Memory and bandwidth considerations determine the amount of resources an application will need, but they can also have significant secondary effects, such as whether the application is usable on a range of devices or across a section of an enterprise. Additional performance may be obtained by moving server-side processing into local memory, but like everything else this is subject to the law of diminishing marginal returns.

4 - Start with a global perspective
This step is harder than it may seem, precisely because we don't always know what future needs will be. A structured, creative approach helps harness previous experience and current thinking. Toward this end, the optimizer must consider architectural choices as well as business choices. For example, consider remote server processing as an alternative to in-memory processing.

5 - Profile, profile, profile
Profiling refers to measuring the characteristics of your application. As part of this you might:

Profile for processing time
Profile for memory consumption
Profile for repeated activities

Investing time in profiling provides additional information on the detailed workings of the application without getting lost in the weeds. Any time spent here will help create better heuristics and better optimizations.
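The first item on the list above, profiling for processing time, can be done with nothing more than a language's standard tooling. In Python, for instance, the standard-library cProfile and pstats modules do the job (the busy_work function here is a made-up stand-in for real application code):

```python
import cProfile
import io
import pstats

def busy_work():
    # A deliberately repetitive task standing in for real code.
    return sum(i * i for i in range(100000))

# Measure the function, then print the most expensive calls.
profiler = cProfile.Profile()
profiler.enable()
busy_work()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The sorted report points straight at where the time actually goes, which is frequently not where intuition says it goes.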

6 - Focus on local optimizations last
The strength of being hands-on and a code guru comes with disadvantages, one of which is the proclivity to dive as deeply as possible very quickly. This leads a programmer to use time and resources inefficiently. A few maxims have been borne out in the field of programming:

Efficient code does not always result in efficient applications
Simplicity should override elegance and brevity

Indeed, if anything, a programmer should resist the temptation to show off how good a program he or she can write.
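The first maxim above can be seen in a small, hypothetical Python sketch: a carefully hand-tuned linear search is still beaten, at the application level, by plainer code that simply uses a better data structure:

```python
# Micro-optimized linear search: tight code, but O(n) per lookup.
def contains_linear(items, target):
    for item in items:
        if item == target:
            return True
    return False

# Simpler code with a better data structure: O(1) average lookup.
def contains_set(item_set, target):
    return target in item_set

items = list(range(100000))
item_set = set(items)

# Both agree on every result; the data-structure choice, not
# line-by-line cleverness, determines application performance.
assert contains_linear(items, 99999) == contains_set(item_set, 99999)
assert contains_linear(items, -1) == contains_set(item_set, -1)
```

Locally, the loop is about as efficient as such a loop can be; globally, it is the wrong approach once lookups are frequent.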

7 - The competing needs of space and time
Because a computer has a finite amount of memory and disk (usually referred to collectively as space), every program that deals with large volumes of data or heavy processing must operate within those limits. When a program comes up against one of these limits, it has to perform additional processing to accomplish its task within the limited space available to it.

(c) 2006 Vivek Pinto For more details please visit us at Wonomi Technologies
