Where's my ROI? - The financial benefits of using JHawk
Any development manager who has managed the delivery of a software system will recognise the technical and productivity benefits of using JHawk. However, in any organisation it often falls to that same development manager to make the business case for purchasing and using JHawk. This section is designed to help you make that case.
Static analysis - approach, costs and benefits
The primary benefit of using a tool to assess the quality of your code is that it provides an objective analysis. Simply looking at code is purely subjective: inspecting any code (yours or someone else's) will be prone to bias, consciously or unconsciously. By using an objective tool you are taking one of the most important steps in any quality control system - 'Management by fact'.
Another approach to analysing code quality is visual inspection (code reviews). This is time consuming and typically involves the use of the project's most valued (and expensive) resources - the most experienced programmers.
Capers Jones has published a series of annual studies analysing the effectiveness of various code quality approaches. His studies are useful for a number of reasons - they are written by a person whose line of business is helping companies improve the quality of their code, they cover a very wide range of products, they cover a long period of time and, most importantly, they attempt to quantify the costs and benefits of different approaches. In the most recent study Capers Jones' analysis of 4 different test cases using combinations of three quality approaches (Static Analysis (S), Code Inspection (I) and manual testing (T)) produced the following results -
The two lowest overall testing costs are for the cases (C1 and C3) where Static Analysis was involved. In the first case approximately $108,000 was saved (compared to the most expensive case, C4) and in the second approximately $100,000 was saved. In each case the static analysis itself cost $12,772, so the cost of the analysis was covered roughly 8 times over by the savings.
Where we can make a direct comparison (i.e. where the only difference is the use of static analysis) the total cost savings are $33,000 (C1 vs C2) and $151,000 (C3 vs C4). Comparing C3 with C2 we see that Code Inspection is more effective than Static Analysis, reducing the total cost by $11,000.
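The ROI arithmetic above is worth sanity checking. A minimal sketch (the dollar figures are taken from the study quoted above; the class and variable names are illustrative only):

```java
public class RoiCheck {
    public static void main(String[] args) {
        double staticAnalysisCost = 12772.0;  // cost of static analysis per case, from the study
        double savingsC1 = 108000.0;          // saving for C1 vs the most expensive case (C4)
        double savingsC3 = 100000.0;          // saving for C3 vs C4

        // How many times over the savings cover the cost of the analysis
        System.out.printf("C1 ROI multiple: %.1fx%n", savingsC1 / staticAnalysisCost);
        System.out.printf("C3 ROI multiple: %.1fx%n", savingsC3 / staticAnalysisCost);
    }
}
```

Running this gives multiples of roughly 8.5x and 7.8x, consistent with the "covered roughly 8 times" claim above.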
In the Capers Jones study (http://sqgne.org/presentations/2012-13/Jones-Sep-2012.pdf, page 7) he mentions a number of actions that development managers could take which he suggested had a greater than 90% chance of success in reducing delivered defects.
Capers Jones' analysis also shows significant reductions in defects at delivery at higher SEI CMMI Levels (http://sqgne.org/presentations/2012-13/Jones-Sep-2012.pdf, page 12). Two important process areas are introduced at CMMI Level 2 - MA (Measurement and Analysis - http://www.software-quality-assurance.org/cmmi-measurement-and-analysis.html) and SAM (Supplier Agreement Management - http://www.software-quality-assurance.org/cmmi-supplier-agreement-management.html). JHawk can play a critical role in both of these areas by providing an objective quality measure that can be used to assess both internally produced and externally supplied (e.g. outsourced) code. The ability to tailor JHawk to a managed quality programme means that it can grow with your process definition as you move on to CMMI Level 3, which requires finer levels of control and definition of processes.
A report from 2004 (The Business Case for Software Quality - Bender RBT, Inc.) lists a number of observations on the attitudes of companies to testing. Even though this report is from 2004, my experience and what I read of others' experiences suggest that little has changed.
JHawk - not just static analysis
JHawk contributes more than just static analysis to a code quality regime. By using the range of metrics produced, users can pinpoint code that is more likely to contain errors. This allows code reviews to be 'directed', making sure that susceptible code is inspected thoroughly. Reviewers who have to analyse code in a given time period may well end up rushing their reviews as the deadline approaches - JHawk can push susceptible code to the front of the review process.
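The 'directed review' idea above amounts to ranking code by metric value and reviewing the riskiest items first. A minimal sketch of that ordering step - the class names and metric values here are invented for illustration, and in practice the figures would come from JHawk's exported results rather than being hard-coded:

```java
import java.util.Comparator;
import java.util.List;

public class ReviewQueue {
    // Hypothetical per-class metric record; real values would be read from a JHawk export.
    record ClassMetric(String className, int cyclomaticComplexity) {}

    public static void main(String[] args) {
        List<ClassMetric> metrics = List.of(
                new ClassMetric("OrderProcessor", 42),
                new ClassMetric("InvoiceFormatter", 7),
                new ClassMetric("PaymentGateway", 31));

        // Review the most complex (most defect-prone) classes first,
        // so they are inspected before deadline pressure sets in.
        metrics.stream()
                .sorted(Comparator.comparingInt(ClassMetric::cyclomaticComplexity).reversed())
                .forEach(m -> System.out.println(m.className() + " -> " + m.cyclomaticComplexity()));
    }
}
```

With the sample values above, OrderProcessor is queued first and InvoiceFormatter last.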
A number of the metrics produced by JHawk (particularly those at class and package level) relate directly to potential design defects. Detecting these early is critical, as design defects are more difficult to fix and have wider implications due to the amount of code that they can affect.
The professional versions of JHawk provide the ability to create your own metrics. This allows the creation of metrics more finely tuned to the type of application being analysed. These metrics do not need to be related to code artefacts - they can use data from defect databases, code repositories, or anything else that you can access via Java code. This allows you to create an integrated view of the factors that may affect the quality of the software that you produce.
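To illustrate the idea of a custom metric that blends code data with external data, here is a minimal sketch. It does not use JHawk's actual custom-metric API; the method, weights, and defect counts are all hypothetical, and exist only to show how a code metric might be combined with, say, a defect database:

```java
import java.util.Map;

public class RiskMetric {
    // Illustrative only: combines a code metric with external defect history.
    // The weighting here is arbitrary; a real metric would be calibrated per project.
    static double riskScore(int cyclomaticComplexity, int historicalDefects) {
        return cyclomaticComplexity * 0.6 + historicalDefects * 4.0;
    }

    public static void main(String[] args) {
        // Hypothetical defect counts, as might be pulled from a bug tracker.
        Map<String, Integer> defects = Map.of("OrderProcessor", 5, "InvoiceFormatter", 0);

        System.out.printf("OrderProcessor risk: %.1f%n",
                riskScore(42, defects.get("OrderProcessor")));
        System.out.printf("InvoiceFormatter risk: %.1f%n",
                riskScore(7, defects.get("InvoiceFormatter")));
    }
}
```

A composite score like this lets a single ranked list reflect both how complex a piece of code is and how troublesome it has been historically.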
JHawk also provides many other benefits that offer reassurance about the quality of our product and that will contribute to reducing the total cost of using it.
We hope that this information is useful to you. Please feel free to contact us at email@example.com if you have any questions.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.