Telelogic Product Support: Telelogic Rhapsody Forum
Topic Title: UML Model Metrics
Created On: 8-Sep-2004 18:50
Status: Read Only
 8-Sep-2004 18:50


Christopher Carson

Posts: 269
Joined: 30-Jun-2004

Anyone have ideas on this? What metrics are useful? Can these metrics be correlated to errors (i.e. the more complex the model, the more errors)? Are there any tools or VBA scripts that support gathering of metrics?

Thanks in advance for your help.

Christopher Carson
 8-Sep-2004 18:51


Paul Urban

Posts: 220
Joined: 30-Jun-2004

In the Client Center there is a VBA script that appears to gather some metrics, such as class and package counts. Erin had also written the attached macro.

Rhapsody Wizard
12/28/2000
When the "RhapsodyWizard" macro is executed, a Rhapsody user can do the following:
1. Set up a project for ROPES, Builds, Subsystems, Domains, and System packages

2. Add different types of classes to a package (e.g. a singleton)

3. Format use cases (uses a derivative of one of Alistair Cockburn's templates)

4. Count the number of classes and packages

5. Add standard operation properties for event serialise/unserialise and class copy constructor
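The class-and-package counting in item 4 is simple to sketch. Here is a minimal, hypothetical illustration in Python over a toy nested-model structure; the element types and the traversal are stand-ins for illustration only, not the actual Rhapsody COM API that such a macro would drive:

```python
# Toy model tree: each element has a metatype and nested children.
# Illustrative only -- a real macro would walk the Rhapsody model
# via its COM API rather than this stand-in structure.

def count_elements(element, counts=None):
    """Recursively tally element metatypes in a nested model."""
    if counts is None:
        counts = {}
    counts[element["type"]] = counts.get(element["type"], 0) + 1
    for child in element.get("children", []):
        count_elements(child, counts)
    return counts

model = {
    "type": "Project",
    "children": [
        {"type": "Package", "children": [
            {"type": "Class", "children": []},
            {"type": "Class", "children": []},
        ]},
        {"type": "Package", "children": [
            {"type": "Class", "children": []},
        ]},
    ],
}

print(count_elements(model))
# {'Project': 1, 'Package': 2, 'Class': 3}
```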
 26-Aug-2005 19:58


Richard Amenta

Posts: 5
Joined: 23-May-2005

A really good metric tool would be very useful. I should be tracking my project with something. Besides, I've got to keep the process police happy with something other than useless lines-of-code counts.
What I really want is to be able to tell the nay-sayers that this process is actually better than our non-UML, non-auto-codegen processes. I'd think I-Logix would be first in line to provide a useful tool that lets us grunts show management that this is a worthwhile endeavor.
 28-Aug-2005 17:06


Jesper Gissel

Posts: 88
Joined: 20-Jul-2005

I agree with Richard.

A tool of some kind that could tell the "Money-Men" that their money is well spent buying Rhapsody!

-------------------------
Jesper Gissel
Johnson Controls Denmark, Marine Controls
 29-Aug-2005 11:19


Charlie Lane

Posts: 18
Joined: 11-May-2005

Well, I have been looking in some depth at model metrics and have written a number of macros to collect them. Currently the macros are confidential to my company, but if there is significant interest I could enquire about a possible release.

For size metrics, it is relatively easy to count model elements such as classes, but the issue is more about what to do with the numbers. For example, we are keen to provide evidence that model-based development with Rhapsody is much more productive than coding by hand.
For hand-coded software there are counting tools that will generate counts of logical SLOC (for the sort of rules that define logical SLOC, see [url]http://en.wikipedia.org/wiki/Source_lines_of_code[/url] and its link to the SEI). When comparing counts from a Rhapsody model, the issue is to identify a valid (justifiable) comparison technique. Just running a SLOC counter on the Rhapsody-generated code is open to dispute, because we know that Rhapsody generates lines that would not appear in a hand-coded equivalent.
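As a rough sketch of the kind of rule set such counters apply (skip blank and comment lines, count statements rather than physical lines), here is a deliberately crude illustration; real logical-SLOC tools following the SEI-style rules are far more elaborate:

```python
def logical_sloc(source):
    """Very rough logical-SLOC count for C-like code: skip blank
    lines and pure comment lines, then count statements by their
    terminating semicolons plus control-flow headers. Crude by
    design -- real counters apply much stricter rules."""
    count = 0
    in_block_comment = False
    for line in source.splitlines():
        stripped = line.strip()
        if in_block_comment:
            if "*/" in stripped:
                in_block_comment = False
            continue
        if not stripped or stripped.startswith("//"):
            continue
        if stripped.startswith("/*"):
            if "*/" not in stripped:
                in_block_comment = True
            continue
        # one logical statement per ';' (so a for-header's two
        # semicolons are, crudely, counted), plus the header itself
        count += stripped.count(";")
        if stripped.split("(")[0].strip() in ("if", "for", "while", "switch"):
            count += 1
    return count

code = """
/* example */
int total = 0;            // declaration
for (int i = 0; i < n; i++) {
    total += i;
}
"""
print(logical_sloc(code))   # 5
```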
So somehow we would like to relate counts on model elements to what the equivalent in hand-coded code might be.
One approach is to consider the designer's tasks in creating the model elements. For example, creating an attribute in Rhapsody involves: a) creating the attribute name; b) writing a description with information such as units, range, etc.; c) defining the initial value. Compare this with the equivalent tasks in hand-written code: a) define the attribute in the .h file; b) write comments about the attribute; c) define the initial value in the class constructor(s).
I find that if I produce a size metric from a model along these lines (counting classes, methods etc) I get a value that is small compared with the generated SLOC (typically less than half). I can also create a requirements-level size metric (counting use cases, sequence diagrams, etc), but this is harder to compare with the generated SLOC because such model elements are not used directly to generate code.
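A size metric along these lines amounts to a weighted element count. The following sketch illustrates the shape of the calculation; the element kinds and weights are invented for illustration, and calibrating such weights against hand-coded equivalents is exactly the open question:

```python
# Hypothetical weights per model-element kind. These values are
# invented for illustration, not calibrated against real projects.
WEIGHTS = {"class": 10, "operation": 4, "attribute": 2, "statechart": 8}

def model_size(counts, weights=WEIGHTS):
    """Weighted sum of model-element counts; unknown kinds weigh 0."""
    return sum(weights.get(kind, 0) * n for kind, n in counts.items())

# Example counts for a small model, alongside a SLOC count taken
# from the generated code (both numbers are made up).
counts = {"class": 12, "operation": 80, "attribute": 45, "statechart": 6}
size = model_size(counts)
generated_sloc = 1900

print(size)                        # 578
print(round(size / generated_sloc, 2))   # 0.3 -- well under half
```

With these made-up weights the model-based size comes out around a third of the generated SLOC, which matches the "typically less than half" observation above; the dispute is whether that gap reflects Rhapsody's code-generation overhead or a miscalibrated metric.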
So the question is whether the metric is correctly small, or whether it should be multiplied by a factor to make it comparable with the generated SLOC. And, if I use a factor, have I lost any justifiable means of comparison with hand-coded software? Will the sceptics accuse me of cheating?

Another type of metric is complexity, as the original poster suggested. I count this separately from size, on the basis that a more complex implementation of the same thing is more error-prone. So I would give a higher complexity rating to one class with 20 methods than to two classes of 10 methods each. Bruce Douglass has proposed a complexity measure for statecharts, something like cyclomatic complexity, but according to a posting elsewhere on this community there is no VB code to do this yet. We do not yet have sufficient evidence to indicate any relationship between model complexity and error rate, though you might expect some sort of proportionality. I believe that Rhapsody has been developed in Rhapsody for some time now, and it would be interesting to know what error rate vs. model complexity I-Logix is achieving, if they feel they could release such information.
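One plausible reading of a cyclomatic-style measure for a single connected statechart treats states as graph nodes and transitions as edges, giving v(G) = E - N + 2 by analogy with McCabe's formula. This interpretation is an assumption for illustration, not Douglass's published definition:

```python
def statechart_complexity(num_states, num_transitions):
    """Cyclomatic-style complexity of one connected statechart:
    v(G) = E - N + 2, with states as nodes and transitions as
    edges. An assumed analogue of McCabe's control-flow formula,
    not Douglass's published measure."""
    return num_transitions - num_states + 2

# A flat 4-state chart with 6 transitions:
print(statechart_complexity(4, 6))   # 4

# A simple two-state toggle (on <-> off) is minimal:
print(statechart_complexity(2, 2))   # 2
```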

I would be interested in discussing some of the issues in metrics here, e.g. what to count, comparison with SLOC, consensus on valid metrics, how to use in estimation, etc.
FuseTalk Standard Edition v3.2 - © 1999-2009 FuseTalk Inc. All rights reserved.