Appendix T - Advanced Modeling Functions

The following summarizes two sets of advanced thread-control functions. They are alternatives to CSIM's standard TRIGGER_THREAD, RECEIVE, WAIT, and DELAY functions. They offer advantages in some situations, but require extra care to use. Both sets of advanced functions reduce the number of active threads used by your models. (Some platforms cannot support as many threads as others, and, depending on available memory, the thread count may otherwise limit the scalability of models requiring many threads. Simulations containing over one million box-entities have been run on a 128-MB PC by using these functions in otherwise normal CSIM models, with fewer than 10 peak active threads.)

The first set of advanced functions, which are trigger-based (TRIGGER_RECEIVE_ANY, SET_WAITING_TRIGGER, and SET_WAITING_TRIGGER_WITH_TIMEOUT), are more flexible to use than the second set, but may slow some simulations down slightly. The second set, which are call-based (CALL_THREAD, CALL_ON_RECEIVE_ALL_PORTS, CALL_ON_RECEIVE_ANY, SET_WAITING_CALL, and SET_WAITING_CALL_WITH_TIMEOUT), can greatly accelerate simulation speeds while reducing thread-counts, but are more restrictive in usage.

Background: CSIM's normal thread-control functions allow a descriptive paradigm known as procedural description. Delay, Wait, or Receive statements, also called blocking statements, may be placed within larger procedural blocks of code. They are treated no differently from any other statements. Multiple blocking statements may occur within deeply nested for or while loops, or conditional if blocks. This enables very natural model behavior descriptions in the form of logical procedures, where the procedure boundaries correspond to logical processes, not necessarily just to the points where blocking statements occur. Few simulation tools or environments support procedural description paradigms, other than VHDL, Verilog, and CSIM. More typically, discrete event simulators support only a state-oriented paradigm, which is actually a subset of the procedural paradigm. (CSIM can be used in either the procedural or the state-transition paradigm.) While elegant and efficient from a descriptive standpoint, blocking statements within procedural descriptions consume threads while waiting for an activation event or time-out.
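
For illustration, the hypothetical device below (Burst_Relay; not part of any library) shows the procedural style: blocking RECEIVE and DELAY statements sit inside nested while/for loops and an if block, just like any other statements, while the surrounding loop state is preserved on the thread's stack. It is a sketch only, written with the same SEND/RECEIVE/list_in_ports usage as the examples later in this appendix.

 /* Hypothetical sketch of the procedural paradigm: blocking calls appear
    freely inside nested loops and conditionals. */
 DEFINE_DEVICE:  Burst_Relay
  DEFINE_THREAD: start_up
   {
    int *message, len, k;
    int numports;
    char **portlist;

    portlist = list_in_ports( &numports );	/* Get the in-port names. */
    while (1)
     {
      RECEIVE( portlist, &message, &len );	/* Block until a message arrives. */
      for (k = 0; k < 3; k++)
       {
        if (k > 0)
          DELAY( 1.0 );				/* Block again between repeated copies. */
        SEND( outport, message, len );		/* Forward a copy of the message. */
       }
     }
   }
  END_DEFINE_THREAD.
 END_DEFINE_DEVICE.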

The advanced thread-control functions enable threads to be relinquished while inactive, and to be re-established only when activated. This requires that the blocking thread: (1) replace the traditional blocking statement (DELAY, RECEIVE, WAIT) with an advanced trigger-based or call-based function, (2) name a thread to be activated on resumption, and (3) exit immediately. Any state-values must be preserved in shared variables so that the resuming thread can pick up where the exiting thread left off. As you can see, this mode is ideal for state-transition based modeling. Many models can be converted to this mode by breaking threads into pieces, but this may not be very practical for deeply nested procedural threads.   Advice:  Use where appropriate or needed.
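
As a minimal sketch of that three-step recipe (device and thread names are hypothetical; the CALL_THREAD signature is the one used in Example 1 below):

 /* Minimal sketch of the conversion recipe. */
 DEFINE_DEVICE:  One_Shot
  double t_wake;	/* State preserved in a shared variable for the resuming thread. */

  DEFINE_THREAD: start_up
   {
    t_wake = CSIM_TIME + 7.0;
    CALL_THREAD( resume_point, 7.0, 0 );  /* Steps (1)+(2): replaces DELAY(7.0) and names the resuming thread. */
   }					  /* Step (3): exit immediately -- no thread remains blocked. */
  END_DEFINE_THREAD.

  DEFINE_THREAD: resume_point
   {
    printf("Resumed at %g (expected %g)\n", CSIM_TIME, t_wake);
   }
  END_DEFINE_THREAD.
 END_DEFINE_DEVICE.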

In either case, using these new routines will present some coding inconveniences.
They are advanced options;   not cure-alls.
Please heed the cautionary notice below.


1. Trigger-based Advanced Thread-Control Functions

The following functions are safe to use in any model, but may be inconvenient where deep nesting impedes breaking up the thread. These functions do not block the thread that calls them. They execute in zero simulation-time, and cause a new thread to be started only when the specified condition occurs. These trigger-based functions can be used to trigger threads that contain blocking calls themselves. The only real advantage of these functions is to reduce standing thread-counts. For example, these calls increase scalability by enabling large multi-entity systems to be simulated with as few as one (1) active thread, regardless of the number of entities. However, these functions can reduce simulation speed, because they require threads to be destroyed and created, instead of just blocked and resumed. In most operating environments, thread creation/destruction consumes approximately twice as much time as merely blocking/resuming a standing thread. If most CPU time is spent within your model-code, this extra overhead may cause very little slowdown; otherwise, it could be more substantial.
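
A short sketch of the trigger-based idiom follows (device and thread names are hypothetical; the TRIGGER_RECEIVE_ANY signature is the one used by Relay2 in Example 2 below, which gives a complete model). The start_up thread arms the trigger and exits, so no thread stands blocked while waiting; and because the triggered routine runs as a true thread, it may itself contain blocking calls such as DELAY or RECEIVE.

 /* Hypothetical sketch of the trigger-based idiom. */
 DEFINE_DEVICE:  Sink
  char **portlist;	/* Shared between threads. */

  DEFINE_THREAD: start_up
   {
    int numports;

    portlist = list_in_ports( &numports );
    TRIGGER_RECEIVE_ANY( consume, 0, portlist );	/* Start 'consume' when any message arrives. */
   }
  END_DEFINE_THREAD.

  DEFINE_THREAD: consume
   {
    int *message, len;

    RECEIVE( portlist, &message, &len );
    DELAY( 1.0 );				/* A triggered thread is a true thread: it may block. */
    TRIGGER_RECEIVE_ANY( consume, 0, portlist );	/* Re-arm for the next message, then exit. */
   }
  END_DEFINE_THREAD.
 END_DEFINE_DEVICE.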


2. Call-based Advanced Thread-Control Functions

The following functions are more restrictive than the trigger-based functions above.
          THESE CAN ONLY BE USED WHERE THE CALLED THREAD CANNOT BLOCK !!!
However, in such cases you not only save threads, you also improve run-time. Another benefit of the call-based methods: the called thread receives virtually unlimited stack space.

These call-based routines are similar to the trigger-based methods above, except that, instead of the named thread-routine being started as a true thread, it is called as a subroutine directly from the simulator's main kernel when the activation event arises. There is no thread creation/destruction overhead. There is virtually infinite stack space available. The thread routine behaves in the normal way, like any other model thread code. It has access to the shared variables of the box instance under which it is running, like any other thread. But it cannot block (WAIT, DELAY, RECEIVE). Instead, it can call any of the trigger-based or call-based functions to accomplish the same effect. The called thread-routine must perform its actions and exit immediately. (No other threads can run until the called routine finishes.)
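
The following hypothetical sketch (names are illustrative) highlights the restriction: where a called thread needs a pause, it cannot simply call DELAY; instead it re-schedules itself with CALL_THREAD and returns. Relay3 in Example 2 below shows the same idea combined with message passing.

 /* Hypothetical sketch of the call-based restriction. */
 DEFINE_DEVICE:  Blinker
  int lamp_on;		/* Shared state variables. */
  int flips;

  DEFINE_THREAD: start_up
   {
    lamp_on = 0;
    flips = 0;
    CALL_THREAD( toggle, 1.0, 0 );	/* First toggle 1.0 time-units from now. */
   }
  END_DEFINE_THREAD.

  DEFINE_THREAD: toggle
   {
    lamp_on = !lamp_on;
    flips = flips + 1;
    printf("Lamp %s at %g\n", (lamp_on) ? "ON" : "OFF", CSIM_TIME);
    /* DELAY( 1.0 );  <-- would be ILLEGAL here: a called thread cannot block. */
    if (flips < 6)
      CALL_THREAD( toggle, 1.0, 0 );	/* Re-schedule myself instead, then exit immediately. */
   }
  END_DEFINE_THREAD.
 END_DEFINE_DEVICE.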


PLEASE NOTE THE LIMITATIONS DESCRIBED ABOVE !!!
These new routines may not be convenient, nor even usable, in all cases! However, in situations where they apply, they can reduce threads and/or speed up simulations. Basically, TRIGGER_RECEIVE_ANY and SET_WAITING_TRIGGER are safe to use anywhere, except where deep nesting impedes breaking up the thread; however, their only advantage is to reduce thread-counts, and they may slow simulations. CALL_ON_RECEIVE_ANY, CALL_THREAD, and SET_WAITING_CALL are more restrictive, in that THEY CAN ONLY BE USED WHERE THE CALLED THREAD CANNOT BLOCK !!! However, in such cases you save threads and also improve run-time! (Another added benefit of the CALL_xx methods: virtually infinite stack for the called routine.) In either case, using these new routines will present some coding inconveniences. They are options; not cure-alls.


Example 1:

Notice how the procedural version (Spinner1) is simpler and more intuitive to understand than the state-based version (Spinner2). Sometimes, however, the inconvenience is justifiable for efficiency or scalability needs.

 /* Traditional procedural model. */
 DEFINE_DEVICE:  Spinner1
  DEFINE_THREAD: start_up
   {
    int counter=0;

    while (counter < 10)
     {
      DELAY( 10.0 + (double)counter );
      printf("%d: The time is now %g\n", counter, CSIM_TIME);
      counter = counter + 1;
     }
   }
  END_DEFINE_THREAD.
 END_DEFINE_DEVICE.



 /* Model using "zero-thread" call-based (or state-based) method. */
 DEFINE_DEVICE:  Spinner2
  int counter;	 /* Declare persistent state variable, shared between threads. */

  DEFINE_THREAD: start_up
   {
    counter = 0;  /* Initialize the state variable. */
    CALL_THREAD( state2, 10.0 + (double)counter, 0 );	/* Schedule state2 to activate in the future. */
   }
  END_DEFINE_THREAD.

  DEFINE_THREAD: state2
   {
    printf("%d: The time is now %g\n", counter, CSIM_TIME);
    counter = counter + 1;
    if (counter < 10)
      CALL_THREAD( state2, 10.0 + (double)counter, 0 );	/* Re-schedule myself to activate in the future. */
   }
  END_DEFINE_THREAD.
 END_DEFINE_DEVICE.


(Note that Spinner2 passed counter to state2 as a shared variable. Alternatively, it could have been passed as a THREAD_VAR.)
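
For instance, a hypothetical variant (Spinner3 below, not part of the examples above) could carry the count in storage passed as the thread-var, mirroring the way Relay3 in Example 2 passes its message pointer and recovers it with the (int *)THREAD_VAR cast. This is only a sketch under that assumption; the shared-variable form of Spinner2 is simpler.

 /* Hypothetical variant of Spinner2: the count rides in the thread-var
    rather than in a shared variable. */
 DEFINE_DEVICE:  Spinner3
  DEFINE_THREAD: start_up
   {
    int *count_box;

    count_box = (int *) malloc( sizeof(int) );	/* Heap storage carries the state between activations. */
    *count_box = 0;
    CALL_THREAD( state2, 10.0, count_box );	/* Pass the count to state2 as the thread-var. */
   }
  END_DEFINE_THREAD.

  DEFINE_THREAD: state2
   {
    int *count_box;

    count_box = (int *)THREAD_VAR;		/* Recover the count from the thread-var. */
    printf("%d: The time is now %g\n", *count_box, CSIM_TIME);
    *count_box = *count_box + 1;
    if (*count_box < 10)
      CALL_THREAD( state2, 10.0 + (double)(*count_box), count_box );	/* Pass it forward again. */
    else
      free( count_box );
   }
  END_DEFINE_THREAD.
 END_DEFINE_DEVICE.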


Example 2:

Again, notice how the procedural version (Relay1) is inherently simpler and more intuitive to understand than the state-based versions (Relay2 and Relay3).

 /* Traditional procedural model. */
 DEFINE_DEVICE:  Relay1
  DEFINE_THREAD: start_up
   {
    int *message, len;
    int numports;
    char **portlist;

    portlist = list_in_ports( &numports );	/* Get the in-port names. */
    while (1)
     {
	RECEIVE( portlist, &message, &len );	/* Wait for and receive incoming messages. */
	DELAY( 2.5 );
        SEND( outport, message, len );		/* Send message out. */
     }						/* Loop back to wait for next message. */
   }
  END_DEFINE_THREAD.
 END_DEFINE_DEVICE.



 /* Model using "zero-thread" TRIGGER-based (or state-based) method. */
 DEFINE_DEVICE:  Relay2
  char **portlist;  /* Declare persistent state variable, shared between threads. */

  DEFINE_THREAD: start_up
   {
    int numports;

    portlist = list_in_ports( &numports );      /* Get the in-port names. */
    TRIGGER_RECEIVE_ANY( state2, 0, portlist ); /* Wait for an incoming message. */
   }
  END_DEFINE_THREAD.

  DEFINE_THREAD: state2
   {
    int *message, len;

    RECEIVE( portlist, &message, &len );    	/* Receive the incoming message. */
    DELAY( 2.5 );
    SEND( outport, message, len );          	/* Send message out. */
    TRIGGER_RECEIVE_ANY( state2, 0, portlist ); /* Wait for next incoming message. */
   }
  END_DEFINE_THREAD.
 END_DEFINE_DEVICE.




 /* Model using "zero-thread" CALL-based (or state-based) methods. */
 DEFINE_DEVICE:  Relay3
  char **portlist;  /* Declare persistent state variable, shared between threads. */
  int len;

  DEFINE_THREAD: start_up
   {
    int numports;

    portlist = list_in_ports( &numports );      /* Get the in-port names. */
    CALL_ON_RECEIVE_ANY( state2, 0, portlist ); /* Wait for an incoming message. */
   }
  END_DEFINE_THREAD.

  DEFINE_THREAD: state2
   {
    int *message;

    RECEIVE( portlist, &message, &len );    	/* Receive the incoming message. */
    CALL_THREAD( state3, 2.5, message );	/* Delay state3 for 2.5 units. */
   }
  END_DEFINE_THREAD.

  DEFINE_THREAD: state3
   {
    int *message;	/* ('len' is the shared device-level variable set in state2.) */

    message = (int *)THREAD_VAR;
    SEND( outport, message, len );          	/* Send message out. */
    CALL_ON_RECEIVE_ANY( state2, 0, portlist ); /* Wait for next incoming message. */
   }
  END_DEFINE_THREAD.
 END_DEFINE_DEVICE.



(Note how the Relay2 and Relay3 models are similar, except that Relay3 needed an extra state (state3) because call-based threads cannot have internal delays. The delay was accomplished by scheduling state3 2.5 time-units in the future via CALL_THREAD. Notice how state values were passed to state3 by a combination of the thread-var (message) and a shared variable (len).)