CIS 307: Measuring Time

[For the use of dates see this program.]
Measuring time on the computer is nontrivial. Here are some of the things we may want to do with time: read the current time, timestamp events, and measure the duration of a computation.

In all these cases we have a problem of Precision and of Accuracy. By Precision we mean how finely we can specify time: in seconds, milliseconds, microseconds, or even nanoseconds. By Accuracy we mean how closely the measure represents reality. For example, we can have a clock that is precise down to the nanosecond but that gives a time that is off by 0.1 seconds, i.e. its accuracy is only 0.1 seconds. My wrist-watch is precise down to the second, but it is very inaccurate: it is usually about ten minutes off. Since the measurement of durations, i.e. of time intervals, is affected by intervening events (interrupts, context switches, availability of needed buffers, etc.), it is often inaccurate. We usually need to determine durations as the average of a number of repeated experiments.

Another problem is that "the current time" may mean a number of different things. It could be the standard Greenwich time. Or it could be the time in our time zone. Or it could be the time, measured in some unit (called a tick) since a standard initial time, called the epoch, usually set at 00:00:00 on 1 Jan 1970 at Greenwich.

We can use the time command to determine the time it takes to execute a command or program. But the precision of this command is low: depending on the version, tenths or hundredths of a second. We can request the system to suspend the execution of a program with the sleep command. Unfortunately the interval can only be specified in seconds.

We will consider some data types and commands that we can use for dealing with time. I highly recommend that you use these tools to measure accurately and precisely the time it takes to execute various system services and some of your code.

Here are some useful functions (gettimeofday and ctime_r) and a data type (timeval) available in Unix:


  #include <sys/time.h>

  int gettimeofday(
          struct timeval *tp,  /* structure where the time will be recorded */
          void *tzp);          /* the timezone; we use NULL */

  struct timeval {
        time_t  tv_sec;        /* seconds */
        int     tv_usec;       /* microseconds */
  };

   int ctime_r(const time_t *timer, char *buffer, int len);
	  timer:  number of seconds since the epoch (1 Jan 1970 at 00:00:00)
	  buffer: array where ctime_r will save the representation of
	      timer as a date string
	  len: size of buffer.
          Be sure to compile programs containing ctime_r using the
          library libc_r.a, i.e. use the compilation command modifier
          -lc_r
          (Note: the standard POSIX ctime_r takes only two arguments,
          the time_t pointer and the buffer; the three-argument form
          shown here is an older variant found on some systems.)
We can use these functions to implement three useful functions of our own:

#include <stdio.h>      /* sprintf */
#include <stdlib.h>     /* malloc */
#include <string.h>     /* strcpy */
#include <time.h>       /* time_t, ctime_r */
#include <sys/time.h>   /* gettimeofday, struct timeval */

#define TIMLEN 60

void timeval2string(struct timeval *ts, char buffer[], int len)
/* It reads the time from ts and puts it in buffer (of size len) */
/* as a string with microsecond precision */
{
     char year[5];
     ctime_r(&(ts->tv_sec), buffer, len);
     /* ctime_r terminates the time with a newline. We eliminate it. */
     /* We report the time down to the microsecond; the actual resolution */
     /* of the clock can be determined using the clock_getres function. */
     buffer[24] = '\0';
     strcpy(year, &buffer[20]);
     /* Insert the microseconds after the seconds field of the date string */
     sprintf(&buffer[19], ".%06ld %s", (long)(ts->tv_usec), year);
}

double currentTime(void)
  /* It returns the current time as a real number: the whole part
   * is the number of seconds, the fractional part is the number of
   * microseconds since the epoch (1 jan 1970 at 00:00:00)
   */
{
     struct timeval tval;
     gettimeofday(&tval, NULL);
     return (tval.tv_sec + tval.tv_usec/1000000.0);
}

char * currentDayTime(void)
  /* It returns the current time as a character string, down to
   * the microsecond. The caller is responsible for freeing the
   * returned buffer.
   */
{
     struct timeval tval;
     char *buffer = malloc(TIMLEN);

     if (buffer == NULL)
          return NULL;
     gettimeofday(&tval, NULL);
     timeval2string(&tval, buffer, TIMLEN);
     return buffer;
}
Here is a program where we determine the time it takes to fork a child, and here is some of the output from this program.

We can use these functions to determine the time it takes to execute a Unix command. This is done in the following program. Assuming that the corresponding executable has been placed in the file timeval, one can determine the time required to execute a shell command such as who with the call

    % timeval who
You may want to compare the values you get with timeval and time. Here is the timeval program:

Notice that we have called gettimeofday before and after the fragment we want to time, without printing out any information in between. This way the measurement does not include extraneous printing activity.
When doing measurements it is wise to do them a number of times and compute average and standard deviation. This way we can eliminate some of the effects of random errors.