Aggregations (Derived channels menu) explained

Q: How do Aggregations work on the Data Logger?

A: There are eight Aggregation channels available. Each channel has its own memory space for 600 samples.

The “Input parameter”, “Input type” and “Aggregation period” have to be configured in the menu.

On every Data Logger “Sample interval”, a sample is taken and stored, and the outcome is recalculated.

When a period of 24 hours is chosen with a Sample interval of 1 minute, 1440 samples are needed for the Average. How does it work? There is no “time variable” available for user calculations, and there is no way to read the value at a fixed time (like 23:59).
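
As a rough check of the arithmetic: the number of samples an Aggregation period needs is simply the period divided by the Sample interval. The following minimal Python sketch only illustrates that calculation; the function name is ours, not part of the Data Logger firmware.

```python
# Illustrative helper only, not part of the Data Logger firmware.
def samples_needed(aggregation_period_s: int, sample_interval_s: int) -> int:
    """Number of samples one Aggregation period requires."""
    return aggregation_period_s // sample_interval_s

print(samples_needed(24 * 3600, 60))  # 24 hours at a 1-minute interval -> 1440
print(samples_needed(10 * 60, 1))     # 10 minutes at a 1-second interval -> 600
```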

During the Aggregation period, the Average “builds up” (a rolling average). After a Data Logger reboot or a configuration change, it takes a full Aggregation period before the Average is up to date. Is this a problem? No: from that point on, the average value stays up to date (as long as the Data Logger keeps running).
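
To illustrate the idea, here is a Python sketch (not the Data Logger's firmware code) of a rolling average that keeps at most one Aggregation period of samples and, until that many samples have been collected, averages whatever it has seen so far. The class and parameter names are illustrative.

```python
from collections import deque

# Sketch only: a rolling average that "builds up" after a restart.
# samples_per_period = Aggregation period / Sample interval.
class RollingAverage:
    def __init__(self, samples_per_period: int):
        # A reboot or configuration change starts again from an empty buffer.
        self.buffer = deque(maxlen=samples_per_period)

    def add_sample(self, value: float) -> float:
        self.buffer.append(value)  # once full, the oldest sample drops out
        return sum(self.buffer) / len(self.buffer)
```

Until the buffer holds a full period of samples, the result covers only what has been seen so far; once it is full, every new sample replaces the oldest one and the average stays current.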

Sample usage:

There are two situations:

  • The number of samples is less than or equal to 600 during the Aggregation period.

Example: Sample interval = 1 sec, Aggregation period = 10 minutes (= 600 sec). 600 samples are needed, which fits into the buffer.

There is enough free memory space for all samples of the Aggregation period.

  • The number of samples is more than 600 during the Aggregation period.

In this situation, the 600 samples are used as a “ring buffer” (first-in, first-out: the newest sample overwrites the oldest).

Example: Sample interval = 1 sec, Aggregation period = 24 hours (= 86400 sec). 86400 samples are needed, which does not fit into the 600-sample buffer. The average is recalculated every second, using the stored average value and the last 600 samples. After 86400 samples, the value covers the chosen “24 hours”. From that point on, the average is up to date and is updated every second.
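
The exact formula the firmware uses to combine the stored average with the 600-sample buffer is not documented here. The Python sketch below shows one plausible way (all class and variable names are illustrative) to approximate a 24-hour average with only 600 stored samples: each time the buffer fills, its mean is folded into a list of stored chunk averages (86400 / 600 = 144 chunks per 24 hours), and the output combines those stored averages with the partially filled buffer.

```python
# Sketch only: approximating a long average with a 600-sample buffer.
# 86400 samples / 600 per buffer = 144 stored chunk averages for 24 hours.
class ApproxLongAverage:
    def __init__(self, samples_per_period: int, buffer_size: int = 600):
        self.buffer_size = buffer_size
        self.buffer: list[float] = []        # the 600-sample buffer
        self.max_chunks = samples_per_period // buffer_size
        self.chunk_means: list[float] = []   # stored averages, one per full buffer

    def add_sample(self, value: float) -> float:
        self.buffer.append(value)
        if len(self.buffer) == self.buffer_size:
            # Fold the full buffer into the stored averages; keep only enough
            # chunks to cover one Aggregation period (the oldest chunk is dropped).
            self.chunk_means.append(sum(self.buffer) / self.buffer_size)
            self.chunk_means = self.chunk_means[-self.max_chunks:]
            self.buffer = []
        # Recalculated on every sample: stored chunk averages + partial buffer.
        total = sum(self.chunk_means) * self.buffer_size + sum(self.buffer)
        count = len(self.chunk_means) * self.buffer_size + len(self.buffer)
        return total / count
```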
