compguy22
Mechanical
- Mar 14, 2022
- 57
Hello,
I am recording amps on the motors driving two rotary screw compressors, each of which discharges into a common header. Amps are recorded at each motor's electrical supply; the objective is to back-calculate, from the amps, each unit's flowrate profile over time. Summing the two flowrates will then show how the total flow demand (feeding downstream users) fluctuates.
The problem is that the two units do not sample/log the amps at the same rate, so the sampling timestamps are out of sync. For some reason, sampling is also not at regular intervals, so the data points are irregularly scattered in time.
One option I am evaluating is to take the machine with the lowest sampling rate (call it machine A) and use its sampling times as the reference time steps, so the lowest resolution becomes the limiting factor on the overall resolution. These reference times will not match one-to-one with those of the second unit (machine B), which has the higher sampling rate. To construct a match, I thought I could linearly interpolate machine B's profile onto machine A's timestamps (i.e., 1D linear interpolation over irregularly scattered data).
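For what it's worth, here is a minimal sketch of that resampling step in Python/NumPy. The timestamps and amp values are made-up placeholders; np.interp does exactly this kind of 1D linear interpolation and accepts irregularly spaced sample points as long as they are sorted in time:

```python
import numpy as np

# Hypothetical irregular samples (times in seconds, currents in amps).
# Machine A, the slower logger, supplies the reference timestamps.
t_a = np.array([0.0, 7.0, 15.0, 31.0, 44.0])
amps_a = np.array([52.0, 55.0, 51.0, 58.0, 54.0])

# Machine B logs faster but at its own irregular timestamps.
t_b = np.array([0.0, 3.0, 5.0, 9.0, 14.0, 20.0, 27.0, 33.0, 40.0, 45.0])
amps_b = np.array([48.0, 49.0, 50.0, 47.0, 46.0, 49.0, 51.0, 50.0, 48.0, 47.0])

# Linearly interpolate machine B's profile onto machine A's time base.
amps_b_on_a = np.interp(t_a, t_b, amps_b)

# Both series now share the same timestamps and can be summed
# (after each is converted from amps to flowrate).
total_amps = amps_a + amps_b_on_a
```

One caveat worth checking: np.interp holds the first/last value constant outside machine B's time range, so reference times that fall before B's first sample or after its last are extrapolated flat rather than flagged.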
Do you think this is a sound approach? Is there an option that would be more straightforward?
Does this somehow link to information theory? Should I post in another section of the forum?
Thank you