2.3 Error Reduction Algorithm for Self-Calibration of Ultrasonic Flow Meters
In recent years, the use of artificial intelligence in flow metering has attracted researchers' attention. For example, neural networks and support vector regression algorithms have been applied to data from temporal and spatial ultrasonic level measurements of drilling fluid in an open channel to estimate the flow rate (Chhantyal et al. 2017). The Least Squares Error Reduction technique and neural network methods have been used for the self-calibration of ultrasonic water flow meters (Yazdanshenashad et al. 2018; Catak and Ergan 2019). However, none of these self-calibration exercises involves a transit-time ultrasonic flow meter in multiphase flow, such as a water-bentonite mixture flow.
Catak and Ergan (2019) reported using the least squares error method for the self-calibration of an ultrasonic water flow meter. Three common least-squares calibration methods were applied to data obtained from a DN-20 type ultrasonic flow meter, namely Linear Least Squares (LLS), Weighted Least Squares (WLS), and Piecewise Linear Least Squares (PLR). The results showed that PLR performed best in all cases, while WLS was better suited to higher flow rates. Both WLS and LLS were particularly inadequate at low flow rates. For example, the flow meter error at 10 L/h (0.167 L/min) was around 5-8%, and calibration improved it by only about 0.8-1.3% (Catak and Ergan 2019).
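To make the three calibration schemes concrete, the Python sketch below fits LLS, WLS, and PLR corrections to a small set of hypothetical meter readings. The data points, the 1/Q weighting, and the 2 L/min breakpoint are illustrative assumptions and are not the values used by Catak and Ergan (2019).

```python
import numpy as np

# Hypothetical calibration data: reference flow rates (L/min) and the
# corresponding raw readings from the meter under calibration.
q_ref   = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])        # reference (L/min)
q_meter = np.array([0.23, 0.54, 1.06, 2.05, 4.02, 7.95, 15.9])  # meter output (L/min)

# Linear Least Squares (LLS): a single straight-line correction q_ref ~ a*q_meter + b.
a_lls, b_lls = np.polyfit(q_meter, q_ref, 1)

# Weighted Least Squares (WLS): weight each residual by 1/q_ref so that the fit
# balances relative (percentage) errors across the flow range (assumed weighting).
a_wls, b_wls = np.polyfit(q_meter, q_ref, 1, w=1.0 / q_ref)

# Piecewise Linear Least Squares (PLR): fit separate lines below and above an
# assumed breakpoint of 2 L/min and apply whichever segment the reading falls in.
low = q_meter <= 2.0
seg_lo = np.polyfit(q_meter[low], q_ref[low], 1)
seg_hi = np.polyfit(q_meter[~low], q_ref[~low], 1)

def calibrate_plr(q):
    """Return the piecewise-corrected flow rate for a raw meter reading q."""
    return np.polyval(seg_lo, q) if q <= 2.0 else np.polyval(seg_hi, q)

# Corrected low-flow reading from PLR versus the single-line LLS correction.
print(calibrate_plr(0.3), a_lls * 0.3 + b_lls)
```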
In a recent report, Yazdanshenashad et al. (2018) used a Multi-Layer Perceptron Neural Network (MLPNN) model to calibrate an ultrasonic flow meter, achieving an error smaller than 1.5% over a measured flow range of 0.2 to 4 m³/h. However, the approach was aimed only at reducing systematic errors. The authors did not report any improvement in random errors, which would be revealed by a change in the standard deviation of the errors. Instead, they suggested reducing random errors by averaging a large number of readings. The main drawback of this approach is the loss of data resolution: averaging N independent readings reduces the random error only by a factor of √N, so averaging 1000 readings suppresses the random error by roughly a factor of 32 while reducing the data resolution by a factor of 1000.
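As a rough illustration of this type of calibration, the sketch below trains a small multi-layer perceptron (scikit-learn's MLPRegressor) to map synthetic raw readings with a flow-dependent bias back to reference flow rates. The data, noise level, and network size are assumptions made for illustration and do not reproduce the set-up of Yazdanshenashad et al. (2018); the final lines show why a static correction of this kind leaves the random error essentially untouched.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical calibration set: reference flow rates (m3/h) and raw meter
# readings with an assumed flow-dependent systematic bias plus random noise.
q_ref = rng.uniform(0.2, 4.0, 500)
q_raw = q_ref * (1.03 - 0.02 * q_ref) + rng.normal(0.0, 0.01, q_ref.size)

# A small MLP that maps raw readings to corrected flow rates; the layer sizes
# and other hyperparameters are illustrative only.
mlp = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
mlp.fit(q_raw.reshape(-1, 1), q_ref)
q_corr = mlp.predict(q_raw.reshape(-1, 1))

# The network removes most of the flow-dependent (systematic) bias, but the
# point-to-point scatter caused by the random noise remains in the residuals.
residuals = q_corr - q_ref
print(f"residual mean error: {residuals.mean():.4f} m3/h")   # systematic part
print(f"residual std dev:    {residuals.std():.4f} m3/h")    # random part

# Averaging N independent readings shrinks the remaining random error only by
# sqrt(N), while the effective data rate drops by the full factor N.
N = 1000
print(f"std dev after averaging {N} readings: {residuals.std() / np.sqrt(N):.5f} m3/h")
```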