SCADA-Based Condition Monitoring
Joining advanced analytic techniques with turbine engineering
Michael Wilkinson, EWEA 2013, Vienna
SCADA-Based Condition Monitoring
What is it?
- A failure-detection algorithm that uses existing SCADA data
- Uses an established relationship between SCADA signals to detect when a component is operating abnormally
- Compares suspected failures against a database of known issues to determine the likelihood of an emerging problem
What is it not?
- High-frequency vibration monitoring
- An automatic algorithm
SCADA-Based Condition Monitoring
What signals are available?
Monitored components: main bearing, hub and pitch system, gearbox, generator, power converter, transformer
Available signals:
- Winding temperatures
- Main bearing temperature
- Gearbox bearing temperatures
- Gearbox oil sump temperature
- Generator bearing temperatures
- Generator rotational speed
- Gate temperatures
- Phase voltages and currents
- Nacelle internal ambient temperature
- Cooling system temperatures
- External ambient temperature
- Pitch angle
- Rotor rotational speed
- Exported power
- Nacelle anemometer wind speed
- Yaw angle
Comparison of Methods: Temperature Trending
- Simple method
- Readily applied to many datasets
- Low reliability during intermittent or changing operational modes
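To illustrate the trending approach (this sketch is not from the presentation), a rolling mean of a component-temperature signal can be compared against a long-term healthy baseline, alarming on a fixed offset. All data, window sizes and thresholds here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 10-minute bearing-temperature readings; a slow
# overheating drift is injected in the final third of the series.
temp = 60.0 + rng.normal(0.0, 1.5, 600)
temp[400:] += np.linspace(0.0, 6.0, 200)   # emerging fault

baseline = temp[:300].mean()               # long-term healthy average
rolling = np.convolve(temp, np.ones(36) / 36, mode="valid")  # ~6 h rolling mean
alarms = np.flatnonzero(rolling - baseline > 3.0)  # fixed 3 deg C threshold
print(alarms.size, alarms.min())  # alarms appear only after the drift begins
```

The weakness the slide notes is visible in this design: a sustained change of operating mode shifts the temperature just as a fault would, so a fixed threshold against a static baseline produces false alarms during intermittent operation.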
Comparison of Methods: Artificial Neural Networks
- Learning algorithm used to reveal patterns in data or model complex relationships between variables
- More sensitive to 'abnormal' behaviour
- Unable to identify the nature of the operational issue
- Results difficult to interpret
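The comparison table later in the deck labels the ANN method "SOM" (self-organizing map). Purely as an illustrative sketch, with synthetic standardized operating-state vectors and an invented quantization-error criterion, a SOM-based novelty detector might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical standardized operating-state vectors (e.g. wind speed,
# power, bearing temperature) from a healthy period.
X = rng.normal(0.0, 1.0, (500, 3))

# Train a small 5x5 self-organizing map with simple online updates.
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
W = rng.normal(0.0, 1.0, (25, 3))          # prototype vectors
n_steps = 4 * len(X)
for t, x in enumerate(np.tile(X, (4, 1))):
    frac = 1.0 - t / n_steps
    lr = 0.5 * frac                        # decaying learning rate
    sigma = max(0.1, 2.0 * frac)           # shrinking neighbourhood radius
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best matching unit
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)

def qe(x):
    """Quantization error: distance from x to its nearest prototype."""
    return np.sqrt(((W - x) ** 2).sum(axis=1).min())

healthy_qe = np.array([qe(x) for x in X])
abnormal = np.array([5.0, 5.0, 5.0])       # far outside the healthy cloud
print(qe(abnormal) > np.percentile(healthy_qe, 99))  # abnormal state stands out
```

This also shows why the method's results are hard to interpret: a large quantization error says the state is abnormal, but not which component or cause is responsible.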
Comparison of Methods: Physical Model
Energy balance for a wind turbine drivetrain component, monitored through the SCADA system:
- Energy input: model using nws³ (nacelle wind speed cubed) or exported power; include ambient temperature and pressure if available
- Frictional losses: dependent on shaft speed (use rotor speed or generator speed in the model)
- Heat loss to surroundings and to the cooling system: depends on nacelle and external temperature and cooling system duty
Model inputs: nws³, power, rotor speed, external temperature, cooling system temperature
Model output: component temperature
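The slide lists the model's inputs and output but not its functional form. A minimal sketch, assuming a linear steady-state model fitted over a healthy period, with entirely synthetic data and an invented 3-sigma rolling-residual alarm rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely synthetic SCADA history, for illustration only.
n = 1000
wind = rng.uniform(4.0, 14.0, n)                   # nacelle wind speed (m/s)
power = 0.4 * wind**3 + rng.normal(0.0, 5.0, n)    # exported power (toy units)
rotor = 10.0 + wind + rng.normal(0.0, 0.5, n)      # rotor speed (rpm)
ext_t = rng.uniform(-5.0, 25.0, n)                 # external temperature (deg C)
# "True" component temperature: rises with losses, tracks ambient.
comp_t = 30.0 + 0.02 * power + 0.5 * rotor + 0.6 * ext_t + rng.normal(0.0, 1.0, n)
comp_t[850:] += np.linspace(0.0, 8.0, 150)         # injected overheating fault

# Fit a linear steady-state model on the healthy training period.
X = np.column_stack([wind**3, power, rotor, ext_t, np.ones(n)])
coef, *_ = np.linalg.lstsq(X[:800], comp_t[:800], rcond=None)

# Residual T_actual - T_modelled is the failure indicator.
resid = comp_t - X @ coef
sigma = resid[:800].std()
rolling = np.convolve(resid, np.ones(50) / 50, mode="valid")
alarms = np.flatnonzero(np.abs(rolling) > 3 * sigma)
print(alarms.size, alarms.min())  # alarms cluster after the fault is injected
```

Because the model accounts for wind, load and ambient conditions, the residual stays near zero through normal operational and environmental fluctuations and only grows when the component itself deviates, which is the distinguishing strength scored in the comparison table.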
Comparison of Methods: Conclusions
Each method is scored from 1 to 3 per criterion (higher is better); SOM = self-organizing map, the artificial neural network method.

Criteria                                                            Signal Trending   SOM   Physical Model
Time and effort to initiate a new model for each turbine analysis          3           2          1
Ability to incorporate a wide range of model inputs                        1           3          2
Ease of identifying impending component failure                            2           1          3
Ability to distinguish component deterioration from operational
  or environmental fluctuations                                            1           2          3
Ability to detect impending failures in advance                            2           1          3
Total score                                                                9           9         12
Validation Study
- Series of blind tests conducted on historical data
- Engineer given no indication of known failures
- Suspected impending failures documented
- 472 turbine-years of data considered
- Results compared against service records and site management reports

Site   Location   Operational Data Set (Years)
A      Italy      4.8
B      Ireland    6.0
C      Ireland    6.5
D      UK         7.0
E      UK         2.5
Validation Study: Example Results
Both charts show the residual T_ACTUAL - T_MODELLED for different signals on the same turbine:
Chart 1: modelled temperature of the rotor-side high-speed bearing; model inputs: generator speed, power, nacelle temperature; failed component: gearbox; advance notice: 9 months
Chart 2: modelled temperature of the gen-side intermediate-speed bearing; model inputs: generator speed, power, nacelle temperature; failed component: gearbox; advance notice: 7 months
Validation Study: Results

Site   Location   Operational Data   Predicted   Actual     True         False        Score
                  Set (Years)        Failures    Failures   Detections   Detections   (True / False)
A      Italy      4.8                7           8          7            0            88% / 0%
B      Ireland    6.0                7           8          6            1            75% / 13%
C      Ireland    6.5                1           4          1            0            25% / 0%
D      UK         7.0                5           6          5            0            83% / 0%
E      UK         2.5                7           10         5            2            50% / 20%

Two thirds of failures detected in advance
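The deck does not state the denominators behind the percentage scores, but the figures are consistent with dividing both true and false detections by the number of actual failures at each site, rounded half up. That reconstruction can be checked:

```python
import math

# (actual failures, true detections, false detections) per site,
# taken from the validation-study results table.
sites = {
    "A": (8, 7, 0),
    "B": (8, 6, 1),
    "C": (4, 1, 0),
    "D": (6, 5, 0),
    "E": (10, 5, 2),
}

def pct(x):
    return math.floor(x + 0.5)  # round half up, matching the slide's figures

for site, (actual, tp, fp) in sites.items():
    print(site, f"{pct(100 * tp / actual)}% / {pct(100 * fp / actual)}%")

total_actual = sum(a for a, _, _ in sites.values())
total_true = sum(t for _, t, _ in sites.values())
print(f"overall: {total_true}/{total_actual} detected")  # 24/36, i.e. two thirds
```

The overall ratio 24/36 is where the "two thirds of failures detected in advance" headline comes from.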
Validation Study: Results Majority of failures detected 4 to 12 months in advance
Summary & Conclusions
Comparison of methods:
- Temperature trending, physical model and artificial neural network methods compared
- Physical model identified as the most promising
Validation study performed:
- Two thirds of failures detected in advance
- Majority of failures detected 4 to 12 months in advance
Overall conclusions:
- Quick implementation: no additional monitoring hardware required
- Pro-active maintenance/repair activities can be scheduled and planned
- Targeted inspections possible
Questions or comments?
Michael Wilkinson
GL Garrad Hassan
+44 117 972 9900
michael.wilkinson@gl-garradhassan.com