Search Results

Now showing 1 - 2 of 2
  • Publication
    Maintenance policy analysis of the regenerative air heater system using factored POMDPs
    (Elsevier Ltd, 2022-03) Kıvanç, İpek; Özgür Ünlüakın, Demet; Bilgiç, Taner
    Maintenance optimization of multi-component systems is a difficult problem. Partially Observable Markov Decision Processes (POMDPs) are powerful tools for such problems under uncertainty in stochastic environments. In this study, the main POMDP solution approaches and solvers are surveyed. Then, selected POMDP solvers, which differ in their model representations and in their procedures for updating the value function, are compared on experimental models of varying state-space complexity. Furthermore, to show that factored representations are advantageous for modeling and solving the maintenance problem of multi-component systems whose components also exhibit stochastic dependencies, the maintenance problem of the one-line regenerative air heater system available in thermal power plants is modeled and solved with factored POMDPs. In-depth sensitivity analyses are performed on the obtained policy. The results show that factored POMDPs enable compact modeling, efficient policy generation, and practical policy analysis for the tackled problem. Moreover, the results motivate the use of factored POMDPs in the generation and analysis of maintenance policies for similar multi-component systems.
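    The core POMDP machinery the abstract refers to can be illustrated with a standard Bayesian belief update. The sketch below is not from the paper; the component, its transition and observation probabilities, and the "alarm" sensor are hypothetical example values chosen only to show how a maintenance decision maker tracks a hidden component state.

    ```python
    # Illustrative belief update for a tiny maintenance POMDP (not the paper's model):
    # one component with hidden states 0 = working, 1 = failed.
    # All probability values below are hypothetical.

    def belief_update(belief, T, O, obs):
        """Standard POMDP update: b'(s') ∝ O[s'][obs] * sum_s T[s][s'] * b(s)."""
        n = len(belief)
        new = [O[sp][obs] * sum(T[s][sp] * belief[s] for s in range(n))
               for sp in range(n)]
        z = sum(new)  # normalizing constant (probability of the observation)
        return [x / z for x in new]

    # Transition under a "do nothing" action: the component fails w.p. 0.1 per period.
    T = [[0.9, 0.1],
         [0.0, 1.0]]
    # Noisy sensor: reports "alarm" (obs = 1) w.p. 0.2 if working, 0.8 if failed.
    O = [[0.8, 0.2],
         [0.2, 0.8]]

    # Starting nearly certain the component works, then observing an alarm:
    b = belief_update([0.95, 0.05], T, O, obs=1)
    ```

    After the alarm, the belief mass on the failed state rises well above its prior, which is the information a replacement policy acts on each period.
    
    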
  • Publication
    Performance analysis of an aggregation and disaggregation solution procedure to obtain a maintenance plan for a partially observable multi-component system
    (Elsevier Sci Ltd, 2017-11) Özgür Ünlüakın, Demet; Bilgiç, Taner
    We analyze the performance of an aggregation and disaggregation procedure in producing optimal maintenance decisions for a multi-component system under partial observations over a finite horizon. The components deteriorate in time and their states are hidden from the decision maker. Nevertheless, it is possible to observe signals about the system status and to replace components in each period. The aim is to find a cost-effective replacement plan for the components in a given time horizon. The problem is formulated as a partially observable Markov decision process (POMDP). We aggregate states and actions in order to reduce the problem space and obtain an optimal aggregate policy, which we disaggregate by simulating it using dynamic Bayesian networks (DBNs). The procedure is statistically compared to an approximate POMDP solver that uses the full state-space information. Cases where aggregation performs relatively better are isolated, and it is shown that k-out-of-n systems belong to this class.
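    The state-space reduction that makes aggregation attractive for k-out-of-n systems can be sketched in a few lines. This is an assumed illustration, not the paper's procedure: in a k-out-of-n system with exchangeable binary components, the joint state can be aggregated to the count of working components, collapsing 2**n joint states to n + 1 aggregate states.

    ```python
    from itertools import product

    # Hypothetical illustration of state aggregation for a k-out-of-n system.
    # Each of n components is 1 (working) or 0 (failed); the system works
    # when at least k components work, so only the count matters.

    def aggregate(joint_state):
        """Map a joint component state to its aggregate state (working count)."""
        return sum(joint_state)

    n = 4
    joint_states = list(product([0, 1], repeat=n))        # 2**n = 16 joint states
    aggregate_states = {aggregate(s) for s in joint_states}  # n + 1 = 5 states

    k = 3
    system_works = {s: aggregate(s) >= k for s in joint_states}
    ```

    The exponential-to-linear shrinkage of the state space is one plausible reason the aggregate policy loses little for this system class, consistent with the abstract's finding.
    
    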