A Deep Reinforcement Learning Framework for Fast Charging of Li-Ion Batteries

Pozzi, Andrea;
2022

Abstract

One of the most crucial challenges faced by the Li-ion battery community is minimum-time charging without damaging the cells. This goal can be achieved by solving a large-scale constrained optimal control problem, which relies on accurate electrochemical models. However, these models are limited by their high computational cost, as well as by identifiability and observability issues. As an alternative, simple output-feedback algorithms can be employed, but their performance depends strictly on trial-and-error tuning. Moreover, dedicated techniques have to be adopted to handle safety constraints. To overcome these limitations, we propose an optimal-charging procedure based on deep reinforcement learning. In particular, we focus on a policy gradient method to cope with continuous sets of states and actions. First, we assume full state measurements from the Doyle-Fuller-Newman (DFN) model, which is projected onto a lower-dimensional feature space via principal component analysis. Subsequently, this assumption is removed, and only output measurements are considered as the agent's observations. Finally, we show the adaptability of the proposed policy to changes in the environment's parameters. The results are compared with other methodologies presented in the literature, such as the reference governor and proportional-integral-derivative approaches.
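The abstract mentions projecting the full DFN state onto a lower-dimensional feature space via principal component analysis before it is used by the agent. A minimal sketch of that projection step, using NumPy's SVD; the dimensions, sample counts, and function names below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def pca_project(states, n_components):
    """Project state samples onto their top principal components."""
    # Center the samples, then take the leading right singular vectors
    # as the principal directions of the state distribution.
    mean = states.mean(axis=0)
    centered = states - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]       # (n_components, state_dim)
    features = centered @ components.T   # low-dimensional features
    return features, components, mean

# Example: 200 samples of a hypothetical 50-dimensional discretized state,
# constructed so that only 4 directions carry most of the variance.
rng = np.random.default_rng(0)
basis = rng.normal(size=(4, 50))
states = rng.normal(size=(200, 4)) @ basis + 0.01 * rng.normal(size=(200, 50))
features, components, mean = pca_project(states, n_components=4)
```

The reduced `features` would then stand in for the full state as the agent's observation; the projection is fitted offline on simulated trajectories and kept fixed during training.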
2022
AREA01 - Mathematical and Computer Sciences
Journal article indexed in Web of Knowledge or Scopus
English
Journal article
English
Actor-critic
approximate dynamic programming (ADP)
electrochemical model (EM)
fast charging
reinforcement learning (RL)
Sector IINF-04/A - Automatic Control
Sector IIND-08/B - Electrical Energy Systems
Sector IINF-05/A - Information Processing Systems
Institute of Electrical and Electronics Engineers Inc.
8
2
2022
2770
2784
15
info:eu-repo/semantics/article
Park, S., Pozzi, A., Whitmeyer, M., Perez, H., Kandel, A., Kim, G., Choi, Y., Joe, W. T., Raimondo, D. M., Moura, S., A Deep Reinforcement Learning Framework for Fast Charging of Li-Ion Batteries, <<IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION>>, 2022; 8 (2): 2770-2784. [doi:10.1109/TTE.2022.3140316] [https://hdl.handle.net/10807/214127]
none
262
Park, S.; Pozzi, Andrea; Whitmeyer, M.; Perez, H.; Kandel, A.; Kim, G.; Choi, Y.; Joe, W. T.; Raimondo, D. M.; Moura, S.
10
art_per_29
03. Journal contribution::Journal article, Case note
Files for this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10807/214127
Citations
  • PMC: N/A
  • Scopus: 97
  • Web of Science: 83