XinzeLee/PANN
PANN: Physics-in-Architecture Neural Network for Power Electronics Modeling


  • Reference 1:
    X. Li et al., "Temporal Modeling for Power Converters With Physics-in-Architecture Recurrent Neural Network," in IEEE Transactions on Industrial Electronics, vol. 71, no. 11, pp. 14111-14123, Nov. 2024.
  • Reference 2:
    X. Li, F. Lin, X. Zhang, H. Ma and F. Blaabjerg, "Data-Light Physics-Informed Modeling for the Modulation Optimization of a Dual-Active-Bridge Converter," in IEEE Transactions on Power Electronics, vol. 39, no. 7, pp. 8770-8785, July 2024.
  • Reference 3:
    F. Lin, X. Li, X. Zhang and H. Ma, "STAR: One-Stop Optimization for Dual-Active-Bridge Converter With Robustness to Operational Diversity," in IEEE Journal of Emerging and Selected Topics in Power Electronics, vol. 12, no. 3, pp. 2758-2773, June 2024.
  • Reference 4:
    X. Li et al., "A Generic Modeling Approach for Dual-Active-Bridge Converter Family via Topology Transferrable Networks," in IEEE Transactions on Industrial Electronics.


Description

I. PANN and its Structure

PANN, the physics-in-architecture neural network, is a physics-informed neural network designed specifically for modeling power electronics systems. It embeds the discretized state-space circuit equations into a physics-crafted recurrent structure; this inductive bias exposes the invariants of the data directly. The neural architecture of PANN is shown in Fig. 1.
Fig. 1. Structure of PANN.
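To make the idea concrete, here is a minimal sketch of one PANN recurrent cell. It assumes, purely for illustration, a single-state RL circuit whose state equation is di/dt = (u - R*i)/L, discretized by forward Euler; the function name, parameter values, and circuit are hypothetical, not taken from the repository.

```python
def pann_cell(x, u, params, dt=1e-6):
    """One PANN recurrent step for an illustrative RL circuit.

    The discretized state-space equation di/dt = (u - R*i)/L is wired
    directly into the cell, so the physical parameters (L, R) play the
    role of the network's trainable weights.
    x: current state (inductor current), u: switched input voltage."""
    L, R = params
    dx = (u - R * x) / L   # continuous-time state equation
    return x + dt * dx     # forward-Euler discretized update
```

Because the update rule is the circuit equation itself, the cell has only as many parameters as the circuit has physical constants, which is what gives PANN its physical consistency.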

II. PANN Inference

PANN inference proceeds recurrently: the next state variables are predicted from the precomputed input variables and the state variables inferred in the previous iteration. The PANN inference unfolded over time is shown in Fig. 2.
Fig. 2. PANN inference unfolded over time.
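The unfolding in Fig. 2 amounts to feeding each precomputed input and the previously inferred state back into the same physics cell. The following sketch assumes the same illustrative RL circuit as above (the function and parameters are hypothetical):

```python
import numpy as np

def pann_infer(x0, u_seq, params, dt=1e-6):
    """Unrolled PANN inference: apply the same physics-embedded cell at
    every time step, reusing the state inferred at the previous step."""
    L, R = params
    x, traj = x0, []
    for u in u_seq:                     # precomputed input sequence
        x = x + dt * (u - R * x) / L    # one recurrent physics step
        traj.append(x)
    return np.array(traj)               # inferred state trajectory
```

A single cell shared across all time steps is exactly the structure of a recurrent neural network, which is why PANN can be trained with standard backpropagation-through-time machinery.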

III. PANN's Explainability in Power Electronics

The PANN model is explainable in power electronics terms, revealing circuit physical principles, switching behaviors, commutation loops, etc. The insights discovered by PANN for an exemplary non-resonant Dual-Active-Bridge converter are shown in Fig. 3.
Fig. 3. PANN's physical explainability.

IV. PANN Training

The training workflow of PANN is shown in Fig. 4, and one training epoch for PANN is shown in Fig. 5.
Fig. 4. PANN's training workflow.

Fig. 5. One training epoch for PANN.
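Since PANN's weights are physical circuit parameters, one training epoch reduces to: simulate the trajectory with the current parameter estimate, compare against the measured waveform, and update the parameter along the loss gradient. The sketch below assumes the illustrative RL circuit used earlier and uses a finite-difference gradient for simplicity; all names and values are hypothetical.

```python
import numpy as np

def rollout(x0, u_seq, L, R, dt=1e-6):
    """Simulate the RL-circuit PANN forward over the input sequence."""
    x, out = x0, []
    for u in u_seq:
        x = x + dt * (u - R * x) / L
        out.append(x)
    return np.array(out)

def train_epoch(L_est, data, R=0.1, lr=1e-7, dt=1e-6):
    """One PANN training epoch: roll out, measure MSE against the
    recorded waveform, and update the physical parameter L."""
    x0, u_seq, x_meas = data

    def loss_fn(L):
        return float(np.mean((rollout(x0, u_seq, L, R, dt) - x_meas) ** 2))

    loss = loss_fn(L_est)
    eps = 1e-9                                   # finite-difference step
    grad = (loss_fn(L_est + eps) - loss) / eps   # dLoss/dL
    return L_est - lr * grad, loss               # gradient-descent update
```

In practice the repository's models would compute this gradient with automatic differentiation through the unrolled recurrence rather than finite differences; the loop structure is the same.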

V. PANN is Data-Light and Lightweight

PANN is data-light: because it embeds circuit physical principles directly into its neural architecture, it maintains stringent physical consistency and requires only a few data samples for training, reducing data requirements by roughly three orders of magnitude. Theoretically, PANN needs only a dataset whose number of time-series points is no fewer than the number of defined converter parameters. Additionally, PANN resembles a single-layer recurrent neural network with only a handful of neural parameters, so it is also lightweight. These advantages are illustrated in Fig. 6.
Fig. 6. Data-light and lightweight advantages of PANN.

VI. PANN is Flexible

PANN is flexible in four main respects: operating conditions, modulation strategies, performance metrics, and circuit parameters and topological variants, as summarized in Fig. 7.
Fig. 7. Flexibility of PANN.

VII. Customize PANN to your Application/Converter of Interests

The steps shown in Fig. 8 can be followed to customize PANN to your specific application or converter of interest. Fig. 8 showcases the modeling of non-resonant DAB converters.
Fig. 8. Case study: designing PANN for DAB converters.


PANN Tutorial

A comprehensive tutorial on PANN is given in PANN_Tutorial.pdf, under the theme "The Next Generation of AI for Power Electronics: Explainable, Light, and Flexible". The slides were prepared by Xinze Li and Fanfan Lin.

The tutorial covers a basic introduction to AI applications in power electronics, a brief discussion of physics-informed machine learning methods, PANN inference and its explainability, PANN training and its light characteristics, and PANN's flexibility across diverse conditions and topologies (out-of-domain transfer capability).

Deploy PANN

To deploy PE-GPT on your PC, first set up your API call to the OpenAI models; see core/llm/llm.py for details.
If you want to interact with the Plecs software to simulate the designed modulation for the DAB converter, enable the XML-RPC interface in the Plecs settings.
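Once the XML-RPC interface is enabled, a Python client can drive Plecs directly. The sketch below uses the standard Plecs XML-RPC endpoint (default port 1080); the model path and model name are placeholders for your own files, not paths from this repository.

```python
import xmlrpc.client

def run_plecs_simulation(model_path, model_name,
                         host="localhost", port=1080):
    """Load a Plecs model over XML-RPC and run one simulation.

    Requires the XML-RPC interface to be enabled in Plecs preferences.
    model_path/model_name are hypothetical placeholders."""
    server = xmlrpc.client.ServerProxy(f"http://{host}:{port}/RPC2")
    server.plecs.load(model_path)               # open the .plecs model
    result = server.plecs.simulate(model_name)  # run and fetch results
    server.plecs.close(model_name)              # close the model again
    return result
```

The returned structure contains the simulated time axis and scope values, which can then be compared against the modulation designed for the DAB converter.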

```shell
# clone the GitHub repository
git clone https://github.com/XinzeLee/PANN

# change the current working directory
cd PANN

# install all required dependencies
pip install -r requirements.txt
```

Now you can import the customized PANN models. It is recommended to go through the notebooks first, before you start to implement on your own.



Run Notebooks on Google Colab

Although it is strongly recommended to run the notebooks on your local machine (the graphical plots are easier to view and interact with), we have also provided a few Google Colab notebooks.


Notes

This repository provides a simplified version of the PE-GPT methodology presented in our journal paper. Despite the simplifications, the released code preserves the core architecture of the proposed PE-GPT.
This repository currently includes the following functions/blocks: retrieval-augmented generation, LLM agents, a model zoo (with a physics-in-architecture neural network deployed in the ONNX engine for modeling DAB converters), a metaheuristic algorithm for optimization, simulation verification, a graphical user interface, and a knowledge base. Please note that the current knowledge base is a simplified version for illustration.



License

This code is licensed under the Apache License Version 2.0.
