Uq workflow #24

Open · wants to merge 23 commits into master

Conversation

@SteSeg (Collaborator) commented Oct 23, 2024

Added the engine for the TMC (and fTMC) UQ method. Almost all of the code lives in ofb.data; in addition, there is the ResultsFromTMC class in read_results.py.

Here are some key features:
In data_conventions.py it is possible to convert nuclide ids between the ZAID, ZAM and GNDS formats (and vice versa), since sandy, openmc and NJOY2016 often take different nuclide id formats (zaid_to_zam(), get_nuclide_zaid(), get_nuclide_gnds()).
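
EXAMPLE (illustrative only; the exact argument and return types of these helpers are assumptions): convert Be9 between the three id formats.

import openmc_fusion_benchmarks as ofb

zaid = ofb.data.get_nuclide_zaid('Be9')   # GNDS name -> ZAID, e.g. 4009 (assumed signature)
zam = ofb.data.zaid_to_zam(4009)          # ZAID -> ZAM id (assumed signature)
gnds = ofb.data.get_nuclide_gnds(4009)    # ZAID -> GNDS name, e.g. 'Be9' (assumed signature)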

In sandy_wrapper.py it is possible to call sandy to perturb cross sections. Perturbed ace files can be generated with get_ace_files(). It is possible to specify the reaction channel to perturb, the number of samples to generate and the nuclear data library to take the nuclear data from. It is necessary to have an NJOY environment variable pointing to NJOY's package folder (tested with NJOY2016). The ace_to_hdf5() function uses the openmc python api to convert the perturbed ace files to openmc-friendly hdf5 files; it can remove the ace files in the process (remove_ace=True). perturb_to_hdf5() automatically runs these two functions. The whole process generates a given number of perturbed cross section files for a given nuclide in a folder, named after the nuclide and the nuclear data library, created in the current working directory.

EXAMPLE: use the sandy wrapper to generate 500 perturbed openmc-compatible cross section hdf5 files for Be9 from the ENDF/B-VIII.0 library, using 1 process.

import openmc_fusion_benchmarks as ofb

ofb.data.perturb_to_hdf5(nsamples=500, lib_name='ENDFB_80', nuclide='Be9', reaction=None, nprocesses=1)

Functions in modify_xs_xml.py take care of modifying the cross_sections.xml file that openmc reads, in order to plug in perturbed cross section data. get_env_variable() gets the path stored in a given environment variable (typically "OPENMC_CROSS_SECTIONS"). rewrite_xs_xml() creates a copy of the cross_sections.xml file (useful to avoid modifying the original one associated with the env variable). perturb_xs_xml() substitutes, in a given cross_sections.xml file (typically the copied one), the original path to a nuclide xs hdf5 file with another one (most likely a perturbed one).

EXAMPLE: create a local copy of the cross_sections.xml file pointed to by the 'OPENMC_CROSS_SECTIONS' env variable (ofb.data.get_env_variable(var_name='OPENMC_CROSS_SECTIONS') is already embedded in ofb.data.rewrite_xs_xml()) and substitute the original Be9 cross section with my_Be9.h5.

import openmc_fusion_benchmarks as ofb

ofb.data.rewrite_xs_xml(new_xs_file='my_cross_sections.xml')
ofb.data.perturb_xs_xml(xs_file='my_cross_sections.xml', xs_h5_file='my_Be9.h5', nuclide='Be9')

In tmc_engine.py there is the core of the TMC-UQ algorithm. The tmc_engine() function takes in an openmc.Model object to run the UQ on. It also takes in all the info necessary to run the sandy functions. There is a preprocessing phase that generates the requested perturbed hdf5 files (it is possible to skip this part with perturb_xs=False, in case the perturbed xs hdf5 files have already been generated). Then it enters the TMC loop. Each step of the loop is an openmc simulation of the model with a different hdf5 file for the nuclide of choice. Here is what happens at every step (a rough sketch follows the list):

  • generates a new cross_sections.xml file to modify at each step (rewrite_xs_xml())
  • in that file, for the specified nuclide, changes the path of the xs to the perturbed hdf5 file and makes openmc point to this modified xml file
  • runs the openmc model
  • opens the statepoint.*.h5 file, extracts ALL the tallies and stores them in a tmc_results_{nuclide_name}.h5 file in pandas dataframe format; each tally is saved with the same name it was given in the openmc model, plus an underscore and the tmc loop step number (e.g. "tallyname_n", where n is the n-th tmc loop step)
  • deletes the summary.h5 and statepoint.*.h5 files
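
A rough sketch of what a single TMC iteration might look like (illustrative only, not the actual tmc_engine() internals; the loop variable, file names and tally storage details are assumptions):

import os
import openmc
import openmc_fusion_benchmarks as ofb

for i in range(nsamples):
    # fresh local copy of the xml file, then point the nuclide at the i-th perturbed hdf5
    ofb.data.rewrite_xs_xml(new_xs_file='cross_sections_mod.xml')
    ofb.data.perturb_xs_xml(xs_file='cross_sections_mod.xml',
                            xs_h5_file=f'Be9_{i}.h5', nuclide='Be9')
    os.environ['OPENMC_CROSS_SECTIONS'] = 'cross_sections_mod.xml'

    statepoint_path = model.run()  # run the openmc model

    # extract all tallies and store them as "tallyname_i" in tmc_results_Be9.h5
    with openmc.StatePoint(statepoint_path) as sp:
        for tally in sp.tallies.values():
            tally.get_pandas_dataframe().to_hdf('tmc_results_Be9.h5',
                                                key=f'{tally.name}_{i}')

    os.remove(statepoint_path)  # clean up statepoint and summary files
    os.remove('summary.h5')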

There is also the jade_sphere model, inspired by JADE's original sphere benchmark model (more info in the jade default models). The model is simple enough to run fast in a TMC algorithm and to test nuclear data uncertainties (as well as the TMC-UQ workflow) with no additional complexities. It takes in a given material and tallies the leakage fraction and the flux in the material. It is possible to specify additional reactions to tally in the material (for both neutrons and photons).

EXAMPLE: test the whole tmc_engine() workflow for Be9 with the jade_sphere model. It includes the generation of a local cross_sections_mod.xml copied from the original pointed to by the xs env variable, the generation of 500 perturbed xs hdf5 files (perturb_xs=True) and the generation of a tmc_results_Be9.h5 file. The nreactions argument of the jade_sphere() model specifies additional reactions to tally in the material other than leakage and flux, specifically for neutrons.

import openmc
import openmc_fusion_benchmarks as ofb

# preparation of the jade sphere model
be9 = openmc.Material(name='Be9')
be9.add_element('Be', 1.0)
be9.set_density('g/cm3', 1.85)

tally_reactions = ['(n,2n)', '(n,Xt)', 'elastic', 'absorption']
model = ofb.data.jade_sphere(material=be9, particles=int(1e5), photon_transport=False, nreactions=tally_reactions)

# run the TMC algorithm
ofb.data.tmc_engine(model=model, 
                    nsamples=500, 
                    lib_name='endfb_80', 
                    nuclide='Be9', 
                    pert_reaction=None, 
                    perturb_xs=True)

The ResultsFromTMC class in read_results.py helps with reading the tmc_results_{nuclidename}.h5 file. It can list all the tallies ("tallyname_0", "tallyname_1", etc.) or only the tally names, report the number of iterations of the TMC loop, and extract all the tally means and standard deviations in numpy array format.

EXAMPLE: read the TMC results for Be9.

import openmc_fusion_benchmarks as ofb

results = ofb.ResultsFromTMC('tmc_results_Be9.h5')   # load results
results.list_tallies()  # print all tallies: ['n_(n,2n)_0', 'n_(n,2n)_1', ..., 'n_(n,2n)_499', 'n_(n,Xt)_0', 'n_(n,Xt)_1', ..., 'n_(n,Xt)_499', etc.]
results.list_tally_names()  # print only the names of the tallies: ['n_(n,2n)', 'n_(n,Xt)', 'n_elastic', 'n_absorption', 'n_leakage', 'n_flux']
leakage_means = results.get_means('n_leakage')  # np.array with all 500 means of the given tally
leakage_stds = results.get_stds('n_leakage')  # np.array with all 500 standard deviations of the given tally
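
From these arrays the nuclear data uncertainty on a tally can be estimated, in the usual TMC fashion, from the spread of the means across the 500 samples (a minimal sketch using numpy; in full TMC the statistical component is subtracted from the observed variance):

import numpy as np

nd_mean = np.mean(leakage_means)         # best estimate of the leakage tally
nd_std = np.std(leakage_means, ddof=1)   # spread of the means ~ nuclear data uncertainty
stat_std = np.mean(leakage_stds)         # typical statistical uncertainty of a single run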

To do before merging:

  • refactoring of read_results.py with ABC
  • documentation for jade_sphere, sandy_wrapper, tmc_engine
  • possible refactoring of stats_analysis.py
  • postprocessing notebooks
  • add sandy as dependency
  • add njoy2016 as dependency
  • maybe allow for conversions of tallies (MT <--> strings) in data_conventions.py
  • possible helpers for postprocessing and analysis
