New representativity calculations post processor #2058
base: devel
Changes from 250 commits
Original file line number | Diff line number | Diff line change |
---|---|---|
|
@@ -9,12 +9,12 @@ \subsubsection{Validation PostProcessors} | |
\begin{itemize} | ||
\item \textbf{Probabilistic}, using probabilistic method for validation, can be used for both static and time-dependent problems. | ||
\item \textbf{PPDSS}, using dynamic system scaling method for validation, can only be used for time-dependent problems. | ||
% \item \textbf{Representativity} | ||
\item \textbf{Representativity}, using the representativity (bias) factor for validation; currently, it can only be used for static data. | ||
\item \textbf{PCM}, using Physics-guided Coverage Mapping method for validation, can only be used for static problems. | ||
\end{itemize} | ||
% | ||
|
||
The choices of the available metrics and acceptable data objects are specified in table \ref{tab:ValidationAlgorithms}. | ||
The choices of the available metrics and acceptable data objects are specified in table~\ref{tab:ValidationAlgorithms}. | ||
|
||
\begin{table}[] | ||
\caption{Validation Algorithms and respective available metrics and DataObjects} | ||
|
@@ -23,6 +23,7 @@ \subsubsection{Validation PostProcessors} | |
\hline | ||
\textbf{Validation Algorithm} & \textbf{DataObject} & \textbf{Available Metrics} \\ \hline | ||
Probabilistic & \begin{tabular}[c]{@{}c@{}}PointSet \\ HistorySet\end{tabular} & \begin{tabular}[c]{@{}c@{}}CDFAreaDifference\\ \\ PDFCommonArea\end{tabular} \\ \hline | ||
Representativity & \begin{tabular}[c]{@{}c@{}}PointSet \\ HistorySet\end{tabular} & \begin{tabular}[c]{@{}c@{}}\end{tabular} \\ \hline | ||
PPDSS & HistorySet & DSS \\ \hline | ||
PCM & PointSet & (not applicable) \\ \hline | ||
\end{tabular} | ||
|
@@ -105,7 +106,7 @@ \subsubsection{Validation PostProcessors} | |
\item \xmlAttr{type}, \xmlDesc{required string attribute}, the sub-type of this Metric (e.g., SKL, Minkowski) | ||
\end{itemize} | ||
\nb The choice of the available metric is \xmlString{DSS}, please | ||
refer to \ref{sec:Metrics} for detailed descriptions about this metric. | ||
refer to~\ref{sec:Metrics} for detailed descriptions about this metric. | ||
\item \xmlNode{pivotParameterFeature}, \xmlDesc{string, required field}, specifies the pivotParameter for a feature \textbf{HistorySet}. The feature pivot parameter is the shared index of the output variables in the data object. | ||
\item \xmlNode{pivotParameterTarget}, \xmlDesc{string, required field}, specifies the pivotParameter for a target \textbf{HistorySet}. The target pivot parameter is the shared index of the output variables in the data object. | ||
\item \xmlNode{separateFeatureData}, \xmlDesc{string, optional field}, specifies the custom feature interval over which to apply DSS postprocessing. The string contains three parts in a single token: the start time, a `|' separator, and the end time. For example, 0.0|0.5. | ||
|
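The `start|end` interval string described above can be parsed as in this minimal sketch (the helper name `parse_interval` is illustrative only, not part of RAVEN's API):

```python
def parse_interval(spec):
    """Parse a custom feature interval of the form 'start|end', e.g. '0.0|0.5'."""
    start_str, end_str = spec.split('|')
    return float(start_str), float(end_str)
```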
@@ -247,7 +248,7 @@ \subsubsection{Validation PostProcessors} | |
number of measurements should be equal to the number of features and in the same order as the features listed in \xmlNode{Features}. | ||
\end{itemize} | ||
|
||
The output of PCM is a comma-separated list of strings in the format of ``pri\textunderscore post\textunderscore stdReduct\textunderscore [targetName]'', | ||
The output of PCM is a comma-separated list of strings in the format of ``pri\textunderscore post\textunderscore stdReduct\textunderscore [targetName]'', | ||
where [targetName] is the $VariableName$ specified in the DataObject of \xmlNode{Targets}. | ||
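The output-naming convention above can be sketched as follows (the function name is hypothetical, shown for illustration only):

```python
def pcm_output_names(target_names):
    """Build the 'pri_post_stdReduct_[targetName]' output labels for each target."""
    return ['pri_post_stdReduct_' + name for name in target_names]
```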
|
||
|
||
|
@@ -267,3 +268,84 @@ \subsubsection{Validation PostProcessors} | |
... | ||
<Simulation> | ||
\end{lstlisting} | ||
|
||
|
||
\paragraph{Representativity} | ||
The \textbf{Representativity} post-processor is one of three \textbf{Validation} post-processors. | ||
A common post-processor interface acts as a gate for applying these validation algorithms | ||
(i.e., representativity, Physics-guided Coverage Mapping (PCM), and Dynamic System Scaling (DSS)), | ||
providing users with a common infrastructure for \textbf{Validation} problems. | ||
The representativity theory was first developed in the neutronics community~\cite{Gandini, palmiotti1, palmiotti2} and was later extended to thermal hydraulics~\cite{Epiney1, Epiney2}. | ||
|
||
% | ||
\ppType{Representativity}{Representativity} | ||
% | ||
|
||
\begin{itemize} | ||
\item \xmlNode{Features}, \xmlDesc{comma separated string, required field}, specifies the names of the features, which can be the measurables/observables of the mock model. Note that this nomenclature differs from the machine learning nomenclature. | ||
|
||
\item \xmlNode{Targets}, \xmlDesc{comma separated string, required field}, contains a comma separated list of | ||
targets. These are the figures of merit (FOMs) in the target model against which the mock model is being validated. | ||
|
||
\item \xmlNode{featureParameters}, \xmlDesc{comma separated string, required field}, specifies the names of the parameters/inputs to the mock/prototype model. | ||
|
||
\item \xmlNode{targetParameters}, \xmlDesc{comma separated string, required field}, specifies the names of the parameters/inputs to the target model. | ||
Review comment: As we discussed, it is better to change these names to more meaningful names. |
||
|
||
\item \xmlNode{pivotParameter}, \xmlDesc{string, optional field}, ID of the temporal variable of the mock model. Default is ``time''. | ||
\nb Used in the event of time-dependent (time-series) validation, i.e., when the \xmlNode{pivotValue}-based operation is requested. | ||
||
\item \xmlNode{targetPivotParameter}, \xmlDesc{string, optional field}, ID of the temporal variable in the target model. Default is ``time''. | ||
\nb Used in the event of time-dependent (time-series) validation, i.e., when the \xmlNode{pivotValue}-based operation is requested. | ||
Both \textbf{PointSet} and \textbf{HistorySet} inputs are accepted by this post-processor. | ||
||
If the name of a given variable to be compared is unique, it can be used directly; otherwise, the variable can be specified | ||
with the $DataObjectName|VariableName$ nomenclature. | ||
||
\textbf{Example:} | ||
\begin{lstlisting}[style=XML,morekeywords={subType}] | ||
<Simulation> | ||
... | ||
<Steps> | ||
<!--Multirun the prototype model--> | ||
<MultiRun name="mcRunExp" re-seeding="20021986"> | ||
<Input class="DataObjects" type="PointSet">inputPlaceHolder2</Input> | ||
<Model class="Models" type="ExternalModel">linModel</Model> | ||
<Sampler class="Samplers" type="MonteCarlo">ExperimentMCSampler</Sampler> | ||
<Output class="DataObjects" type="PointSet">outputDataMC1</Output> | ||
</MultiRun> | ||
<!--Multirun the target model--> | ||
<MultiRun name="mcRunTar" re-seeding="68912002"> | ||
<Input class="DataObjects" type="PointSet">inputPlaceHolder2</Input> | ||
<Model class="Models" type="ExternalModel">tarModel</Model> | ||
<Sampler class="Samplers" type="MonteCarlo">TargetMCSampler</Sampler> | ||
<Output class="DataObjects" type="PointSet">outputDataMC2</Output> | ||
</MultiRun> | ||
<!--Create the Representativity PostProcessor--> | ||
<PostProcess name="PP1"> | ||
<Input class="DataObjects" type="PointSet">outputDataMC1</Input> | ||
<Input class="DataObjects" type="PointSet">outputDataMC2</Input> | ||
<Model class="Models" type="PostProcessor">pp1</Model> | ||
<Output class="DataObjects" type="PointSet">pp1_metric</Output> | ||
<Output class="OutStreams" type="Print">pp1_metric_dump</Output> | ||
</PostProcess> | ||
</Steps> | ||
... | ||
<Models> | ||
<ExternalModel ModuleToLoad="../../../AnalyticModels/expLinModel.py" name="linModel" subType=""> | ||
<inputs>p1, p2, e1, e2, e3, bE</inputs> | ||
<outputs>F1, F2, F3</outputs> | ||
</ExternalModel> | ||
<ExternalModel ModuleToLoad="../../../AnalyticModels/tarLinModel.py" name="tarModel" subType=""> | ||
<inputs>p1, p2, o1, o2, o3, bT</inputs> | ||
<outputs>FOM1, FOM2, FOM3</outputs> | ||
</ExternalModel> | ||
<PostProcessor name="pp1" subType="Representativity"> | ||
<Features>outputDataMC1|F1, outputDataMC1|F2, outputDataMC1|F3</Features> | ||
<Targets>outputDataMC2|FOM1, outputDataMC2|FOM2, outputDataMC2|FOM3</Targets> | ||
<featureParameters>outputDataMC1|p1,outputDataMC1|p2</featureParameters> | ||
<targetParameters>outputDataMC2|p1,outputDataMC2|p2</targetParameters> | ||
<pivotParameter>outputDataMC1|time</pivotParameter> | ||
<targetPivotParameter>outputDataMC2|time</targetPivotParameter> | ||
</PostProcessor> | ||
</Models> | ||
... | ||
<Simulation> | ||
\end{lstlisting} |
|
@@ -112,3 +112,49 @@ @TechReport{RAVENtheoryManual | |
year = {2016}, | ||
key = {INL/EXT-16-38178} | ||
} | ||
|
||
@incollection{Gandini, | ||
title={Uncertainty analysis and experimental data transposition methods based on perturbation theory}, | ||
author={Gandini, A.}, | ||
booktitle={Uncertainty Analysis}, | ||
pages={217--258}, | ||
year={1988}, | ||
publisher={CRC Press, Boca Raton, Fla, USA} | ||
} | ||
|
||
@article{palmiotti1, | ||
title={A global approach to the physics validation of simulation codes for future nuclear systems}, | ||
author={Palmiotti, Giuseppe and Salvatores, Massimo and Aliberti, Gerardo and Hiruta, Hikarui and McKnight, R and Oblozinsky, P and Yang, WS}, | ||
journal={Annals of Nuclear Energy}, | ||
volume={36}, | ||
number={3}, | ||
pages={355--361}, | ||
year={2009}, | ||
publisher={Elsevier} | ||
} | ||
|
||
@article{palmiotti2, | ||
title={The role of experiments and of sensitivity analysis in simulation validation strategies with emphasis on reactor physics}, | ||
author={Palmiotti, Giuseppe and Salvatores, Massimo}, | ||
journal={Annals of Nuclear Energy}, | ||
volume={52}, | ||
pages={10--21}, | ||
year={2013}, | ||
publisher={Elsevier} | ||
} | ||
|
||
@article{Epiney1, | ||
title={A Systematic Approach to Inform Experiment Design Through Modern Modeling and Simulation Methods}, | ||
author={Epiney, A and Rabiti, C and Davis, C}, | ||
Review comment: this reference does not indicate whether it is a journal or conference proceeding. | ||
||
year={2019} | ||
} | ||
|
||
@inproceedings{Epiney2, | ||
title={Representativity Analysis Applied to TREAT Water Loop LOCA Experiment Design}, | ||
author={Epiney, Aaron S and Woolstenhulme, Nicolas}, | ||
booktitle={International Conference on Nuclear Engineering}, | ||
volume={83785}, | ||
pages={V003T13A055}, | ||
year={2020}, | ||
organization={American Society of Mechanical Engineers} | ||
} |
|
@@ -27,14 +27,14 @@ | |
#External Modules End----------------------------------------------------------- | ||
|
||
#Internal Modules--------------------------------------------------------------- | ||
from .PostProcessorInterface import PostProcessorInterface | ||
from .PostProcessorReadyInterface import PostProcessorReadyInterface | ||
from ...utils import utils | ||
from ...utils import InputData, InputTypes | ||
from ...utils import mathUtils | ||
from ... import Files | ||
#Internal Modules End----------------------------------------------------------- | ||
|
||
class BasicStatistics(PostProcessorInterface): | ||
class BasicStatistics(PostProcessorReadyInterface): | ||
""" | ||
BasicStatistics filter class. It computes all the most popular statistics | ||
""" | ||
|
@@ -164,92 +164,47 @@ def __init__(self): | |
self.sampleSize = None # number of sample size | ||
self.calculations = {} | ||
self.validDataType = ['PointSet', 'HistorySet', 'DataSet'] # The list of accepted types of DataObject | ||
self.inputDataObjectName = None # name for input data object | ||
self.setInputDataType('xrDataset') | ||
|
||
def inputToInternal(self, currentInp): | ||
def inputToInternal(self, inputIn): | ||
""" | ||
Method to convert an input object into the internal format that is | ||
Method to select corresponding data from Data Objects and normalize the ProbabilityWeight of corresponding data | ||
understandable by this pp. | ||
@ In, currentInp, object, an object that needs to be converted | ||
@ In, inputIn, dict, a dictionary that contains the input Data Object information | ||
@ Out, (inputDataset, pbWeights), tuple, the dataset of inputs and the corresponding variable probability weight | ||
""" | ||
# The BasicStatistics postprocessor only accept DataObjects | ||
self.dynamic = False | ||
currentInput = currentInp [-1] if type(currentInp) == list else currentInp | ||
if len(currentInput) == 0: | ||
self.raiseAnError(IOError, "In post-processor " +self.name+" the input "+currentInput.name+" is empty.") | ||
|
||
inpVars, outVars, dataSet = inputIn['Data'][0] | ||
pbWeights = None | ||
if type(currentInput).__name__ == 'tuple': | ||
return currentInput | ||
# TODO: convert dict to dataset, I think this will be removed when DataSet is used by other entities that | ||
# are currently using this Basic Statistics PostProcessor. | ||
if type(currentInput).__name__ == 'dict': | ||
if 'targets' not in currentInput.keys(): | ||
self.raiseAnError(IOError, 'Did not find targets in the input dictionary') | ||
inputDataset = xr.Dataset() | ||
for var, val in currentInput['targets'].items(): | ||
inputDataset[var] = val | ||
if 'metadata' in currentInput.keys(): | ||
metadata = currentInput['metadata'] | ||
self.pbPresent = True if 'ProbabilityWeight' in metadata else False | ||
if self.pbPresent: | ||
pbWeights = xr.Dataset() | ||
self.realizationWeight = xr.Dataset() | ||
self.realizationWeight['ProbabilityWeight'] = metadata['ProbabilityWeight']/metadata['ProbabilityWeight'].sum() | ||
for target in self.parameters['targets']: | ||
pbName = 'ProbabilityWeight-' + target | ||
if pbName in metadata: | ||
pbWeights[target] = metadata[pbName]/metadata[pbName].sum() | ||
elif self.pbPresent: | ||
pbWeights[target] = self.realizationWeight['ProbabilityWeight'] | ||
else: | ||
self.raiseAWarning('BasicStatistics postprocessor did not detect ProbabilityWeights! Assuming unit weights instead...') | ||
else: | ||
self.raiseAWarning('BasicStatistics postprocessor did not detect ProbabilityWeights! Assuming unit weights instead...') | ||
if 'RAVEN_sample_ID' not in inputDataset.sizes.keys(): | ||
self.raiseAWarning('BasicStatistics postprocessor did not detect RAVEN_sample_ID! Assuming the first dimension of given data...') | ||
self.sampleTag = utils.first(inputDataset.sizes.keys()) | ||
return inputDataset, pbWeights | ||
|
||
if currentInput.type not in ['PointSet','HistorySet']: | ||
self.raiseAnError(IOError, self, 'BasicStatistics postprocessor accepts PointSet and HistorySet only! Got ' + currentInput.type) | ||
|
||
# extract all required data from input DataObjects, an input dataset is constructed | ||
dataSet = currentInput.asDataset() | ||
try: | ||
inputDataset = dataSet[self.parameters['targets']] | ||
except KeyError: | ||
missing = [var for var in self.parameters['targets'] if var not in dataSet] | ||
self.raiseAnError(KeyError, "Variables: '{}' missing from dataset '{}'!".format(", ".join(missing),currentInput.name)) | ||
self.sampleTag = currentInput.sampleTag | ||
self.raiseAnError(KeyError, "Variables: '{}' missing from dataset '{}'!".format(", ".join(missing),self.inputDataObjectName)) | ||
self.sampleTag = utils.first(dataSet.dims) | ||
|
||
|
||
if currentInput.type == 'HistorySet': | ||
if self.dynamic: | ||
dims = inputDataset.sizes.keys() | ||
if self.pivotParameter is None: | ||
if len(dims) > 1: | ||
self.raiseAnError(IOError, self, 'Time-dependent statistics is requested (HistorySet) but no pivotParameter \ | ||
was provided!') | ||
self.raiseAnError(IOError, self, 'Time-dependent statistics is requested (HistorySet) but no pivotParameter \ | ||
was provided!') | ||
||
elif self.pivotParameter not in dims: | ||
self.raiseAnError(IOError, self, 'Pivot parameter', self.pivotParameter, 'is not the associated index for \ | ||
requested variables', ','.join(self.parameters['targets'])) | ||
else: | ||
self.dynamic = True | ||
if not currentInput.checkIndexAlignment(indexesToCheck=self.pivotParameter): | ||
self.raiseAnError(IOError, "The data provided by the data objects", currentInput.name, "is not synchronized!") | ||
self.pivotValue = inputDataset[self.pivotParameter].values | ||
if self.pivotValue.size != len(inputDataset.groupby(self.pivotParameter)): | ||
msg = "Duplicated values were identified in pivot parameter, please use the 'HistorySetSync'" + \ | ||
" PostProcessor to synchronize your data before running 'BasicStatistics' PostProcessor." | ||
self.raiseAnError(IOError, msg) | ||
self.pivotValue = dataSet[self.pivotParameter].values | ||
if self.pivotValue.size != len(dataSet.groupby(self.pivotParameter)): | ||
msg = "Duplicated values were identified in pivot parameter, please use the 'HistorySetSync'" + \ | ||
" PostProcessor to synchronize your data before running 'BasicStatistics' PostProcessor." | ||
self.raiseAnError(IOError, msg) | ||
# extract all required meta data | ||
metaVars = currentInput.getVars('meta') | ||
self.pbPresent = True if 'ProbabilityWeight' in metaVars else False | ||
self.pbPresent = 'ProbabilityWeight' in dataSet | ||
if self.pbPresent: | ||
pbWeights = xr.Dataset() | ||
self.realizationWeight = dataSet[['ProbabilityWeight']]/dataSet[['ProbabilityWeight']].sum() | ||
for target in self.parameters['targets']: | ||
pbName = 'ProbabilityWeight-' + target | ||
if pbName in metaVars: | ||
if pbName in dataSet: | ||
pbWeights[target] = dataSet[pbName]/dataSet[pbName].sum() | ||
elif self.pbPresent: | ||
pbWeights[target] = self.realizationWeight['ProbabilityWeight'] | ||
|
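The per-target probability-weight selection in the diff above follows a simple rule: prefer a target-specific `ProbabilityWeight-<target>` entry, fall back to the normalized global `ProbabilityWeight`, and otherwise assume unit weights. A minimal plain-Python sketch of that logic (names mirror the diff; this is not the RAVEN implementation, which operates on xarray Datasets):

```python
def select_probability_weights(metadata, targets):
    """Pick and normalize probability weights per target.

    metadata: dict mapping weight names to lists of floats
    targets:  list of target variable names
    Returns a dict target -> normalized weight list, or None where
    unit weights should be assumed downstream.
    """
    base = metadata.get('ProbabilityWeight')
    norm_base = None
    if base is not None:
        total = sum(base)
        norm_base = [w / total for w in base]
    weights = {}
    for target in targets:
        per_target = metadata.get('ProbabilityWeight-' + target)
        if per_target is not None:
            # target-specific weights win; normalize them to sum to 1
            total = sum(per_target)
            weights[target] = [w / total for w in per_target]
        else:
            # fall back to global realization weights (or None for unit weights)
            weights[target] = norm_base
    return weights
```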
@@ -267,6 +222,9 @@ def initialize(self, runInfo, inputs, initDict): | |
@ In, initDict, dict, dictionary with initialization options | ||
@ Out, None | ||
""" | ||
if len(inputs)>1: | ||
self.raiseAnError(IOError, 'Post-Processor', self.name, 'accepts only one DataObject') | ||
self.inputDataObjectName = inputs[-1].name | ||
#construct a list of all the parameters that have requested values into self.allUsedParams | ||
self.allUsedParams = set() | ||
for metricName in self.scalarVals + self.vectorVals: | ||
|
@@ -284,6 +242,8 @@ def initialize(self, runInfo, inputs, initDict): | |
inputObj = inputs[-1] if type(inputs) == list else inputs | ||
if inputObj.type == 'HistorySet': | ||
self.dynamic = True | ||
if not inputObj.checkIndexAlignment(indexesToCheck=self.pivotParameter): | ||
self.raiseAnError(IOError, "The data provided by the input data object is not synchronized!") | ||
inputMetaKeys = [] | ||
outputMetaKeys = [] | ||
for metric, infos in self.toDo.items(): | ||
|
@@ -1544,6 +1504,21 @@ def spearmanCorrelation(self, featVars, targVars, featSamples, targSamples, pbWe | |
da = xr.DataArray(spearmanMat, dims=('targets','features'), coords={'targets':targVars,'features':featVars}) | ||
return da | ||
|
||
def _runLegacy(self, inputIn): | ||
""" | ||
This method executes the postprocessor action with the old data format. In this case, it computes all the requested statistical FOMs | ||
@ In, inputIn, object, object contained the data to process. (inputToInternal output) | ||
@ Out, outputSet, xarray.Dataset or dictionary, dataset or dictionary containing the results | ||
""" | ||
if type(inputIn).__name__ == 'PointSet': | ||
merged = inputIn.asDataset() | ||
elif 'metadata' in inputIn: | ||
merged = xr.merge([inputIn['metadata'],inputIn['targets']]) | ||
else: | ||
merged = xr.merge([inputIn['targets']]) | ||
newInputIn = {'Data':[[None,None,merged]]} | ||
return self.run(newInputIn) | ||
|
||
def run(self, inputIn): | ||
""" | ||
This method executes the postprocessor action. In this case, it computes all the requested statistical FOMs | ||
|