diff --git a/doc/user_manual/PostProcessors/Validation.tex b/doc/user_manual/PostProcessors/Validation.tex
index c530e0eba9..c5e7084082 100644
--- a/doc/user_manual/PostProcessors/Validation.tex
+++ b/doc/user_manual/PostProcessors/Validation.tex
@@ -9,7 +9,7 @@ \subsubsection{Validation PostProcessors}
\begin{itemize}
\item \textbf{Probabilistic}, using probabilistic method for validation, can be used for both static and time-dependent problems.
\item \textbf{PPDSS}, using dynamic system scaling method for validation, can only be used for time-dependent problems.
- % \item \textbf{Representativity}
+  \item \textbf{Representativity}, using the representativity (bias) factor for validation; currently it can be used only for static data.
\item \textbf{PCM}, using Physics-guided Coverage Mapping method for validation, can be used for static and time-dependent problems.
\end{itemize}
%
@@ -17,20 +17,22 @@ \subsubsection{Validation PostProcessors}
The choices of the available metrics and acceptable data objects are specified in table \ref{tab:ValidationAlgorithms}.
\begin{table}[]
+\centering
\caption{Validation Algorithms and respective available metrics and DataObjects}
\label{tab:ValidationAlgorithms}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Validation Algorithm} & \textbf{DataObject} & \textbf{Available Metrics} \\ \hline
Probabilistic & \begin{tabular}[c]{@{}c@{}}PointSet \\ HistorySet\end{tabular} & \begin{tabular}[c]{@{}c@{}}CDFAreaDifference\\ \\ PDFCommonArea\end{tabular} \\ \hline
+Representativity & \begin{tabular}[c]{@{}c@{}}PointSet \\ HistorySet \\ DataSet\end{tabular} & (not applicable) \\ \hline
PPDSS & HistorySet & DSS \\ \hline
-PCM & PointSet & (not applicable) \\ \hline
+PCM & \begin{tabular}[c]{@{}c@{}}PointSet \\ HistorySet\end{tabular} & (not applicable) \\ \hline
\end{tabular}
\end{table}
These post-processors can accept multiple \textbf{DataObjects} as inputs. When multiple DataObjects are provided,
the user can use the $DataObjectName|InputOrOutput|VariableName$ nomenclature to specify the variable
-in \xmlNode{Features} and \xmlNode{Targets} for comparison.
+in \xmlNode{prototypeOutputs} and \xmlNode{targetOutputs} for comparison.
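+As a minimal sketch (a hypothetical helper, mirroring how the framework splits these
+strings internally), this nomenclature can be parsed as:
+\begin{lstlisting}[language=python]
+def parse_variable(spec):
+  # "outMC1|Output|x1" -> ("outMC1", "x1"); a bare "x1" -> (None, "x1")
+  if "|" in spec:
+    parts = spec.split("|")
+    return parts[0], parts[-1]
+  return None, spec
+\end{lstlisting}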
\paragraph{Probabilistic}
The \textbf{Probabilistic} post-processor specifies that the validation needs to be performed
@@ -42,10 +44,10 @@ \subsubsection{Validation PostProcessors}
%
\begin{itemize}
- \item \xmlNode{Features}, \xmlDesc{comma separated string, required field}, specifies the names of the features.
- \item \xmlNode{Targets}, \xmlDesc{comma separated string, required field}, contains a comma separated list of
- targets. \nb Each target is paired with a feature listed in xml node \xmlNode{Features}. In this case, the
- number of targets should be equal to the number of features.
+ \item \xmlNode{prototypeOutputs}, \xmlDesc{comma separated string, required field}, specifies the names of the prototype outputs.
+  \item \xmlNode{targetOutputs}, \xmlDesc{comma separated string, required field}, contains a comma separated list of
+    target outputs. \nb Each target output is paired with a prototype output listed in the XML node \xmlNode{prototypeOutputs}. In this case, the
+    number of target outputs should be equal to the number of prototype outputs.
  \item \xmlNode{pivotParameter}, \xmlDesc{string, required field if HistorySet is used}, specifies the pivotParameter for a HistorySet.
The pivot parameter is the shared index of the output variables in the data object.
\item \xmlNode{Metric}, \xmlDesc{string, required field}, specifies the \textbf{Metric} name that is defined via
@@ -70,8 +72,8 @@ \subsubsection{Validation PostProcessors}
...
-      <Features>outputDataMC1|ans</Features>
-      <Targets>outputDataMC2|ans2</Targets>
+      <prototypeOutputs>outputDataMC1|ans</prototypeOutputs>
+      <targetOutputs>outputDataMC2|ans2</targetOutputs>
       <Metric class="Metrics" type="CDFAreaDifference">cdf_diff</Metric>
       <Metric class="Metrics" type="PDFCommonArea">pdf_area</Metric>
@@ -90,12 +92,9 @@ \subsubsection{Validation PostProcessors}
%
\begin{itemize}
- \item \xmlNode{Features}, \xmlDesc{comma separated string, required field}, specifies the names of the features. Make sure the feature data are normalized by a nominal value.
- To enable user defined time interval selection, this postprocessor will only consider the first feature name provided. If user provides more than one,
- it will output an error.
- \item \xmlNode{Targets}, \xmlDesc{comma separated string, required field}, specifies the names of the targets. Make sure the feature data are normalized by a nominal value. \nb Each target is paired with a feature listed in xml node \xmlNode{Features}.
- To enable user defined time interval selection, this postprocessor will only consider the first feature name provided. If user provides more than one,
- it will output an error.
+ \item \xmlNode{prototypeOutputs}, \xmlDesc{comma separated string, required field}, contains a comma separated list of strings specifying the names of the prototype/mock model outputs.
+ \item \xmlNode{targetOutputs}, \xmlDesc{comma separated string, required field}, contains a comma separated list of
+ strings specifying target outputs.
  \item \xmlNode{pivotParameter}, \xmlDesc{string, required field if HistorySet is used}, specifies the pivotParameter for a HistorySet.
The pivot parameter is the shared index of the output variables in the data object.
\item \xmlNode{Metric}, \xmlDesc{string, required field}, specifies the \textbf{Metric} name that is defined via
@@ -108,18 +107,7 @@ \subsubsection{Validation PostProcessors}
refer to \ref{sec:Metrics} for detailed descriptions about this metric.
  \item \xmlNode{pivotParameterFeature}, \xmlDesc{string, required field}, specifies the pivotParameter for a feature HistorySet. The feature pivot parameter is the shared index of the output variables in the data object.
  \item \xmlNode{pivotParameterTarget}, \xmlDesc{string, required field}, specifies the pivotParameter for a target HistorySet. The target pivot parameter is the shared index of the output variables in the data object.
- \item \xmlNode{separateFeatureData}, \xmlDesc{string, optional field}, specifies the custom feature interval to apply DSS postprocessing. The string should contain three parts; start time, `|', and end time all in one. For example, 0.0|0.5.
- The start and end time should be in ratios or raw values of the full interval. In this case 0.5 would be either the midpoint time or time 0.5 of the given time units. This node is not required and if not provided, the default is the full time interval.
- the following attributes need to be specified:
- \begin{itemize}
- \item \xmlAttr{type}, \xmlDesc{optional string attribute}, options are `ratio' or `raw\_values'. The default is `ratio'.
- \end{itemize}
- \item \xmlNode{separateTargetData}, \xmlDesc{string, optional field}, specifies the custom target interval to apply DSS postprocessing. The string should contain three parts; start time, `|', and end time all in one. For example, 0.0|0.5.
- The start and end time should be in ratios or raw values of the full interval. In this case 0.5 would be either the midpoint time or time 0.5 of the given time units. This node is not required and if not provided, the default is the full time interval.
- the following attributes need to be specified:
- \begin{itemize}
- \item \xmlAttr{type}, \xmlDesc{optional string attribute}, options are `ratio' or `raw\_values'. The default is `ratio'.
- \end{itemize}
+  \item \xmlNode{multiOutput}, \xmlDesc{string, required field}, specifies how to extract raw values for the HistorySet. The user must use `raw\_values' for the full set of metric calculations to be dumped.
  \item \xmlNode{scale}, \xmlDesc{string, required field}, specifies the type of time scaling. The following are the options for scaling (specific definitions for each scaling type are provided in \ref{sec:dssdoc}):
\begin{itemize}
\item \textbf{DataSynthesis}, calculating the distortion for two data sets without applying other scaling ratios.
@@ -129,27 +117,13 @@ \subsubsection{Validation PostProcessors}
    \item \textbf{omega\_strain}, calculating the distortion for two data sets with scaling ratios for agents of change.
\item \textbf{identity}, calculating the distortion for two data sets with scaling ratios of 1.
\end{itemize}
- \item \xmlNode{scaleBeta}, \xmlDesc{float, required field}, specifies the parameter of interest scaling ratio between the feature and target.
- \item \xmlNode{scaleOmega}, \xmlDesc{float, required field}, specifies the agents of change scaling ratio between the feature and target.
-\end{itemize}
-
-The output \textbf{DataObjects} has required and optional components to provide the user the flexibility to obtain desired postprocessed data. The following are information about DSS output \textbf{DataObjects}:
-\begin{itemize}
- \item \xmlNode{Output}, \xmlDesc{string, required field}, specifies the string of postprocessed results to output. The following is the list of DSS output names:
- \begin{itemize}
- \item \textbf{pivot\_parameter}, provides the pivot parameter used to postprocess feature and target input data.
- \item \textbf{total\_distance\_targetName\_featureName}, provides the total metric distance of the whole time interval. `targetName' and `featureName' are the string names of the input target and feature.
- \item \textbf{feature\_beta\_targetName\_featureName}, provides the normalized feature data provided from \textbf{DataObjects} input. `targetName' and `featureName' are the string names of the input target and feature.
- \item \textbf{target\_beta\_targetName\_featureName}, provides the normalized target data provided from \textbf{DataObjects} input. `targetName' and `featureName' are the string names of the input target and feature.
- \item \textbf{feature\_omega\_targetName\_featureName}, provides the normalized feature first order derivative data. `targetName' and `featureName' are the string names of the input target and feature.
- \item \textbf{target\_omega\_targetName\_featureName}, provides the normalized target first order derivative data. `targetName' and `featureName' are the string names of the input target and feature.
- \item \textbf{feature\_D\_targetName\_featureName}, provides the feature temporal displacement rate (second order term) data. `targetName' and `featureName' are the string names of the input target and feature.
- \item \textbf{target\_D\_targetName\_featureName}, provides the target temporal displacement rate (second order term) data. `targetName' and `featureName' are the string names of the input target and feature.
- \item \textbf{process\_time\_targetName\_featureName}, provides the shared process time data. `targetName' and `featureName' are the string names of the input target and feature.
- \item \textbf{standard\_error\_targetName\_featureName}, provides the standard error of the overall transient data. `targetName' and `featureName' are the string names of the input target and feature.
- \end{itemize}
+  \item \xmlNode{scaleBeta}, \xmlDesc{float or comma separated list of floats, required field}, specifies the parameter of interest scaling ratio between the feature and target.
+    To provide more than one scaling factor, separate the numbers with commas. Providing more than one scaling factor presumes that more than one parameter is to be post-processed.
+    If so, \xmlNode{prototypeOutputs}, \xmlNode{targetOutputs}, and \xmlNode{scaleOmega} must provide the same number of entries (see the sketch after this list).
+  \item \xmlNode{scaleOmega}, \xmlDesc{float or comma separated list of floats, required field}, specifies the agents of change scaling ratio between the feature and target.
+    To provide more than one scaling factor, separate the numbers with commas. Providing more than one scaling factor presumes that more than one parameter is to be post-processed.
+    If so, \xmlNode{prototypeOutputs}, \xmlNode{targetOutputs}, and \xmlNode{scaleBeta} must provide the same number of entries.
\end{itemize}
-pivot parameter must be named `pivot\_parameter' and this array is assigned within the post-processor algorithm.
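+A minimal sketch of this consistency requirement (a hypothetical check, not the
+post-processor's actual implementation) is:
+\begin{lstlisting}[language=python]
+def check_ppdss_scaling(prototype_outputs, target_outputs, scale_beta, scale_omega):
+  # one entry per post-processed parameter, so all four lists must match in length
+  n = len(prototype_outputs)
+  if not (len(target_outputs) == len(scale_beta) == len(scale_omega) == n):
+    raise IOError("prototypeOutputs, targetOutputs, scaleBeta and scaleOmega "
+                  "must all provide the same number of entries")
+\end{lstlisting}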
\textbf{Example:}
\begin{lstlisting}[style=XML,morekeywords={subType}]
@@ -161,123 +135,263 @@ \subsubsection{Validation PostProcessors}
...
...
-    <PostProcessor name="pp1" subType="PPDSS">
-      <Features>outMC1|x1</Features>
-      <Targets>outMC2|x2</Targets>
-      <Metric class="Metrics" type="DSS">dss</Metric>
-      <pivotParameterFeature>time1</pivotParameterFeature>
-      <pivotParameterTarget>time2</pivotParameterTarget>
-      <scale>DataSynthesis</scale>
-      <scaleBeta>1</scaleBeta>
-      <scaleOmega>1</scaleOmega>
-    </PostProcessor>
-    <PostProcessor name="pp2" subType="PPDSS">
-      <Features>outMC1|x1</Features>
-      <Targets>outMC2|x2</Targets>
+    <PostProcessor name="pp1" subType="PPDSS">
+      <prototypeOutputs>outMC1|x1,outMC1|y1</prototypeOutputs>
+      <targetOutputs>outMC2|x2,outMC2|y2</targetOutputs>
+      <Metric class="Metrics" type="DSS">dss</Metric>
+      <pivotParameterFeature>time1</pivotParameterFeature>
+      <pivotParameterTarget>time2</pivotParameterTarget>
-      <separateFeatureData>0.0|0.5</separateFeatureData>
-      <separateTargetData>0.0|0.5</separateTargetData>
-      <scale>DataSynthesis</scale>
-      <scaleBeta>1</scaleBeta>
-      <scaleOmega>1</scaleOmega>
-    </PostProcessor>
-    <PostProcessor name="pp3" subType="PPDSS">
-      <Features>outMC1|x1</Features>
-      <Targets>outMC2|x2</Targets>
-      <Metric class="Metrics" type="DSS">dss</Metric>
-      <pivotParameterFeature>time1</pivotParameterFeature>
-      <pivotParameterTarget>time2</pivotParameterTarget>
-      <separateFeatureData type="raw_values">0.2475|0.495</separateFeatureData>
-      <separateTargetData type="raw_values">0.3475|0.695</separateTargetData>
-      <scale>DataSynthesis</scale>
-      <scaleBeta>1</scaleBeta>
-      <scaleOmega>1</scaleOmega>
+      <scale>DataSynthesis</scale>
+      <scaleBeta>1,1</scaleBeta>
+      <scaleOmega>1,1</scaleOmega>
+    </PostProcessor>
...
...
-    <DataObjects>
-      ...
-      <HistorySet name="...">
-        <options>
-          <pivotParameter>pivot_parameter</pivotParameter>
-        </options>
-      </HistorySet>
-      <HistorySet name="...">
-        <options>
-          <pivotParameter>pivot_parameter</pivotParameter>
-        </options>
-      </HistorySet>
-      <HistorySet name="...">
-        <options>
-          <pivotParameter>pivot_parameter</pivotParameter>
-        </options>
-      </HistorySet>
-      ...
-    </DataObjects>
-    ...
\end{lstlisting}
-\paragraph{PCM}
-\textbf{PCM} evaluates the uncertainty reduction fraction and obtain posterior distribution of Target
-when using Feature(s) to validate each Target via Physics-guided Coverage Mapping (PCM) method. There are
-three versions of PCM so far: `Static', `Snapshot', and `Tdep'. Static PCM is for static problem, and Snapshot PCM
-and Tdep PCM are for time-dependent problem.
+\paragraph{Representativity}
+The \textbf{Representativity} post-processor is one of three \textbf{Validation} post-processors; a common
+post-processor interface acts as a gate for applying these validation algorithms
+(i.e., Representativity, Physics-guided Coverage Mapping (PCM), and Dynamic System Scaling (DSS)).
+This post-processor is in charge of deploying a common infrastructure for \textbf{Validation} problems.
+%The usage of this post-processor is three fold. one, to quantitatively assess if a mock/prototype model/experiment
+%form a good representation of a target model. Two, if a set of experiments can represent a target model and can
+%claim a full coverage of the design space and scenarios, and three, if the available set of experiments are not
+%enough to declare coverage what are the remaining experiments required in order to achieve full coverage and
+%increase the representativity/bias factor.
+The representativity theory was first developed in the
+neutronics community \cite{Gandini, palmiotti1, palmiotti2} and was later extended to thermal hydraulics \cite{Epiney1, Epiney2}. So far, several algorithms are implemented within this post-processor:
%
-\ppType{PCM}{PhysicsGuidedCoverageMapping}
+\ppType{Representativity}{Representativity}
%
+\begin{itemize}
+  \item \xmlNode{prototypeOutputs}, \xmlDesc{comma separated string, required field}, specifies the names of the prototype outputs, which can be the measurables/observables of the mock model. The reader should be warned that this nomenclature differs from the machine learning nomenclature.
+
+  \item \xmlNode{targetOutputs}, \xmlDesc{comma separated string, required field}, contains a comma separated list of
+    target outputs. These are the figures of merit (FOMs) of the target model against which the mock model is being validated.
+
+ \item \xmlNode{prototypeParameters}, \xmlDesc{comma separated string, required field}, specifies the names of the parameters/inputs to the mock model.
+
+ \item \xmlNode{targetParameters}, \xmlDesc{comma separated string, required field}, contains a comma separated list of
+ target parameters/inputs.
+
+ \item \xmlNode{pivotParameter}, \xmlDesc{string, optional field}, ID of the temporal variable of the mock model. Default is ``time''.
+ \nb Used just in case the \xmlNode{pivotValue}-based operation is requested (i.e., time dependent validation).
+ \item \xmlNode{targetPivotParameter}, \xmlDesc{string, optional field}, ID of the temporal variable in the target model. Default is ``time''.
+ \nb Used just in case the \xmlNode{pivotValue}-based operation is requested (i.e., time dependent validation).
+\end{itemize}
+
+
+The \textbf{Representativity} post-processor can make use of the \textbf{Metric} system (see Chapter \ref{sec:Metrics}),
+in conjunction with the specific algorithm chosen from the list above,
+to report validation scores for both static and time-dependent data.
+Indeed, both \textbf{PointSet} and \textbf{HistorySet} can be accepted by this post-processor.
+If the name of a given variable to be compared is unique, it can be used directly; otherwise, the variable can be specified
+with the $DataObjectName|InputOrOutput|VariableName$ nomenclature.
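+As a minimal sketch (assuming the classical single-response formulation of the cited
+references, where $S_T$ and $S_M$ are the target and mock sensitivity vectors and $C$ is
+the parameter covariance matrix), a bias factor can be computed as:
+\begin{lstlisting}[language=python]
+import numpy as np
+
+def representativity_factor(S_T, S_M, C):
+  # r = (S_T^T C S_M) / sqrt((S_T^T C S_T) * (S_M^T C S_M))
+  num = S_T @ C @ S_M
+  den = np.sqrt((S_T @ C @ S_T) * (S_M @ C @ S_M))
+  return num / den
+\end{lstlisting}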
+
+The \xmlNode{Output} node of the \xmlNode{PointSet} used to collect the results of the
+\xmlNode{Representativity} post-processor accepts outputs such as:
+\begin{itemize}
+  \item BiasFactor\_Mock\{prototype output $var_i$ name\}\_Tar\{target output $var_j$ name\}:
+    the representativity (bias) factor of prototype output (measurable) $i$ with respect to target $j$,
+    assuming no measurement uncertainty in the mock model (e.g., BiasFactor\_MockF1\_TarFOM2).
+  \item ExactBiasFactor\_Mock\{prototype output $var_i$ name\}\_Tar\{target output $var_j$ name\}:
+    the representativity (bias) factor of prototype output (measurable) $i$ with respect to target $j$,
+    considering measurement uncertainty in the mock model (e.g., ExactBiasFactor\_MockF1\_TarFOM2).
+  \item CorrectedParameters\_\{parameter $var_i$ name\}:
+    the adjusted/corrected value of parameter $i$ resulting from the analysis (e.g., CorrectedParameters\_p1).
+  \item CorrectedTargets\_\{target $var_i$ name\}:
+    the adjusted/corrected value of target $i$ resulting from the analysis (e.g., CorrectedTargets\_FOM1).
+  \item VarianceInCorrectedParameter\_\{parameter $var_i$ name\}:
+    the variance (squared uncertainty) of corrected parameter $i$ (e.g., VarianceInCorrectedParameter\_p1).
+  \item CovarianceInCorrectedParameters\_\{parameter $var_i$ name\}\_\{parameter $var_j$ name\}:
+    the covariance between corrected parameters $i$ and $j$ (e.g., CovarianceInCorrectedParameters\_p1\_p2).
+  \item CorrectedVar\_Tar\{target $var_i$ name\}:
+    the variance of corrected target $i$ (e.g., CorrectedVar\_TarFOM1).
+  \item ExactCorrectedVar\_Tar\{target $var_i$ name\}:
+    the exact variance of corrected target $i$, considering measurement uncertainty in the mock model
+    (e.g., ExactCorrectedVar\_TarFOM1).
+  \item CorrectedCov\_Tar\{target $var_i$ name\}\_Tar\{target $var_j$ name\}:
+    the covariance between corrected targets $i$ and $j$ (e.g., CorrectedCov\_TarFOM1\_TarFOM2).
+  \item ExactCorrectedCov\_Tar\{target $var_i$ name\}\_Tar\{target $var_j$ name\}:
+    the exact covariance between corrected targets $i$ and $j$, considering measurement uncertainty
+    in the mock model (e.g., ExactCorrectedCov\_TarFOM1\_TarFOM2).
+\end{itemize}
+\nb All variable names preceded by `Exact' take into account the measurement uncertainties in the mock experiment.
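+A minimal sketch of how these output names are assembled (a hypothetical helper, following
+the patterns listed above) is:
+\begin{lstlisting}[language=python]
+def bias_factor_name(mock_var, target_var, exact=False):
+  # e.g. ("F1", "FOM2") -> "BiasFactor_MockF1_TarFOM2"
+  prefix = "ExactBiasFactor" if exact else "BiasFactor"
+  return "{}_Mock{}_Tar{}".format(prefix, mock_var, target_var)
+\end{lstlisting}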
+
+\textbf{Example: Representativity}
+\begin{lstlisting}[style=XML,morekeywords={subType}]
+<Simulation>
+  ...
+  <Steps>
+    <MultiRun name="mcRun">
+      <Input class="DataObjects" type="PointSet">inputPlaceHolder2</Input>
+      <Model class="Models" type="ExternalModel">linModel</Model>
+      <Sampler class="Samplers" type="MonteCarlo">MC_external</Sampler>
+      <Output class="DataObjects" type="PointSet">...</Output>
+    </MultiRun>
+    <PostProcess name="PP1">
+      <Input class="DataObjects" type="PointSet">outputDataMC1</Input>
+      <Input class="DataObjects" type="PointSet">outputDataMC2</Input>
+      <Model class="Models" type="PostProcessor">pp1</Model>
+      <Output class="DataObjects" type="PointSet">...</Output>
+    </PostProcess>
+  </Steps>
+  ...
+  <Models>
+    ...
+    <PostProcessor name="pp1" subType="Representativity">
+      <prototypeOutputs>outputDataMC1|F1, outputDataMC1|F2, outputDataMC1|F3</prototypeOutputs>
+      <targetOutputs>outputDataMC2|F1, outputDataMC2|F2, outputDataMC2|F3</targetOutputs>
+      <prototypeParameters>outputDataMC1|p1,outputDataMC1|p2</prototypeParameters>
+      <targetParameters>outputDataMC2|p1,outputDataMC2|p2</targetParameters>
+      <pivotParameter>outputDataMC1|time</pivotParameter>
+    </PostProcessor>
+    ...
+  </Models>
+  ...
+  <DataObjects>
+    <PointSet name="inputPlaceHolder2">
+      <Input>InputPlaceHolder</Input>
+    </PointSet>
+    ...
+  </DataObjects>
+  ...
+</Simulation>
+ \end{lstlisting}
+
+\paragraph{PCM}
+\textbf{PCM} evaluates the uncertainty reduction fraction and obtains the posterior distribution of a Target
+when using Feature(s) to validate each Target via the Physics-guided Coverage Mapping (PCM) method. There are
+three versions of PCM so far: `Static', `Snapshot', and `Tdep'. Static PCM is for static problems, while Snapshot PCM
+and Tdep PCM are for time-dependent problems.
+
\begin{itemize}
\item \xmlNode{pivotParameter}, \xmlDesc{string, optional field}, defaulted as `time', and required by Snapshot and Tdep PCM.
- \item \xmlNode{Features}, \xmlDesc{comma separated string, required field}, specifies the names of the features.
- \item \xmlNode{Targets}, \xmlDesc{comma separated string, required field}, contains a comma separated list of
- targets. \nb Each target will be validated using all features listed in xml node \xmlNode{Features}. The
- number of targets is not necessarily equal to the number of features.
+  \item \xmlNode{prototypeOutputs}, \xmlDesc{comma separated string, required field}, specifies the names of the prototype outputs.
+  \item \xmlNode{targetOutputs}, \xmlDesc{comma separated string, required field}, contains a comma separated list of
+    target outputs. \nb Each target will be validated using all prototype outputs listed in the XML node \xmlNode{prototypeOutputs}. The
+    number of target outputs is not necessarily equal to the number of prototype outputs.
\item \xmlNode{Measurements}, \xmlDesc{comma separated string, required field}, contains a comma separated list of
- measurements of the features. \nb Each measurement correspond to a feature listed in xml node \xmlNode{Features}. The
- number of measurements should be equal to the number of features and in the same order as the features listed in \xmlNode{Features}.
- \item \xmlNode{pcmType}, \xmlDesc{string, required field}, contains the string given by users to choose the version
+    measurements of the prototype outputs. \nb Each measurement corresponds to a prototype output listed in the XML node \xmlNode{prototypeOutputs}. The
+    number of measurements should be equal to the number of prototype outputs and in the same order as the prototype outputs listed in \xmlNode{prototypeOutputs}.
+ \item \xmlNode{pcmType}, \xmlDesc{string, required field}, contains the string given by users to choose the version
of PCM to be applied. \nb It has three options: `Static', `Snapshot', and `Tdep', corresponding to the three PCM versions.
- \item \xmlNode{ReconstructionError}, \xmlDesc{float, optional field}, contains the value given by users to determind the
+  \item \xmlNode{ReconstructionError}, \xmlDesc{float, optional field}, contains the value given by users to determine the
    reconstruction error corresponding to the rank of the time series data. The default value is 0.001 if not given.
-
\end{itemize}
-The output of Static PCM is comma separated list of strings in the format of ``pri\textunderscore post\textunderscore stdReduct\textunderscore [targetName]'',
-where [targetName] is the $VariableName$ specified in DataObject of \xmlNode{Targets}.
-The output of Snapshot PCM includes two comma separated lists ``time'' and ``snapshot\textunderscore pri\textunderscore post\textunderscore stdReduct'',
-which corresponding to the timesteps and uncertainty reduction fraction of the time-series Target data specified in DataObject of \xmlNode{Targets}.
-The output of Tdep PCM includes three comma separated lists ``time'', ``Tdep\textunderscore post\textunderscore mean'', and ``Error'',
-which corresponding to the timesteps, posterior mean, and error between posterior and prior Target data specified in DataObject of \xmlNode{Targets}.
-
+The output of Static PCM is a comma separated list of strings in the format of ``static\textunderscore pri\textunderscore post\textunderscore stdReduct\textunderscore [targetName]'',
+where [targetName] is the $VariableName$ specified in the DataObject of \xmlNode{targetOutputs}.
+The output of Snapshot PCM includes two comma separated lists, ``time'' and ``snapshot\textunderscore pri\textunderscore post\textunderscore stdReduct'',
+which correspond to the timesteps and the uncertainty reduction fraction of the time-series Target data specified in the DataObject of \xmlNode{targetOutputs}.
+The output of Tdep PCM includes three comma separated lists, ``time'', ``Tdep\textunderscore post\textunderscore mean'', and ``Error'',
+which correspond to the timesteps, the posterior mean, and the error between the posterior and prior Target data specified in the DataObject of \xmlNode{targetOutputs}.
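+As a minimal sketch, the Static PCM output names are assembled from the target variable
+names as follows (mirroring the naming convention above):
+\begin{lstlisting}[language=python]
+def static_pcm_output_names(target_outputs):
+  # e.g. "outputDataMC2|F2" -> "static_pri_post_stdReduct_F2"
+  return ["static_pri_post_stdReduct_" + t.split("|")[-1] for t in target_outputs]
+\end{lstlisting}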
\textbf{Example: Static PCM}
\begin{lstlisting}[style=XML,morekeywords={subType}]
-  <Models>
-    ...
-    <PostProcessor name="pp1" subType="PhysicsGuidedCoverageMapping">
-      <Features>outputDataMC1|F1,outputDataMC1|F2</Features>
-      <Targets>outputDataMC2|F2,outputDataMC2|F3,outputDataMC2|F4</Targets>
-      <Measurements>msrData|F1,msrData|F2</Measurements>
-      <pcmType>Static</pcmType>
-    </PostProcessor>
-    ...
-  </Models>
+  <Models>
+    ...
+    <PostProcessor name="pp1" subType="PhysicsGuidedCoverageMapping">
+      <prototypeOutputs>outputDataMC1|F1,outputDataMC1|F2</prototypeOutputs>
+      <targetOutputs>outputDataMC2|F2,outputDataMC2|F3,outputDataMC2|F4</targetOutputs>
+      <Measurements>msrData|F1,msrData|F2</Measurements>
+      <pcmType>Static</pcmType>
+    </PostProcessor>
+    ...
+  </Models>
\end{lstlisting}
\textbf{Example: Snapshot PCM}
@@ -288,8 +402,8 @@ \subsubsection{Validation PostProcessors}
...
      <pivotParameter>time</pivotParameter>
-      <Features>exp|TempC</Features>
-      <Targets>app|TempD</Targets>
+      <prototypeOutputs>exp|TempC</prototypeOutputs>
+      <targetOutputs>app|TempD</targetOutputs>
       <Measurements>msr|TempMsrC</Measurements>
       <pcmType>Snapshot</pcmType>
@@ -307,8 +421,8 @@ \subsubsection{Validation PostProcessors}
...
      <pivotParameter>time</pivotParameter>
-      <Features>exp|TempC</Features>
-      <Targets>app|TempD</Targets>
+      <prototypeOutputs>exp|TempC</prototypeOutputs>
+      <targetOutputs>app|TempD</targetOutputs>
       <Measurements>msr|TempMsrC</Measurements>
       <pcmType>Tdep</pcmType>
       <ReconstructionError>0.001</ReconstructionError>
diff --git a/doc/user_manual/raven_user_manual.bib b/doc/user_manual/raven_user_manual.bib
index 0766c78752..137aa3ef1a 100644
--- a/doc/user_manual/raven_user_manual.bib
+++ b/doc/user_manual/raven_user_manual.bib
@@ -112,3 +112,50 @@ @TechReport{RAVENtheoryManual
year = {2016},
key = {INL/EXT-16-38178}
}
+
+@incollection{Gandini,
+  title={Uncertainty analysis and experimental data transposition methods based on perturbation theory},
+  author={Gandini, A},
+  booktitle={Uncertainty Analysis},
+  pages={217--258},
+  year={1988},
+  publisher={CRC Press, Boca Raton, Fla, USA}
+}
+
+@article{palmiotti1,
+ title={A global approach to the physics validation of simulation codes for future nuclear systems},
+  author={Palmiotti, Giuseppe and Salvatores, Massimo and Aliberti, Gerardo and Hiruta, Hikaru and McKnight, R and Oblozinsky, P and Yang, WS},
+ journal={Annals of Nuclear Energy},
+ volume={36},
+ number={3},
+ pages={355--361},
+ year={2009},
+ publisher={Elsevier}
+}
+
+@article{palmiotti2,
+ title={The role of experiments and of sensitivity analysis in simulation validation strategies with emphasis on reactor physics},
+ author={Palmiotti, Giuseppe and Salvatores, Massimo},
+ journal={Annals of Nuclear Energy},
+ volume={52},
+ pages={10--21},
+ year={2013},
+ publisher={Elsevier}
+}
+
+@inproceedings{Epiney1,
+  title={A Systematic Approach to Inform Experiment Design Through Modern Modeling and Simulation Methods},
+  author={Epiney, A and Rabiti, C and Davis, C},
+  booktitle={Proc. 18th Int. Topl. Mtg. on Nuclear Reactor Thermal Hydraulics (NURETH-18)},
+  year={2019}
+}
+
+@inproceedings{Epiney2,
+ title={Representativity Analysis Applied to TREAT Water Loop LOCA Experiment Design},
+ author={Epiney, Aaron S and Woolstenhulme, Nicolas},
+ booktitle={International Conference on Nuclear Engineering},
+ volume={83785},
+ pages={V003T13A055},
+ year={2020},
+ organization={American Society of Mechanical Engineers}
+}
diff --git a/ravenframework/Models/PostProcessors/BasicStatistics.py b/ravenframework/Models/PostProcessors/BasicStatistics.py
index aeceb30297..2c0451d18d 100644
--- a/ravenframework/Models/PostProcessors/BasicStatistics.py
+++ b/ravenframework/Models/PostProcessors/BasicStatistics.py
@@ -27,13 +27,13 @@
#External Modules End-----------------------------------------------------------
#Internal Modules---------------------------------------------------------------
-from .PostProcessorInterface import PostProcessorInterface
+from .PostProcessorReadyInterface import PostProcessorReadyInterface
from ...utils import utils
from ...utils import InputData, InputTypes
from ...utils import mathUtils
#Internal Modules End-----------------------------------------------------------
-class BasicStatistics(PostProcessorInterface):
+class BasicStatistics(PostProcessorReadyInterface):
"""
BasicStatistics filter class. It computes all the most popular statistics
"""
@@ -163,104 +163,47 @@ def __init__(self):
self.sampleSize = None # number of sample size
self.calculations = {}
self.validDataType = ['PointSet', 'HistorySet', 'DataSet'] # The list of accepted types of DataObject
+ self.inputDataObjectName = None # name for input data object
+ self.setInputDataType('xrDataset')
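+    # with 'xrDataset', run()/inputToInternal receive the input shaped as (sketch):
+    #   {'Data': [(inpVars, outVars, xrDataset), ...]}
+    # where xrDataset is the xarray.Dataset view of the input DataObject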
- def inputToInternal(self, currentInp):
+ def inputToInternal(self, inputIn):
"""
- Method to convert an input object into the internal format that is
+      Method to select the corresponding data from Data Objects and normalize the
+      ProbabilityWeight of the selected data
-      understandable by this pp.
- @ In, currentInp, object, an object that needs to be converted
+ @ In, inputIn, dict, a dictionary that contains the input Data Object information
@ Out, (inputDataset, pbWeights), tuple, the dataset of inputs and the corresponding variable probability weight
"""
- # The BasicStatistics postprocessor only accept DataObjects
- if self.dynamic is None:
- self.dynamic = False
- currentInput = currentInp [-1] if type(currentInp) == list else currentInp
- if len(currentInput) == 0:
- self.raiseAnError(IOError, "In post-processor " +self.name+" the input "+currentInput.name+" is empty.")
-
+ inpVars, outVars, dataSet = inputIn['Data'][0]
pbWeights = None
- if type(currentInput).__name__ == 'tuple':
- # if tuple we check that we already have a dataset
- # and store the probability weights
- if len(currentInput) != 2:
- self.raiseAnError(RuntimeError, "If tuple is sent in, the dataset and the pb weights must be sent in!")
- if type(currentInput[0]).__name__ != 'Dataset' or (currentInput[1] is not None and type(currentInput[1]).__name__ != 'Dataset'):
- self.raiseAnError(RuntimeError, "If tuple is sent in, the elements must be Dataset!")
- if currentInput[1] is not None and 'ProbabilityWeight' in currentInput[1]:
- self.realizationWeight = xr.Dataset()
- self.realizationWeight['ProbabilityWeight'] = currentInput[1]['ProbabilityWeight']
- return currentInput
- # TODO: convert dict to dataset, I think this will be removed when DataSet is used by other entities that
- # are currently using this Basic Statisitics PostProcessor.
- if type(currentInput).__name__ == 'dict':
- if 'targets' not in currentInput.keys():
- self.raiseAnError(IOError, 'Did not find targets in the input dictionary')
- inputDataset = xr.Dataset()
- for var, val in currentInput['targets'].items():
- inputDataset[var] = val
- if 'metadata' in currentInput.keys():
- metadata = currentInput['metadata']
- self.pbPresent = True if 'ProbabilityWeight' in metadata else False
- if self.pbPresent:
- pbWeights = xr.Dataset()
- pbWeights['ProbabilityWeight'] = metadata['ProbabilityWeight']/metadata['ProbabilityWeight'].sum()
- self.realizationWeight = xr.Dataset()
- self.realizationWeight['ProbabilityWeight'] = pbWeights['ProbabilityWeight']
- for target in self.parameters['targets']:
- pbName = 'ProbabilityWeight-' + target
- if pbName in metadata:
- pbWeights[target] = metadata[pbName]/metadata[pbName].sum()
- elif self.pbPresent:
- pbWeights[target] = self.realizationWeight['ProbabilityWeight']
- else:
- self.raiseAWarning('BasicStatistics postprocessor did not detect ProbabilityWeights! Assuming unit weights instead...')
- else:
- self.raiseAWarning('BasicStatistics postprocessor did not detect ProbabilityWeights! Assuming unit weights instead...')
- if 'RAVEN_sample_ID' not in inputDataset.sizes.keys():
- self.raiseAWarning('BasicStatisitics postprocessor did not detect RAVEN_sample_ID! Assuming the first dimension of given data...')
- self.sampleTag = utils.first(inputDataset.sizes.keys())
- return inputDataset, pbWeights
-
- if currentInput.type not in ['PointSet','HistorySet']:
- self.raiseAnError(IOError, self, 'BasicStatistics postprocessor accepts PointSet and HistorySet only! Got ' + currentInput.type)
-
- # extract all required data from input DataObjects, an input dataset is constructed
- dataSet = currentInput.asDataset()
try:
inputDataset = dataSet[self.parameters['targets']]
except KeyError:
missing = [var for var in self.parameters['targets'] if var not in dataSet]
- self.raiseAnError(KeyError, "Variables: '{}' missing from dataset '{}'!".format(", ".join(missing),currentInput.name))
- self.sampleTag = currentInput.sampleTag
+ self.raiseAnError(KeyError, "Variables: '{}' missing from dataset '{}'!".format(", ".join(missing),self.inputDataObjectName))
+ self.sampleTag = 'RAVEN_sample_ID'
- if currentInput.type == 'HistorySet':
+ if self.dynamic:
dims = inputDataset.sizes.keys()
if self.pivotParameter is None:
- if len(dims) > 1:
- self.raiseAnError(IOError, self, 'Time-dependent statistics is requested (HistorySet) but no pivotParameter \
- got inputted!')
+ self.raiseAnError(IOError, self, 'Time-dependent statistics is requested (HistorySet) but no pivotParameter \
+ got inputted!')
elif self.pivotParameter not in dims:
self.raiseAnError(IOError, self, 'Pivot parameter', self.pivotParameter, 'is not the associated index for \
requested variables', ','.join(self.parameters['targets']))
- else:
- self.dynamic = True
- if not currentInput.checkIndexAlignment(indexesToCheck=self.pivotParameter):
- self.raiseAnError(IOError, "The data provided by the data objects", currentInput.name, "is not synchronized!")
- self.pivotValue = inputDataset[self.pivotParameter].values
- if self.pivotValue.size != len(inputDataset.groupby(self.pivotParameter)):
- msg = "Duplicated values were identified in pivot parameter, please use the 'HistorySetSync'" + \
- " PostProcessor to syncronize your data before running 'BasicStatistics' PostProcessor."
- self.raiseAnError(IOError, msg)
+ self.pivotValue = dataSet[self.pivotParameter].values
+ if self.pivotValue.size != len(dataSet.groupby(self.pivotParameter)):
+ msg = "Duplicated values were identified in pivot parameter, please use the 'HistorySetSync'" + \
+ " PostProcessor to syncronize your data before running 'BasicStatistics' PostProcessor."
+ self.raiseAnError(IOError, msg)
# extract all required meta data
- metaVars = currentInput.getVars('meta')
- self.pbPresent = True if 'ProbabilityWeight' in metaVars else False
+ self.pbPresent = 'ProbabilityWeight' in dataSet
if self.pbPresent:
pbWeights = xr.Dataset()
self.realizationWeight = dataSet[['ProbabilityWeight']]/dataSet[['ProbabilityWeight']].sum()
pbWeights['ProbabilityWeight'] = self.realizationWeight['ProbabilityWeight']
for target in self.parameters['targets']:
pbName = 'ProbabilityWeight-' + target
- if pbName in metaVars:
+ if pbName in dataSet:
pbWeights[target] = dataSet[pbName]/dataSet[pbName].sum()
elif self.pbPresent:
pbWeights[target] = self.realizationWeight['ProbabilityWeight']
@@ -269,6 +212,18 @@ def inputToInternal(self, currentInp):
return inputDataset, pbWeights
+
+ def resetProbabilityWeight(self, pbWeights):
+ """
+ Reset probability weight using given pbWeights
+ @ In, pbWeights, xr.Dataset, dataset contains probability weights and
+ variable probability weight
+ @ Out, None
+ """
+ if 'ProbabilityWeight' in pbWeights:
+ self.realizationWeight = xr.Dataset()
+ self.realizationWeight['ProbabilityWeight'] = pbWeights['ProbabilityWeight']
+
def initialize(self, runInfo, inputs, initDict):
"""
Method to initialize the BasicStatistic pp. In here the working dir is
@@ -278,6 +233,9 @@ def initialize(self, runInfo, inputs, initDict):
@ In, initDict, dict, dictionary with initialization options
@ Out, None
"""
+ if len(inputs)>1:
+ self.raiseAnError(IOError, 'Post-Processor', self.name, 'accepts only one DataObject')
+ self.inputDataObjectName = inputs[-1].name
#construct a list of all the parameters that have requested values into self.allUsedParams
self.allUsedParams = set()
for metricName in self.scalarVals + self.vectorVals:
@@ -295,6 +253,8 @@ def initialize(self, runInfo, inputs, initDict):
inputObj = inputs[-1] if type(inputs) == list else inputs
if inputObj.type == 'HistorySet':
self.dynamic = True
+ if not inputObj.checkIndexAlignment(indexesToCheck=self.pivotParameter):
+ self.raiseAnError(IOError, "The data provided by the input data object is not synchronized!")
inputMetaKeys = []
outputMetaKeys = []
for metric, infos in self.toDo.items():
@@ -1558,6 +1518,21 @@ def spearmanCorrelation(self, featVars, targVars, featSamples, targSamples, pbWe
da = xr.DataArray(spearmanMat, dims=('targets','features'), coords={'targets':targVars,'features':featVars})
return da
+ def _runLegacy(self, inputIn):
+ """
+ This method executes the postprocessor action with the old data format. In this case, it computes all the requested statistical FOMs
+ @ In, inputIn, object, object contained the data to process. (inputToInternal output)
+ @ Out, outputSet, xarray.Dataset or dictionary, dataset or dictionary containing the results
+ """
+ if type(inputIn).__name__ == 'PointSet':
+ merged = inputIn.asDataset()
+ elif 'metadata' in inputIn:
+ merged = xr.merge([inputIn['metadata'],inputIn['targets']])
+ else:
+ merged = xr.merge([inputIn['targets']])
+ newInputIn = {'Data':[[None,None,merged]]}
+ return self.run(newInputIn)
+
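+  # Example (sketch): legacy callers such as LimitSurfaceIntegral and SafestPoint pass
+  # a dict of xarray.DataArrays, e.g.
+  #   stat._runLegacy({'targets': {'goal': xr.DataArray(vals, dims='RAVEN_sample_ID')}})
+  # and _runLegacy merges it into a single Dataset forwarded to run() as
+  #   {'Data': [[None, None, merged]]}.
+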
def run(self, inputIn):
"""
This method executes the postprocessor action. In this case, it computes all the requested statistical FOMs
diff --git a/ravenframework/Models/PostProcessors/Factory.py b/ravenframework/Models/PostProcessors/Factory.py
index eb3a991d4a..f3498d9670 100644
--- a/ravenframework/Models/PostProcessors/Factory.py
+++ b/ravenframework/Models/PostProcessors/Factory.py
@@ -38,6 +38,7 @@
from .EconomicRatio import EconomicRatio
from .ValidationBase import ValidationBase
from .Validations import Probabilistic
+from .Validations import Representativity
from .Validations import PPDSS
from .Validations import PhysicsGuidedCoverageMapping
from .TSACharacterizer import TSACharacterizer
diff --git a/ravenframework/Models/PostProcessors/LimitSurfaceIntegral.py b/ravenframework/Models/PostProcessors/LimitSurfaceIntegral.py
index 0eb2e8402a..e2b7ed7389 100644
--- a/ravenframework/Models/PostProcessors/LimitSurfaceIntegral.py
+++ b/ravenframework/Models/PostProcessors/LimitSurfaceIntegral.py
@@ -255,9 +255,9 @@ def run(self, input):
else:
randomMatrix[:, index] = self.variableDist[varName].ppf(randomMatrix[:, index]) # previously used np.vectorize in the calculation, but this is faster with scipy distributions
tempDict[varName] = randomMatrix[:, index]
- pb = self.stat.run({'targets':{self.target:xarray.DataArray(self.functionS.evaluate(tempDict)[self.target])}})[self.computationPrefix +"_"+self.target]
+ pb = self.stat._runLegacy({'targets':{self.target:xarray.DataArray(self.functionS.evaluate(tempDict)[self.target], dims=self.sampleTag)}})[self.computationPrefix +"_"+self.target]
if self.errorModel:
- boundError = abs(pb-self.stat.run({'targets':{self.target:xarray.DataArray(self.errorModel.evaluate(tempDict)[self.target])}})[self.computationPrefix +"_"+self.target])
+ boundError = abs(pb-self.stat._runLegacy({'targets':{self.target:xarray.DataArray(self.errorModel.evaluate(tempDict)[self.target], dims=self.sampleTag)}})[self.computationPrefix +"_"+self.target])
else:
self.raiseAnError(NotImplemented, "quadrature not yet implemented")
return pb, boundError
diff --git a/ravenframework/Models/PostProcessors/PostProcessorInterface.py b/ravenframework/Models/PostProcessors/PostProcessorInterface.py
index ed21f8cd3f..bc6574bb78 100644
--- a/ravenframework/Models/PostProcessors/PostProcessorInterface.py
+++ b/ravenframework/Models/PostProcessors/PostProcessorInterface.py
@@ -67,6 +67,7 @@ def __init__(self):
## One possible solution is all postpocessors return a list of realizations, and we only
## use addRealization method to add the collections into the DataObjects
self.outputMultipleRealizations = False
+ self.sampleTag = 'RAVEN_sample_ID' # raven sample tag used to store data
def _handleInput(self, paramInput):
"""
diff --git a/ravenframework/Models/PostProcessors/SafestPoint.py b/ravenframework/Models/PostProcessors/SafestPoint.py
index 13ba857e2f..45fdb471c4 100644
--- a/ravenframework/Models/PostProcessors/SafestPoint.py
+++ b/ravenframework/Models/PostProcessors/SafestPoint.py
@@ -334,8 +334,8 @@ def run(self, input):
rlz[self.outputName][ncLine] = np.prod(probList)
rlz['ProbabilityWeight'][ncLine] = np.prod(probList)
metadata = {'ProbabilityWeight':xarray.DataArray(rlz['ProbabilityWeight'])}
- targets = {tar:xarray.DataArray( rlz[tar]) for tar in self.controllableOrd}
- rlz['ExpectedSafestPointCoordinates'] = self.stat.run({'metadata':metadata, 'targets':targets})
+ targets = {tar:xarray.DataArray( rlz[tar], dims=self.sampleTag) for tar in self.controllableOrd}
+ rlz['ExpectedSafestPointCoordinates'] = self.stat._runLegacy({'metadata':metadata, 'targets':targets})
self.raiseADebug(rlz['ExpectedSafestPointCoordinates'])
return rlz
diff --git a/ravenframework/Models/PostProcessors/SubdomainBasicStatistics.py b/ravenframework/Models/PostProcessors/SubdomainBasicStatistics.py
index c5c5d09aa2..cdb3f7ac40 100644
--- a/ravenframework/Models/PostProcessors/SubdomainBasicStatistics.py
+++ b/ravenframework/Models/PostProcessors/SubdomainBasicStatistics.py
@@ -21,13 +21,12 @@
#External Modules End-----------------------------------------------------------
#Internal Modules---------------------------------------------------------------
-from .PostProcessorInterface import PostProcessorInterface
+from .PostProcessorReadyInterface import PostProcessorReadyInterface
from .BasicStatistics import BasicStatistics
-from ...utils import utils
from ...utils import InputData, InputTypes
#Internal Modules End-----------------------------------------------------------
-class SubdomainBasicStatistics(PostProcessorInterface):
+class SubdomainBasicStatistics(PostProcessorReadyInterface):
"""
    Subdomain basic statistics class. It computes all statistics on subdomains
"""
@@ -76,6 +75,9 @@ def __init__(self):
self.validDataType = ['PointSet', 'HistorySet', 'DataSet']
self.outputMultipleRealizations = True
self.printTag = 'PostProcessor SUBDOMAIN STATISTICS'
+ self.inputDataObjectName = None # name for input data object
+ self.setInputDataType('xrDataset')
+ self.sampleTag = 'RAVEN_sample_ID'
def inputToInternal(self, currentInp):
"""
@@ -88,15 +90,12 @@ def inputToInternal(self, currentInp):
cellIDs = self.gridEntity.returnCellIdsWithCoordinates()
dimensionNames = self.gridEntity.returnParameter('dimensionNames')
self.dynamic = False
- currentInput = currentInp [-1] if type(currentInp) == list else currentInp
- if len(currentInput) == 0:
- self.raiseAnError(IOError, "In post-processor " +self.name+" the input "+currentInput.name+" is empty.")
- if currentInput.type not in ['PointSet','HistorySet']:
- self.raiseAnError(IOError, self, 'This Postprocessor accepts PointSet and HistorySet only! Got ' + currentInput.type)
# extract all required data from input DataObjects, an input dataset is constructed
- dataSet = currentInput.asDataset()
- processedDataSet, pbWeights = self.stat.inputToInternal(currentInput)
+ inpVars, outVars, dataSet = currentInp['Data'][0]
+ processedDataSet, pbWeights = self.stat.inputToInternal(currentInp)
+ self.sampleSize = dataSet.sizes[self.sampleTag]
+
for cellId, verteces in cellIDs.items():
# create masks
maskDataset = None
@@ -118,9 +117,9 @@ def inputToInternal(self, currentInp):
# check if at least sample is available (for scalar quantities) and at least 2 samples for derivative quantities
setWhat = set(self.stat.what)
minimumNumberOfSamples = 2 if len(setWhat.intersection(set(self.stat.vectorVals))) > 0 else 1
- if len(cellDataset[currentInput.sampleTag]) < minimumNumberOfSamples:
+ if self.sampleSize < minimumNumberOfSamples:
self.raiseAnError(RuntimeError,"Number of samples in cell "
- f"{cellId} < {minimumNumberOfSamples}. Found {len(cellDataset[currentInput.sampleTag])}"
+ f"{cellId} < {minimumNumberOfSamples}. Found {self.sampleSize}"
" samples within the cell. Please make the evaluation grid coarser or increase number of samples!")
# store datasets
@@ -172,7 +171,8 @@ def run(self, inputIn):
midPoint = self.gridEntity.returnCellsMidPoints(returnDict=True)
firstPass = True
for i, (cellId, data) in enumerate(inputData.items()):
- cellData = self.stat.inputToInternal(data)
+ cellData = data
+ self.stat.resetProbabilityWeight(data[1])
res = self.stat._runLocal(cellData)
for k in res:
if firstPass:
@@ -185,8 +185,9 @@ def run(self, inputIn):
results[k][i] = np.atleast_1d(midPoint[cellId][k])
firstPass = False
outputRealization['data'] = results
+ indexes = inputIn['Data'][0][-1].indexes
if self.stat.dynamic:
- dims = dict.fromkeys(results.keys(), inputIn[-1].indexes if type(inputIn) == list else inputIn.indexes)
+ dims = dict.fromkeys(results.keys(), indexes)
for k in list(midPoint.values())[0]:
dims[k] = []
outputRealization['dims'] = dims
diff --git a/ravenframework/Models/PostProcessors/ValidationBase.py b/ravenframework/Models/PostProcessors/ValidationBase.py
index 87948f36ce..121bd0251d 100644
--- a/ravenframework/Models/PostProcessors/ValidationBase.py
+++ b/ravenframework/Models/PostProcessors/ValidationBase.py
@@ -58,9 +58,9 @@ class cls.
specs.addSub(preProcessorInput)
pivotParameterInput = InputData.parameterInputFactory("pivotParameter", contentType=InputTypes.StringType)
specs.addSub(pivotParameterInput)
- featuresInput = InputData.parameterInputFactory("Features", contentType=InputTypes.StringListType)
+ featuresInput = InputData.parameterInputFactory("prototypeOutputs", contentType=InputTypes.StringListType)
specs.addSub(featuresInput)
- targetsInput = InputData.parameterInputFactory("Targets", contentType=InputTypes.StringListType)
+ targetsInput = InputData.parameterInputFactory("targetOutputs", contentType=InputTypes.StringListType)
specs.addSub(targetsInput)
metricInput = InputData.parameterInputFactory("Metric", contentType=InputTypes.StringType)
metricInput.addParam("class", InputTypes.StringType)
@@ -85,8 +85,8 @@ def __init__(self):
self.dataType = ['static', 'dynamic'] # the type of data can be passed in (static aka PointSet, dynamic aka HistorySet) (if both are present the validation algorithm can work for both data types)
self.acceptableMetrics = [] # if not populated all types of metrics are accepted, otherwise list the metrics (see Probablistic.py for an example)
- self.features = None # list of feature variables
- self.targets = None # list of target variables
+ self.prototypeOutputs = None # list of feature variables
+ self.targetOutputs = None # list of target variables
self.pivotValues = None # pivot values (present if dynamic == True)
self.addAssemblerObject('Metric', InputData.Quantity.zero_to_infinity)
@@ -126,14 +126,14 @@ def _handleInput(self, paramInput):
for child in paramInput.subparts:
if child.getName() == 'pivotParameter':
self.pivotParameter = child.value
- elif child.getName() == 'Features':
- self.features = child.value
- elif child.getName() == 'Targets':
- self.targets = child.value
+ elif child.getName() == 'prototypeOutputs':
+ self.prototypeOutputs = child.value
+ elif child.getName() == 'targetOutputs':
+ self.targetOutputs = child.value
if 'static' not in self.dataType and self.pivotParameter is None:
self.raiseAnError(IOError, "The validation algorithm '{}' is a dynamic model ONLY but no node has been inputted".format(self._type))
- if not self.features:
- self.raiseAnError(IOError, "XML node 'Features' is required but not provided")
+ if not self.prototypeOutputs:
+ self.raiseAnError(IOError, "XML node 'prototypeOutputs' is required but not provided")
def initialize(self, runInfo, inputs, initDict):
"""
@@ -152,20 +152,20 @@ def initialize(self, runInfo, inputs, initDict):
if len(inputs) > 1:
# if inputs > 1, check if the | is present to understand where to get the features and target
- notStandard = [k for k in self.features + self.targets if "|" not in k]
+ notStandard = [k for k in self.prototypeOutputs + self.targetOutputs if "|" not in k]
if notStandard:
self.raiseAnError(IOError, "# Input Datasets/DataObjects > 1! features and targets must use the syntax DataObjectName|feature to be usable! Not standard features are: {}!".format(",".join(notStandard)))
# now lets check that the variables are in the dataobjects
if isinstance(inputs[0], DataObjects.DataSet):
do = [inp.name for inp in inputs]
if len(inputs) > 1:
- allFound = [feat.split("|")[0].strip() in do for feat in self.features]
- allFound += [targ.split("|")[0].strip() in do for targ in self.targets]
+ allFound = [feat.split("|")[0].strip() in do for feat in self.prototypeOutputs]
+ allFound += [targ.split("|")[0].strip() in do for targ in self.targetOutputs]
if not all(allFound):
- self.raiseAnError(IOError, "Targets and Features are linked to DataObjects that have not been listed as inputs in the Step. Please check input!")
+ self.raiseAnError(IOError, "targetParameters and prototypeParameters are linked to DataObjects that have not been listed as inputs in the Step. Please check input!")
# check variables
for indx, dobj in enumerate(do):
- variables = [var.split("|")[-1].strip() for var in (self.features + self.targets) if dobj in var]
+ variables = [var.split("|")[-1].strip() for var in (self.prototypeOutputs + self.targetOutputs) if dobj in var]
if not utils.isASubset(variables,inputs[indx].getVars()):
self.raiseAnError(IOError, "The variables '{}' not found in input DataObjet '{}'!".format(",".join(list(set(list(inputs[indx].getVars())) - set(variables))), dobj))
@@ -186,10 +186,12 @@ def _getDataFromDataDict(self, datasets, var, names=None):
"""
pw = None
if "|" in var and names is not None:
- do, feat = var.split("|")
+ info = var.split("|")
+ do = info[0]
+ feat = info[-1]
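+      # e.g. "outMC1|Output|x1" -> do="outMC1", feat="x1"; the two-term form "outMC1|x1" also works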
dat = datasets[do][feat]
else:
- for doIndex, ds in enumerate(datasets):
+ for _, ds in enumerate(datasets):
if var in ds:
dat = ds[var]
break
diff --git a/ravenframework/Models/PostProcessors/Validations/PPDSS.py b/ravenframework/Models/PostProcessors/Validations/PPDSS.py
index e5c395c5af..5639b9d084 100644
--- a/ravenframework/Models/PostProcessors/Validations/PPDSS.py
+++ b/ravenframework/Models/PostProcessors/Validations/PPDSS.py
@@ -96,8 +96,8 @@ def __init__(self):
self.name = 'PPDSS' # Postprocessor name
self.dynamic = True # Must be time-dependent?
self.dynamicType = ['dynamic'] # Specification of dynamic type
- self.features = None # list of feature variables
- self.targets = None # list of target variables
+ self.prototypeOutputs = None # list of feature variables
+ self.targetOutputs = None # list of target variables
self.multiOutput = 'raw_values' # defines aggregating of multiple outputs for HistorySet
# currently allow raw_values
self.pivotParameterFeature = None # Feature pivot parameter variable
@@ -124,10 +124,10 @@ def _handleInput(self, paramInput):
if child.getName() == 'Metric':
if 'type' not in child.parameterValues.keys() or 'class' not in child.parameterValues.keys():
self.raiseAnError(IOError, 'Tag Metric must have attributes "class" and "type"')
- elif child.getName() == 'Features':
- self.features = child.value
- elif child.getName() == 'Targets':
- self.targets = child.value
+ elif child.getName() == 'prototypeOutputs':
+ self.prototypeOutputs = child.value
+ elif child.getName() == 'targetOutputs':
+ self.targetOutputs = child.value
elif child.getName() == 'multiOutput':
self.multiOutput = child.value
elif child.getName() == 'pivotParameterFeature':
@@ -192,10 +192,10 @@ def _evaluate(self, datasets, **kwargs):
"""
realizations = []
realizationArray = []
- if len(self.features) > 1 or len(self.targets) > 1:
+ if len(self.prototypeOutputs) > 1 or len(self.targetOutputs) > 1:
self.raiseAnError(IOError, "The number of inputs for features or targets is greater than 1. Please restrict to one set per step.")
- feat = self.features[0]
- targ = self.targets[0]
+ feat = self.prototypeOutputs[0]
+ targ = self.targetOutputs[0]
scaleRatioBeta = self.scaleRatioBeta
scaleRatioOmega = self.scaleRatioOmega
nameFeat = feat.split("|")
diff --git a/ravenframework/Models/PostProcessors/Validations/PhysicsGuidedCoverageMapping.py b/ravenframework/Models/PostProcessors/Validations/PhysicsGuidedCoverageMapping.py
index 639c03939c..113540542c 100644
--- a/ravenframework/Models/PostProcessors/Validations/PhysicsGuidedCoverageMapping.py
+++ b/ravenframework/Models/PostProcessors/Validations/PhysicsGuidedCoverageMapping.py
@@ -92,7 +92,7 @@ def _handleInput(self, paramInput):
self.ReconstructionError = 0.001
# Number of Features responses must equal to number of Measurements responses
# Number of samples between Features and Measurements can be different
- if len(self.features) != len(self.measurements):
+ if len(self.prototypeOutputs) != len(self.measurements):
      self.raiseAnError(IOError, 'The number of variables found in XML node "prototypeOutputs" is not equal to the number of variables found in XML node "Measurements"')
def run(self, inputIn):
@@ -383,7 +383,7 @@ def pcmTdep(featData, msrData, targData, recError):
featPW = []
msrPW = []
- for feat, msr, targ in zip(self.features, self.measurements, self.targets):
+ for feat, msr, targ in zip(self.prototypeOutputs, self.measurements, self.targetOutputs):
featDataProb = self._getDataFromDataDict(datasets, feat, names)
msrDataProb = self._getDataFromDataDict(datasets, msr, names)
# read targets' data
@@ -410,7 +410,6 @@ def pcmTdep(featData, msrData, targData, recError):
pcmVersion = self.pcmType
recError = self.ReconstructionError #reconstruction error to determine the rank of time series data.
-
if pcmVersion == 'Tdep':
self.raiseAMessage('*** Running Tdep-PCM ***')
# Data of size (num_of_samples, num_of_features)
@@ -466,7 +465,7 @@ def pcmTdep(featData, msrData, targData, recError):
msrData = np.array(msrData).T
targData = np.array(targData).T
outputArray = PCM(featData, msrData, targData)
- for targ in self.targets:
+ for targ in self.targetOutputs:
name = "static_pri_post_stdReduct_" + targ.split('|')[-1]
outputDict[name] = np.asarray(outputArray)
diff --git a/ravenframework/Models/PostProcessors/Validations/Probabilistic.py b/ravenframework/Models/PostProcessors/Validations/Probabilistic.py
index 7f6bdc637e..a123a32968 100644
--- a/ravenframework/Models/PostProcessors/Validations/Probabilistic.py
+++ b/ravenframework/Models/PostProcessors/Validations/Probabilistic.py
@@ -15,10 +15,6 @@
Created on April 04, 2021
@author: alfoa
-
- This class represents a base class for the validation algorithms
- It inherits from the PostProcessor directly
- ##TODO: Recast it once the new PostProcesso API gets in place
"""
#External Modules------------------------------------------------------------------------------------
@@ -106,7 +102,7 @@ def _evaluate(self, datasets, **kwargs):
"""
names = kwargs.get('dataobjectNames')
outputDict = {}
- for feat, targ in zip(self.features, self.targets):
+ for feat, targ in zip(self.prototypeOutputs, self.targetOutputs):
featData = self._getDataFromDataDict(datasets, feat, names)
targData = self._getDataFromDataDict(datasets, targ, names)
for metric in self.metrics:
@@ -124,7 +120,7 @@ def _getDataFromDataDict(self, datasets, var, names=None):
"""
pw = None
if "|" in var and names is not None:
- do, feat = var.split("|")
+ do, _, feat = var.split("|")
dat = datasets[do][feat]
else:
for doIndex, ds in enumerate(datasets):
@@ -142,4 +138,4 @@ def _getDataFromDataDict(self, datasets, var, names=None):
# the following reshaping does not require a copy
dat.shape = (dat.shape[0], 1)
data = dat, pw
- return data
+ return data
\ No newline at end of file
diff --git a/ravenframework/Models/PostProcessors/Validations/Representativity.py b/ravenframework/Models/PostProcessors/Validations/Representativity.py
new file mode 100644
index 0000000000..293ea47d11
--- /dev/null
+++ b/ravenframework/Models/PostProcessors/Validations/Representativity.py
@@ -0,0 +1,466 @@
+# Copyright 2017 Battelle Energy Alliance, LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+ Created on April 04, 2021
+
+ @ Authors: Mohammad Abdo (@Jimmy-INL)
+ Congjian Wang (@wangcj05)
+ Andrea Alfonsi (@aalfonsi)
+"""
+
+#External Modules------------------------------------------------------------------------------------
+import numpy as np
+import xarray as xr
+import scipy as sp
+from scipy.linalg import sqrtm
+#External Modules End--------------------------------------------------------------------------------
+
+#Internal Modules------------------------------------------------------------------------------------
+from ravenframework.utils import InputData, InputTypes
+from .. import ValidationBase
+#Internal Modules End--------------------------------------------------------------------------------
+
+class Representativity(ValidationBase):
+ """
+    Representativity is a validation post-processor that computes the representativity
+    (bias) factors of a mock/prototype model with respect to a target model
+ """
+
+ @classmethod
+ def getInputSpecification(cls):
+ """
+ Method to get a reference to a class that specifies the input data for
+ class cls.
+ @ In, cls, the class for which we are retrieving the specification
+ @ Out, specs, InputData.ParameterInput, class to use for
+ specifying input of cls.
+ """
+ specs = super(Representativity, cls).getInputSpecification()
+ prototypeParameters = InputData.parameterInputFactory("prototypeParameters", contentType=InputTypes.StringListType,
+ descr=r"""mock model parameters/inputs""")
+ prototypeParameters.addParam("type", InputTypes.StringType)
+ specs.addSub(prototypeParameters)
+ targetParameters = InputData.parameterInputFactory("targetParameters", contentType=InputTypes.StringListType,
+ descr=r"""Target model parameters/inputs""")
+ specs.addSub(targetParameters)
+ targetPivotParameterInput = InputData.parameterInputFactory("targetPivotParameter", contentType=InputTypes.StringType,
+ descr=r"""ID of the temporal variable of the target model. Default is ``time''.
+ \nb Used just in case the \xmlNode{pivotValue}-based operation is requested (i.e., time dependent validation).""")
+ specs.addSub(targetPivotParameterInput)
+ return specs
+
+ def __init__(self):
+ """
+ Constructor
+ @ In, None
+ @ Out, None
+ """
+ super().__init__()
+ self.printTag = 'POSTPROCESSOR Representativity'
+ self.dynamicType = ['static'] # for now only static is available
+ self.name = 'Representativity'
+ self.stat = [None, None]
+ self.featureDataObject = None
+ self.targetDataObject = None
+ self.senPrefix = 'sen'
+
+ def getBasicStat(self):
+ """
+ Get Basic Statistic PostProcessor
+ @ In, None
+ @ Out, stat, object, Basic Statistic PostProcessor Object
+ """
+ from .. import factory as ppFactory # delay import to allow definition
+ stat = ppFactory.returnInstance('BasicStatistics')
+ stat.what = ['NormalizedSensitivities'] # normalized sensitivity calculation
+ return stat
+
+ def initialize(self, runInfo, inputs, initDict):
+ """
+ Method to initialize the Representativity post-processor.
+ @ In, runInfo, dict, dictionary of run info (e.g. working dir, etc)
+ @ In, inputs, list, list of inputs
+ @ In, initDict, dict, dictionary with initialization options
+ @ Out, None
+ """
+ super().initialize(runInfo, inputs, initDict)
+ if len(inputs) != 2:
+ self.raiseAnError(IOError, "PostProcessor", self.name, "can only accept two DataObjects, but got {}!".format(str(len(inputs))))
+ params = self.prototypeOutputs+self.targetOutputs+self.prototypeParameters+self.targetParameters
+ validParams = ["|" in x for x in params]
+ if not all(validParams):
+ notValid = list(np.asarray(params)[~np.asarray(validParams)])
+ self.raiseAnError(IOError, "'prototypeParameters', 'targetParameters', 'prototypeOutputs', and 'targetOutputs' should use 'DataObjectName|Input or Output|variable' format, but variables {} do not follow this rule.".format(','.join(notValid)))
+ # Assume variables are in the format of: DataObjectName|InputOrOutput|VariableName
+ names = set([x.split("|")[0] for x in self.prototypeOutputs] + [x.split("|")[0] for x in self.prototypeParameters])
+ if len(names) != 1:
+ self.raiseAnError(IOError, "'prototypeOutputs' and 'prototypeParameters' should come from the same DataObjects, but they present in differet DataObjects:{}".fortmat(','.join(names)))
+ featDataObject = list(names)[0]
+ names = set([x.split("|")[0] for x in self.targetOutputs] + [x.split("|")[0] for x in self.targetParameters])
+ if len(names) != 1:
+ self.raiseAnError(IOError, "'targetOutputs' and 'targetParameters' should come from the same DataObjects, but they present in differet DataObjects:{}".fortmat(','.join(names)))
+ targetDataObject = list(names)[0]
+ featVars = [x.split("|")[-1] for x in self.prototypeOutputs] + [x.split("|")[-1] for x in self.prototypeParameters]
+ targVars = [x.split("|")[-1] for x in self.targetOutputs] + [x.split("|")[-1] for x in self.targetParameters]
+
+ for i, inp in enumerate(inputs):
+ if inp.name == featDataObject:
+ self.featureDataObject = (inp, i)
+ else:
+ self.targetDataObject = (inp, i)
+
+ dataVars = self.featureDataObject[0].vars + self.featureDataObject[0].indexes
+ if not set(featVars).issubset(set(dataVars)):
+ missing = set(featVars) - set(dataVars)
+ self.raiseAnError(IOError, "Variables {} are missing from DataObject {}".format(','.join(missing), self.featureDataObject[0].name))
+ dataVars = self.targetDataObject[0].vars + self.targetDataObject[0].indexes
+ if not set(targVars).issubset(set(dataVars)):
+ missing = set(targVars) - set(dataVars)
+ self.raiseAnError(IOError, "Variables {} are missing from DataObject {}".format(','.join(missing), self.targetDataObject[0].name))
+
+ featStat = self.getBasicStat()
+ featStat.toDo = {'sensitivity':[{'targets':set([x.split("|")[-1] for x in self.prototypeOutputs]), 'features':set([x.split("|")[-1] for x in self.prototypeParameters]),'prefix':self.senPrefix}]}
+ featStat.initialize(runInfo, [self.featureDataObject[0]], initDict)
+ self.stat[self.featureDataObject[-1]] = featStat
+ targStat = self.getBasicStat()
+ targStat.toDo = {'sensitivity':[{'targets':set([x.split("|")[-1] for x in self.targetOutputs]), 'features':set([x.split("|")[-1] for x in self.targetParameters]),'prefix':self.senPrefix}]}
+ targStat.initialize(runInfo, [self.targetDataObject[0]], initDict)
+ self.stat[self.targetDataObject[-1]] = targStat
+
+
+ def _handleInput(self, paramInput):
+ """
+ Function to handle the parsed paramInput for this class.
+ @ In, paramInput, ParameterInput, the already parsed input.
+ @ Out, None
+ """
+ super()._handleInput(paramInput)
+ for child in paramInput.subparts:
+ if child.getName() == 'prototypeParameters':
+ self.prototypeParameters = child.value
+ elif child.getName() == 'targetParameters':
+ self.targetParameters = child.value
+ elif child.getName() == 'targetPivotParameter':
+ self.targetPivotParameter = child.value
+ _, notFound = paramInput.findNodesAndExtractValues(['prototypeParameters',
+ 'targetParameters'])
+ # notFound must be empty
+ assert(not notFound)
+
+ def run(self, inputIn):
+ """
+ This method executes the postprocessor action. In this case it computes representativity/bias factors, corrected data, etc.
+
+ @ In, inputIn, dict, dictionary of data to process
+ @ Out, evaluation, dict, dictionary containing the post-processed results
+ """
+ dataSets = [data for _, _, data in inputIn['Data']]
+ pivotParameter = self.pivotParameter
+ names=[]
+ if isinstance(inputIn['Data'][0][-1], xr.Dataset):
+ names = [self.getDataSetName(inp[-1]) for inp in inputIn['Data']]
+ if len(inputIn['Data'][0][-1].indexes) > 1 and self.pivotParameter is None:
+ if 'dynamic' not in self.dynamicType:
+ self.raiseAnError(IOError, "The validation algorithm '{}' is not a dynamic model, but time-dependent data has been provided in object {}".format(self._type, inputIn['Data'][0][-1].name))
+ else:
+ pivotParameter = self.pivotParameter
+ evaluation ={k: np.atleast_1d(val) for k, val in self._evaluate(dataSets, **{'dataobjectNames': names}).items()}
+
+ ## TODO: This is a placeholder to remember the time dependent case
+ # if pivotParameter:
+ # # Uncomment this to cause crash: print(dataSets[0], pivotParameter)
+ # if len(dataSets[0][pivotParameter]) != len(list(evaluation.values())[0]):
+ # self.raiseAnError(RuntimeError, "The pivotParameter value '{}' has size '{}' and validation output has size '{}'".format( len(dataSets[0][self.pivotParameter]), len(evaluation.values()[0])))
+ # if pivotParameter not in evaluation:
+ # evaluation[pivotParameter] = dataSets[0][pivotParameter]
+ return evaluation
+
+ def _evaluate(self, datasets, **kwargs):
+ """
+ Main method of the postprocessor: computes bias factors, corrected parameters/targets, and their uncertainties.
+ @ In, datasets, list, list of datasets (data1, data2, etc.) to be used
+ @ In, kwargs, dict, keyword arguments
+ @ Out, outs, dict, dictionary containing the representativity results (bias factors, corrected parameters/targets, and their (co)variances)
+ """
+ # # ## Analysis:
+ # # 1. Compute mean and variance:
+ # For mock model
+ datasets[0] = self._computeMoments(datasets[0], self.prototypeParameters, self.prototypeOutputs)
+ measurableNames = [s.split("|")[-1] for s in self.prototypeOutputs]
+ measurables = [datasets[0][var].meanValue for var in measurableNames]
+ # For target model
+ datasets[1] = self._computeMoments(datasets[1], self.targetParameters, self.targetOutputs)
+ FOMNames = [s.split("|")[-1] for s in self.targetOutputs]
+ FOMs = np.atleast_2d([datasets[1][var].meanValue for var in FOMNames]).reshape(-1,1)
+ # # 2. Propagate error from parameters to experiment and target outputs.
+ # For mock model
+ datasets[0] = self._computeErrors(datasets[0],self.prototypeParameters, self.prototypeOutputs)
+ measurableErrorNames = ['err_' + s.split("|")[-1] for s in self.prototypeOutputs]
+ FOMErrorNames = ['err_' + s.split("|")[-1] for s in self.targetOutputs]
+ datasets[0] = self._computeMoments(datasets[0], measurableErrorNames, measurableErrorNames)
+ UMeasurables = np.atleast_2d([datasets[0][var].meanValue for var in measurableErrorNames]).reshape(-1,1)
+ # For target model
+ datasets[1] = self._computeErrors(datasets[1],self.targetParameters, self.targetOutputs)
+ datasets[1] = self._computeMoments(datasets[1], FOMErrorNames, FOMErrorNames)
+ UFOMs = np.atleast_2d([datasets[1][var].meanValue for var in FOMErrorNames]).reshape(-1,1)
+ # # 3. Compute mean and variance in the error space:
+ datasets[0] = self._computeMoments(datasets[0],['err_' + s.split("|")[-1] for s in self.prototypeParameters],['err_' + s2.split("|")[-1] for s2 in self.prototypeOutputs])
+ datasets[1] = self._computeMoments(datasets[1],['err_' + s.split("|")[-1] for s in self.targetParameters],['err_' + s2.split("|")[-1] for s2 in self.targetOutputs])
+ # # 4. Compute Uncertainties in parameters
+ UparVar = self._computeUncertaintyMatrixInErrors(datasets[0],['err_' + s.split("|")[-1] for s in self.prototypeParameters])
+ if np.linalg.matrix_rank(UparVar) < np.shape(UparVar)[0]:
+ UparVar = UparVar + np.diag(np.ones(np.shape(UparVar)[0])*np.finfo(np.float32).eps)
+ # # 5. Compute Uncertainties in outputs
+ # Outputs of Mock model (Measurables F_i)
+ UMeasurablesVar = self._computeUncertaintyMatrixInErrors(datasets[0],['err_' + s.split("|")[-1] for s in self.prototypeOutputs])
+ UMeasurablesVar = np.diag(np.diag(UMeasurablesVar))
+ # Outputs of Target model (Targets FOM_i)
+ UFOMsVar = self._computeUncertaintyMatrixInErrors(datasets[1],['err_' + s.split("|")[-1] for s in self.targetOutputs])
+ # # 6. Compute Normalized Uncertainties
+ # In mock experiment outputs (measurables)
+ sens = self.stat[self.featureDataObject[-1]].run({"Data":[[None, None, datasets[self.featureDataObject[-1]]]]})
+ # normalize sensitivities
+ senMeasurables = self._generateSensitivityMatrix(self.prototypeOutputs, self.prototypeParameters, sens, datasets[0])
+ # In target outputs (FOMs)
+ sens = self.stat[self.targetDataObject[-1]].run({"Data":[[None, None, datasets[self.targetDataObject[-1]]]]})
+ # normalize sensitivities
+ senFOMs = self._generateSensitivityMatrix(self.targetOutputs, self.targetParameters, sens, datasets[1])
+ # # 7. Compute representativities
+ r,rExact = self._calculateBiasFactor(senMeasurables, senFOMs, UparVar, UMeasurablesVar)
+ # # 8. Compute corrected Uncertainties
+ UtarVarTilde = self._calculateCovofTargetErrorsfromBiasFactor(senFOMs,UparVar,r)
+ UtarVarTildeExact = self._calculateCovofTargetErrorsfromBiasFactor(senFOMs,UparVar,rExact)
+ # # 9. Compute corrected targets
+ # for var in self.targetOutputs:
+ # self._getDataFromDatasets(datasets, var, names=None)
+ parametersNames = [s.split("|")[-1] for s in self.prototypeParameters]
+ par = np.atleast_2d([datasets[0][var].meanValue for var in parametersNames]).reshape(-1,1)
+ correctedTargets, correctedTargetCovariance, correctedTargetErrorCov, UtarVarTilde_no_Umes_var, Inner1 = self._targetCorrection(FOMs, UparVar, UMeasurables, UMeasurablesVar, senFOMs, senMeasurables)
+ correctedParameters, correctedParametersCovariance = self._parameterCorrection(par, UparVar, UMeasurables, UMeasurablesVar, senMeasurables)
+
+ # # 10. Create outputs
+ """
+ Assuming the number of parameters is P,
+ the number of measurables in the mock/prototype experiment is M,
+ and the number of figures of merit (FOMs) is F, the representativity outcomes to be reported are:
+
+ BiasFactor: $R \in \mathbb{R}^{F \times M}$, reported element by element as BiasFactor_MockFi_TarFOMj
+ ExactBiasFactor: same as the bias factor, but assuming the measurables are also uncertain.
+ CorrectedParameters: best parameter values at which to perform the measurements, $parTilde \in \mathbb{R}^{P}$
+ UncertaintyinCorrectedParameters: $parTildeVar \in \mathbb{R}^{P \times P}$
+ CorrectedTargets: $tarTilde \in \mathbb{R}^{F}$
+ UncertaintyinCorrectedTargets: $tarTildeVar \in \mathbb{R}^{F \times F}$
+ ExactUncertaintyinCorrectedTargets: $tarTildeVarExact \in \mathbb{R}^{F \times F}$
+ """
+ outs = {}
+ for i,param in enumerate(self.prototypeParameters):
+ name4 = "CorrectedParameters_{}".format(param.split("|")[-1])
+ outs[name4] = correctedParameters[i]
+ for j, param2 in enumerate(self.prototypeParameters):
+ if param == param2:
+ name5 = "VarianceInCorrectedParameters_{}".format(param.split("|")[-1])
+ outs[name5] = correctedParametersCovariance[i,i]
+ else:
+ name6 = "CovarianceInCorrectedParameters_{}_{}".format(param.split("|")[-1],param2.split("|")[-1])
+ outs[name6] = correctedParametersCovariance[i,j]
+
+ for i,targ in enumerate(self.targetOutputs):
+ name3 = "CorrectedTargets_{}".format(targ.split("|")[-1])
+ outs[name3] = correctedTargets[i]
+ for j,feat in enumerate(self.prototypeOutputs):
+ name1 = "BiasFactor_Mock{}_Tar{}".format(feat.split("|")[-1], targ.split("|")[-1])
+ name2 = "ExactBiasFactor_Mock{}_Tar{}".format(feat.split("|")[-1], targ.split("|")[-1])
+ outs[name1] = r[i,j]
+ outs[name2] = rExact[i,j]
+ for k,tar in enumerate(self.targetOutputs):
+ if k == i:
+ name3 = "CorrectedVar_Tar{}".format(tar.split("|")[-1])
+ name4 = "ExactCorrectedVar_Tar{}".format(tar.split("|")[-1])
+ else:
+ name3 = "CorrectedCov_Tar{}_Tar{}".format(targ.split("|")[-1], tar.split("|")[-1])
+ name4 = "ExactCorrectedCov_Tar{}_Tar{}".format(targ.split("|")[-1], tar.split("|")[-1])
+ outs[name3] = UtarVarTilde[i,k]
+ outs[name4] = UtarVarTildeExact[i,k]
+ return outs
+
+ def _generateSensitivityMatrix(self, outputs, inputs, sensDict, datasets, normalize=True):
+ """
+ Reconstruct sensitivity matrix from the Basic Statistic calculation
+ @ In, outputs, list, list of output variables
+ @ In, inputs, list, list of input variables
+ @ In, sensDict, dict, dictionary containing the sensitivities
+ @ In, datasets, xarray.Dataset, dataset providing the mean values used for normalization
+ @ In, normalize, bool, optional, if True (default) scale each sensitivity by meanValue(input)/meanValue(output)
+ @ Out, sensMatr, numpy.array, 2-D array of the reconstructed sensitivity matrix
+ """
+ sensMatr = np.zeros((len(outputs), len(inputs)))
+ inputVars = [x.split("|")[-1] for x in inputs]
+ outputVars = [x.split("|")[-1] for x in outputs]
+ for i, outVar in enumerate(outputVars):
+ for j, inpVar in enumerate(inputVars):
+ senName = "{}_{}_{}".format(self.senPrefix, outVar, inpVar)
+ # Assume static data (PointSets are provided as input)
+ if not normalize:
+ sensMatr[i, j] = sensDict[senName][0]
+ else:
+ sensMatr[i, j] = sensDict[senName][0]* datasets[inpVar].meanValue / datasets[outVar].meanValue
+ return sensMatr
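+ # NOTE (illustrative, hypothetical values): BasicStatistics returns raw
+ # sensitivities dy/dx; with normalize=True each entry is rescaled into a
+ # relative (logarithmic) sensitivity,
+ #   S_ij = (dy_i/dx_j) * mean(x_j) / mean(y_i),
+ # e.g., dy/dx = 2, mean(x) = 5.5, mean(y) = -13 gives S = -0.846, i.e., a 1%
+ # increase in x changes y by about -0.85%.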
+
+ def _computeMoments(self, datasets, features, targets):
+ """
+ A utility function to compute moments, mean value, variance and covariance
+ @ In, datasets, xarray.Dataset, dataset containing either prototype (mock) data or target data
+ @ In, features, list, names of feature variables (measurables)
+ @ In, targets, list, names of target variables (figures of merit, FOMs)
+ @ Out, moments, xarray.Dataset, dataset with mean, variance, and covariance attached as variable attributes
+ """
+ moments = datasets.copy()
+ for var in [x.split("|")[-1] for x in features + targets]:
+ moments[var].attrs['meanValue'] = np.mean(datasets[var].values)
+ for var2 in [x.split("|")[-1] for x in features + targets]:
+ if var == var2:
+ moments[var2].attrs['var'] = np.var(datasets[var].values)
+ else:
+ moments[var2].attrs['cov_'+str(var)] = np.cov(datasets[var2].values,datasets[var].values)
+ return moments
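+ # NOTE (illustrative): the moments live in xarray variable attributes, so after
+ # ds = self._computeMoments(ds, features, targets) one can read, for a
+ # hypothetical variable 'F1':
+ #   ds['F1'].attrs['meanValue']   # sample mean of F1
+ #   ds['F1'].attrs['var']         # sample variance of F1
+ #   ds['F1'].attrs['cov_F2']      # 2x2 np.cov matrix of (F1, F2); entry [0,1] is the covariance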
+
+ def _computeErrors(self,datasets,features,targets):
+ """
+ A utility function to transform variables into their relative errors
+ @ In, datasets, xarray.Dataset, dataset containing either prototype (mock) data or target data
+ @ In, features, list, names of feature variables (measurables)
+ @ In, targets, list, names of target variables (figures of merit, FOMs)
+ @ Out, errors, xarray.Dataset, dataset augmented with the relative error of each variable
+ """
+ errors = datasets.copy()
+ for var in [x.split("|")[-1] for x in features + targets]:
+ errors['err_'+str(var)] = (datasets[var].values - datasets[var].attrs['meanValue'])/datasets[var].attrs['meanValue']
+ return errors
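+ # NOTE (illustrative, hypothetical values): each variable is mapped to its
+ # relative error, err_x = (x - mean(x)) / mean(x). For samples x = [9, 10, 11]
+ # with mean 10, err_x = [-0.1, 0.0, 0.1]; working in this error space makes
+ # uncertainties of quantities with different physical units directly comparable.
+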
+ def _computeUncertaintyMatrixInErrors(self, data, parameters):
+ """
+ A utility function to assemble the variance/covariance matrix of variables in the error space
+ @ In, data, xarray.Dataset, dataset containing either prototype (mock) data or target data
+ @ In, parameters, list, names of the parameters/inputs of the model
+ @ Out, uncertMatr, np.array, variance-covariance matrix of the errors
+ """
+ uncertMatr = np.zeros((len(parameters), len(parameters)))
+ for i, var1 in enumerate(parameters):
+ for j, var2 in enumerate(parameters):
+ if var1 == var2:
+ uncertMatr[i, j] = data[var1].attrs['var']
+ else:
+ uncertMatr[i, j] = data[var1].attrs['cov_'+var2][0,1]
+ return uncertMatr
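+ # NOTE (illustrative): for parameters ['err_p1', 'err_p2'] this assembles
+ #   [[ var(err_p1),          cov(err_p1, err_p2) ],
+ #    [ cov(err_p2, err_p1),  var(err_p2)         ]]
+ # from the attributes attached by _computeMoments; the [0,1] entry of each
+ # stored 2x2 np.cov matrix supplies the scalar covariance.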
+
+ def _calculateBiasFactor(self, normalizedSenExp, normalizedSenTar, UparVar, UmesVar=None):
+ """
+ A utility function to compute the bias factor (i.e., representativity factor)
+ @ In, normalizedSenExp, np.array, the normalized sensitivities of the mock/prototype measurables
+ @ In, normalizedSenTar, np.array, the normalized sensitivities of the target variables/Figures of merit (FOMs) with respect to the parameters
+ @ In, UparVar, np.array, variance covariance matrix of the parameters error
+ @ In, UmesVar, np.array, variance covariance matrix of the measurables error, default is None
+ @ Out, r, np.array, the representativity (bias factor) matrix neglecting uncertainties in measurables
+ @ Out, rExact, np.array, the representativity (bias factor) matrix considering uncertainties in measurables
+ """
+ if UmesVar is None:
+ UmesVar = np.zeros((len(normalizedSenExp), len(normalizedSenExp)))
+ # Compute representativity (#eq 79)
+ tol = 1e-6
+ r = (sp.linalg.pinv(sqrtm(normalizedSenTar @ UparVar @ normalizedSenTar.T),rtol=tol) @ (normalizedSenTar @ UparVar @ normalizedSenExp.T) @ sp.linalg.pinv(sqrtm(normalizedSenExp @ UparVar @ normalizedSenExp.T),rtol=tol)).real
+ rExact = (sp.linalg.pinv(sqrtm(normalizedSenTar @ UparVar @ normalizedSenTar.T),rtol=tol) @ (normalizedSenTar @ UparVar @ normalizedSenExp.T) @ sp.linalg.pinv(sqrtm(normalizedSenExp @ UparVar @ normalizedSenExp.T + UmesVar),rtol=tol)).real
+ return r, rExact
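+ # NOTE (illustrative): eq 79 in matrix form, with S_t / S_e the normalized
+ # target / experiment sensitivities and U the parameter covariance in the
+ # error space:
+ #   r = [S_t U S_t^T]^(-1/2) (S_t U S_e^T) [S_e U S_e^T]^(-1/2)
+ # In exact arithmetic, if the mock experiment reproduces the target physics
+ # (S_e == S_t) and UmesVar == 0, then r == I, i.e., a bias factor of one for
+ # each matched pair, which is what the "perfect match" tests aim at.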
+
+ def _calculateCovofTargetErrorsfromBiasFactor(self, normalizedSenTar, UparVar, r):
+ """
+ A utility function to compute the variance-covariance matrix of the target errors from the bias factors
+ @ In, normalizedSenTar, np.array, the normalized sensitivities of the targets
+ @ In, UparVar, np.array, the variance covariance matrix of the parameters in the error space
+ @ In, r, np.array, the bias factor matrix
+ @ Out, UtarVarTilde, np.array, the variance-covariance matrix of the error in the corrected targets
+ """
+ # re-compute Utar_var_tilde from r (#eq 80)
+ chol = sqrtm(normalizedSenTar @ UparVar @ normalizedSenTar.T).real
+ UtarVarTilde = chol @ (np.eye(np.shape(r)[0]) - r @ r.T) @ chol
+ return UtarVarTilde
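+ # NOTE (illustrative): writing C = sqrtm(S_t U S_t^T), eq 80 reads
+ #   UtarVarTilde = C (I - r r^T) C,
+ # so perfect representativity (r r^T == I) drives the corrected target
+ # covariance to zero, while r == 0 leaves the propagated prior S_t U S_t^T
+ # unchanged.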
+
+ def _parameterCorrection(self, par, UparVar, Umes, UmesVar, normalizedSen): #eq 48 and eq 67
+ """
+ A utility function that computes the correction in parameters
+ @ In, par, np.array, the parameters (inputs) of the mock experiment
+ @ In, UparVar, np.array, variance covariance matrix of the parameters in the error space
+ @ In, Umes, np.array, the error in measurements
+ @ In, UmesVar, np.array, variance covariance matrix of the measurables in the error space
+ @ In, normalizedSen, np.array, the normalized sensitivity matrix
+ @ Out, parTilde, np.array, the corrected parameters
+ @ Out, parTildeVar, np.array, the variance covariance matrix of the corrected parameters (uncertainty in the corrected parameters)
+ """
+ # Compute adjusted par #eq 48
+ UparTilde = UparVar @ normalizedSen.T @ np.linalg.pinv(normalizedSen @ UparVar @ normalizedSen.T + UmesVar) @ Umes
+
+ # back transform to parameters
+ parTilde = UparTilde * par + par
+
+ # Compute adjusted par_var #eq 67
+ UparVarTilde = UparVar - UparVar @ normalizedSen.T @ np.linalg.pinv(normalizedSen @ UparVar @ normalizedSen.T + UmesVar) @ normalizedSen @ UparVar
+
+ # back transform the variance
+ UparVarTildeDiag = np.diagonal(UparVarTilde)
+ for ind,c in enumerate(UparVarTildeDiag):
+ if c<0:
+ UparVarTilde[ind,ind] = 0
+ UparVarTildeDiag2 = np.sqrt(UparVarTildeDiag)
+ UparVarTildeDiag3 = UparVarTildeDiag2 * np.squeeze(par)
+ parVarTilde = np.square(UparVarTildeDiag3)
+ parVarTilde = np.diag(parVarTilde)
+ return parTilde, parVarTilde
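+ # NOTE (illustrative, hypothetical values): eq 48 is a GLS/Kalman-type update
+ # in the relative-error space. In one dimension with normalizedSen = s it reads
+ #   UparTilde = UparVar * s / (s^2 * UparVar + UmesVar) * Umes
+ # e.g., s = 1, UparVar = 0.04, UmesVar = 0.01, Umes = 0.05 gives
+ # UparTilde = 0.04, and the posterior variance (eq 67) shrinks from 0.04 to
+ # 0.04 - 0.04^2 / 0.05 = 0.008.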
+
+ def _targetCorrection(self, FOMs, UparVar, Umes, UmesVar, normalizedSenTar, normalizedSenExp):
+ """
+ A utility function to compute corrections in targets based on the representativity analysis
+ @ In, FOMs, np.array, target outputs (figures of merit)
+ @ In, UparVar, np.array, variance-covariance matrix of the parameters in the error space
+ @ In, Umes, np.array, the error in measurements
+ @ In, UmesVar, np.array, variance-covariance matrix of the measurables in the error space
+ @ In, normalizedSenTar, np.array, normalized sensitivities of the target outputs w.r.t. the parameters
+ @ In, normalizedSenExp, np.array, normalized sensitivities of the mock prototype/experiment outputs (measurements) w.r.t. the parameters
+ @ Out, tarTilde, np.array, corrected targets (FOMs)
+ @ Out, tarVarTilde, np.array, variance-covariance matrix of the corrected targets
+ @ Out, UtarVarTilde, np.array, variance-covariance matrix of the corrected targets in the error space
+ @ Out, UtarVartilde_no_UmesVar, np.array, variance-covariance matrix of the corrected targets in the error space, assuming no uncertainty in the measurables
+ @ Out, propagatedExpUncert, np.array, propagated variance-covariance matrix of the experiments due to parameter uncertainties
+ """
+ # Compute adjusted target #eq 71
+ UtarTilde = normalizedSenTar @ UparVar @ normalizedSenExp.T @ np.linalg.pinv(normalizedSenExp @ UparVar @ normalizedSenExp.T + UmesVar) @ Umes
+ # back transform to parameters
+ tarTilde = UtarTilde * FOMs + FOMs
+
+ # Compute adjusted par_var #eq 74
+ UtarVarTilde = normalizedSenTar @ UparVar @ normalizedSenTar.T - normalizedSenTar @ UparVar @ normalizedSenExp.T @ np.linalg.pinv(normalizedSenExp @ UparVar @ normalizedSenExp.T + UmesVar) @ normalizedSenExp @ UparVar @ normalizedSenTar.T
+
+ # back transform the variance
+ UtarVarTildeDiag = np.diagonal(UtarVarTilde)
+ for ind,c in enumerate(UtarVarTildeDiag):
+ if c<0:
+ UtarVarTilde[ind,ind] = 0
+ UtarVarTildeDiag2 = np.sqrt(UtarVarTildeDiag)
+ UtarVarTildeDiag3 = UtarVarTildeDiag2 * np.squeeze(FOMs)
+ tarVarTilde = np.square(UtarVarTildeDiag3)
+ tarVarTilde = np.diag(tarVarTilde)
+
+ # Compute adjusted par_var neglecting UmesVar (to compare to representativity)
+ # The representativity (eq 79) neglects UmesVar
+ propagatedExpUncert = (normalizedSenExp @ UparVar) @ normalizedSenExp.T
+ UtarVartilde_no_UmesVar = (normalizedSenTar @ UparVar @ normalizedSenTar.T)\
+ - (normalizedSenTar @ UparVar @ normalizedSenExp.T)\
+ @ np.linalg.pinv(normalizedSenExp @ UparVar @ normalizedSenExp.T)\
+ @ (normalizedSenExp @ UparVar @ normalizedSenTar.T)
+ return tarTilde, tarVarTilde, UtarVarTilde, UtarVartilde_no_UmesVar, propagatedExpUncert
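+
+ # NOTE (illustrative): eq 74 has the same structure as eq 67, projected onto
+ # the targets: the prior S_t U S_t^T minus the part explained by the
+ # measurements. UtarVartilde_no_UmesVar repeats the computation with
+ # UmesVar = 0; expanding eqs 79-80 shows it should agree with
+ # _calculateCovofTargetErrorsfromBiasFactor(normalizedSenTar, UparVar, r) up
+ # to the pseudo-inverse tolerance, which provides an internal consistency check.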
diff --git a/ravenframework/Models/PostProcessors/Validations/__init__.py b/ravenframework/Models/PostProcessors/Validations/__init__.py
index 88d55b68bf..3d1bc5c2ba 100644
--- a/ravenframework/Models/PostProcessors/Validations/__init__.py
+++ b/ravenframework/Models/PostProcessors/Validations/__init__.py
@@ -19,4 +19,5 @@
@author: wangc
"""
from .Probabilistic import Probabilistic
+from .Representativity import Representativity
from .PPDSS import PPDSS
diff --git a/ravenframework/Samplers/AdaptiveMonteCarlo.py b/ravenframework/Samplers/AdaptiveMonteCarlo.py
index b97996a5d1..f6fa973775 100644
--- a/ravenframework/Samplers/AdaptiveMonteCarlo.py
+++ b/ravenframework/Samplers/AdaptiveMonteCarlo.py
@@ -187,7 +187,7 @@ def localFinalizeActualSampling(self, jobObject, model, myInput):
@ Out, None
"""
if self.counter > 1:
- output = self.basicStatPP.run(self._targetEvaluation)
+ output = self.basicStatPP._runLegacy(self._targetEvaluation)
output['solutionUpdate'] = np.asarray([self.counter - 1])
self._solutionExport.addRealization(output)
self.checkConvergence(output)
diff --git a/tests/framework/AnalyticModels/expLinModel.py b/tests/framework/AnalyticModels/expLinModel.py
new file mode 100644
index 0000000000..bc8537cd3d
--- /dev/null
+++ b/tests/framework/AnalyticModels/expLinModel.py
@@ -0,0 +1,63 @@
+# Copyright 2017 Battelle Energy Alliance, LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#***************************************
+#* Simple analytic test ExternalModule *
+#***************************************
+#
+# Simulates a steady-state linear model that maps $J$ parameters (i.e., $\mathbb{R}^J$) to $k$ responses
+#
+# External Modules
+import numpy as np
+##################
+
+# A = np.array([[2, -3],[1,8],[-5, -5]])
+# b = np.array([[0],[0],[0]])
+
+def run(self,Input):
+ """
+ Method required by RAVEN to run this as an external model.
+ @ In, self, object, object to store members on
+ @ In, Input, dict, dictionary containing inputs from RAVEN
+ @ Out, None
+ """
+ self.F1,self.F2,self.F3 = main(Input)
+
+def main(Input):
+ """
+ This method computes linear responses based on Inputs, i.e., $$y = Ax + b$$
+
+ @ In, Input, dict, dictionary containing inputs from RAVEN
+ @ Out, y[:], list, elements of the response vector y
+ """
+ m = len([key for key in Input.keys() if 'e' in key]) # number of experiments
+ n = len([par for par in Input.keys() if 'p' in par]) # number of parameters
+ A = np.array([Input['e1'],Input['e2'],Input['e3']]).reshape(-1,n)
+ b = Input['bE'].reshape(-1,1)
+ x = np.atleast_2d(np.array([Input['p1'],Input['p2']])).reshape(-1,1)
+ assert np.shape(A)[1] == n # one column per parameter
+ assert np.shape(A)[0] == np.shape(b)[0] == m # one row per response
+ y = A @ x + b
+ return y[:]
+
+
+if __name__ == '__main__':
+ Input = {}
+ Input['e1'] = [2,-3]
+ Input['e2'] = [1,8]
+ Input['e3'] = [-5, -5]
+ Input['bE'] = np.array([[0],[0],[0]])
+ Input['p1'] = 5.5
+ Input['p2'] = 8
+ a,b,c = main(Input)
+ print(a,b,c)
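+ # Expected output (quick hand check): with A = [[2,-3],[1,8],[-5,-5]],
+ # x = [5.5, 8], and b = 0: y1 = 2*5.5 - 3*8 = -13.0, y2 = 5.5 + 64 = 69.5,
+ # y3 = -27.5 - 40 = -67.5, so the script prints [-13.] [69.5] [-67.5].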
diff --git a/tests/framework/AnalyticModels/singleExpLinModel.py b/tests/framework/AnalyticModels/singleExpLinModel.py
new file mode 100644
index 0000000000..b9aea8c6a4
--- /dev/null
+++ b/tests/framework/AnalyticModels/singleExpLinModel.py
@@ -0,0 +1,47 @@
+# Copyright 2017 Battelle Energy Alliance, LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#***************************************
+#* Simple analytic test ExternalModule *
+#***************************************
+#
+# Simulates a steady-state linear model that maps $J$ parameters (i.e., $\mathbb{R}^J$) to $k$ responses
+#
+# External Modules
+import numpy as np
+##################
+# Author: Mohammad Abdo (@Jimmy-INL)
+def run(self,Input):
+ """
+ Method required by RAVEN to run this as an external model.
+ @ In, self, object, object to store members on
+ @ In, Input, dict, dictionary containing inputs from RAVEN
+ @ Out, None
+ """
+ self.F1 = main(Input)
+
+def main(Input):
+ """
+ Experiment Model evaluation method
+ @ In, Input, dict, dictionary containing inputs from RAVEN
+ @ Out, y[:], floats, list of response values from the linear model $ y = Ax+b $
+ """
+ m = len([key for key in Input.keys() if 'e' in key]) # number of experiments
+ n = len([par for par in Input.keys() if 'p' in par]) # number of parameters
+ A = np.array([Input['e1']]).reshape(-1,n)
+ b = Input['bE'].reshape(-1,1)
+ x = np.atleast_2d(np.array([Input['p1'],Input['p2']])).reshape(-1,1)
+ assert np.shape(A)[1] == n # one column per parameter
+ assert np.shape(A)[0] == np.shape(b)[0] == m # one row per response
+ y = A @ x + b
+ return y[:]
\ No newline at end of file
diff --git a/tests/framework/AnalyticModels/singleTarLinModel.py b/tests/framework/AnalyticModels/singleTarLinModel.py
new file mode 100644
index 0000000000..9c049ab367
--- /dev/null
+++ b/tests/framework/AnalyticModels/singleTarLinModel.py
@@ -0,0 +1,47 @@
+# Copyright 2017 Battelle Energy Alliance, LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#***************************************
+#* Simple analytic test ExternalModule *
+#***************************************
+#
+# Simulates a steady-state linear model that maps $J$ parameters (i.e., $\mathbb{R}^J$) to $k$ responses
+#
+# External Modules
+import numpy as np
+##################
+# Author: Mohammad Abdo (@Jimmy-INL)
+def run(self,Input):
+ """
+ Method required by RAVEN to run this as an external model.
+ @ In, self, object, object to store members on
+ @ In, Input, dict, dictionary containing inputs from RAVEN
+ @ Out, None
+ """
+ self.FOM1 = main(Input)
+
+def main(Input):
+ """
+ Target Model evaluation method
+ @ In, Input, dict, dictionary containing inputs from RAVEN
+ @ Out, y[:], floats, list of response values from the linear model $ y = Ax+b $
+ """
+ m = len([key for key in Input.keys() if 'o' in key]) # number of responses
+ n = len([par for par in Input.keys() if 'p' in par]) # number of parameters
+ A = np.array([Input['o1']]).reshape(-1,n)
+ b = Input['bT'].reshape(-1,1)
+ x = np.atleast_2d(np.array([Input['p1'],Input['p2']])).reshape(-1,1)
+ assert np.shape(A)[1] == n # one column per parameter
+ assert np.shape(A)[0] == np.shape(b)[0] == m # one row per response
+ y = A @ x + b
+ return y[:]
\ No newline at end of file
diff --git a/tests/framework/AnalyticModels/tarLinModel.py b/tests/framework/AnalyticModels/tarLinModel.py
new file mode 100644
index 0000000000..a529f24ce1
--- /dev/null
+++ b/tests/framework/AnalyticModels/tarLinModel.py
@@ -0,0 +1,64 @@
+# Copyright 2017 Battelle Energy Alliance, LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#***************************************
+#* Simple analytic test ExternalModule *
+#***************************************
+#
+# Simulates a steady-state linear model that maps $J$ parameters (i.e., $\mathbb{R}^J$) to $k$ responses
+#
+# External Modules
+import numpy as np
+##################
+# Author: Mohammad Abdo (@Jimmy-INL)
+
+# A = np.array([[2, -3],[1,8],[-5, -5]])
+# b = np.array([[0],[0],[0]])
+
+def run(self,Input):
+ """
+ Method required by RAVEN to run this as an external model.
+ @ In, self, object, object to store members on
+ @ In, Input, dict, dictionary containing inputs from RAVEN
+ @ Out, None
+ """
+ self.FOM1,self.FOM2,self.FOM3 = main(Input)
+
+def main(Input):
+ """
+ This method computes linear responses of the target application based on Inputs, i.e., $$y = Ax + b$$
+
+ @ In, Input, dict, dictionary containing inputs from RAVEN
+ @ Out, y[:], list, elements of the response vector y
+ """
+ m = len([key for key in Input.keys() if 'o' in key]) # number of responses
+ n = len([par for par in Input.keys() if 'p' in par]) # number of parameters
+ A = np.array([Input['o1'],Input['o2'],Input['o3']]).reshape(-1,n)
+ b = Input['bT'].reshape(-1,1)
+ x = np.atleast_2d(np.array([Input['p1'],Input['p2']])).reshape(-1,1)
+ assert np.shape(A)[1] == n # one column per parameter
+ assert np.shape(A)[0] == np.shape(b)[0] == m # one row per response
+ y = A @ x + b
+ return y[:]
+
+
+if __name__ == '__main__':
+ Input = {}
+ Input['o1'] = [2,-3]
+ Input['o2'] = [1,8]
+ Input['o3'] = [-5, -5]
+ Input['bT'] = np.array([[0],[0],[0]])
+ Input['p1'] = 5.5
+ Input['p2'] = 8
+ a,b,c = main(Input)
+ print(a,b,c)
diff --git a/tests/framework/PostProcessors/Validation/gold/RepresentativityPerfectMatch/pp1_metric_dump.csv b/tests/framework/PostProcessors/Validation/gold/RepresentativityPerfectMatch/pp1_metric_dump.csv
new file mode 100644
index 0000000000..e28c0e54e5
--- /dev/null
+++ b/tests/framework/PostProcessors/Validation/gold/RepresentativityPerfectMatch/pp1_metric_dump.csv
@@ -0,0 +1,2 @@
+BiasFactor_MockF1_TarFOM1,BiasFactor_MockF1_TarFOM2,BiasFactor_MockF1_TarFOM3,BiasFactor_MockF2_TarFOM1,BiasFactor_MockF2_TarFOM2,BiasFactor_MockF2_TarFOM3,BiasFactor_MockF3_TarFOM1,BiasFactor_MockF3_TarFOM2,BiasFactor_MockF3_TarFOM3,ExactBiasFactor_MockF1_TarFOM1,ExactBiasFactor_MockF1_TarFOM2,ExactBiasFactor_MockF1_TarFOM3,ExactBiasFactor_MockF2_TarFOM1,ExactBiasFactor_MockF2_TarFOM2,ExactBiasFactor_MockF2_TarFOM3,ExactBiasFactor_MockF3_TarFOM1,ExactBiasFactor_MockF3_TarFOM2,ExactBiasFactor_MockF3_TarFOM3,CorrectedParameters_p1,CorrectedParameters_p2,CorrectedTargets_FOM1,CorrectedTargets_FOM2,CorrectedTargets_FOM3,VarianceInCorrectedParameters_p1,VarianceInCorrectedParameters_p2,CovarianceInCorrectedParameters_p1_p2,CovarianceInCorrectedParameters_p2_p1,CorrectedVar_TarFOM1,CorrectedVar_TarFOM2,CorrectedVar_TarFOM3,ExactCorrectedVar_TarFOM1,ExactCorrectedVar_TarFOM2,ExactCorrectedVar_TarFOM3,CorrectedCov_TarFOM1_TarFOM2,CorrectedCov_TarFOM2_TarFOM1,CorrectedCov_TarFOM1_TarFOM3,CorrectedCov_TarFOM3_TarFOM1,CorrectedCov_TarFOM2_TarFOM3,CorrectedCov_TarFOM3_TarFOM2,ExactCorrectedCov_TarFOM1_TarFOM2,ExactCorrectedCov_TarFOM2_TarFOM1,ExactCorrectedCov_TarFOM1_TarFOM3,ExactCorrectedCov_TarFOM3_TarFOM1,ExactCorrectedCov_TarFOM2_TarFOM3,ExactCorrectedCov_TarFOM3_TarFOM2
+0.956931548103,0.158944500315,-0.119856079619,0.167914857535,0.379594787803,0.453551486076,-0.120864711712,0.460352642151,0.663418263045,0.622953692665,0.168703460234,0.00999759077449,0.333561740878,0.275229496201,0.254849386412,0.0278303625985,0.355873728293,0.470485463976,5.50515445786,8.21264618362,-12.7004102917,68.6723913603,-66.9629170421,0.16494093266,0.172276908973,0.0,0.0,-1.77349989183e-16,-1.24861665582e-17,6.93021929434e-18,0.0143995547479,0.0021311702406,0.00158930335822,-5.43546114151e-17,-5.44174372803e-17,-1.1382507111e-17,-1.12204825963e-17,2.16377372229e-18,2.23066101281e-18,0.00429411398383,0.00429411398383,0.000763476797026,0.000763476797026,0.00137548132103,0.00137548132103
diff --git a/tests/framework/PostProcessors/Validation/gold/RepresentativityPerfectSingleMeasurable/pp1_metric_dump.csv b/tests/framework/PostProcessors/Validation/gold/RepresentativityPerfectSingleMeasurable/pp1_metric_dump.csv
new file mode 100644
index 0000000000..b506d1d03d
--- /dev/null
+++ b/tests/framework/PostProcessors/Validation/gold/RepresentativityPerfectSingleMeasurable/pp1_metric_dump.csv
@@ -0,0 +1,2 @@
+BiasFactor_MockF1_TarFOM1,ExactBiasFactor_MockF1_TarFOM1,CorrectedParameters_p1,CorrectedParameters_p2,CorrectedTargets_FOM1,VarianceInCorrectedParameters_p1,VarianceInCorrectedParameters_p2,CovarianceInCorrectedParameters_p1_p2,CovarianceInCorrectedParameters_p2_p1,CorrectedVar_TarFOM1,ExactCorrectedVar_TarFOM1
+0.999917799082,0.707120008205,5.50515445786,8.21264618362,-12.7004102917,0.213154131346,0.34725436153,0.0,0.0,6.38668105813e-06,0.0194240671795
diff --git a/tests/framework/PostProcessors/Validation/gold/RepresentativityrankDifficient/pp1_metric_dump.csv b/tests/framework/PostProcessors/Validation/gold/RepresentativityrankDifficient/pp1_metric_dump.csv
new file mode 100644
index 0000000000..8c304c2ccd
--- /dev/null
+++ b/tests/framework/PostProcessors/Validation/gold/RepresentativityrankDifficient/pp1_metric_dump.csv
@@ -0,0 +1,2 @@
+BiasFactor_MockF1_TarFOM1,BiasFactor_MockF1_TarFOM2,BiasFactor_MockF1_TarFOM3,BiasFactor_MockF2_TarFOM1,BiasFactor_MockF2_TarFOM2,BiasFactor_MockF2_TarFOM3,BiasFactor_MockF3_TarFOM1,BiasFactor_MockF3_TarFOM2,BiasFactor_MockF3_TarFOM3,ExactBiasFactor_MockF1_TarFOM1,ExactBiasFactor_MockF1_TarFOM2,ExactBiasFactor_MockF1_TarFOM3,ExactBiasFactor_MockF2_TarFOM1,ExactBiasFactor_MockF2_TarFOM2,ExactBiasFactor_MockF2_TarFOM3,ExactBiasFactor_MockF3_TarFOM1,ExactBiasFactor_MockF3_TarFOM2,ExactBiasFactor_MockF3_TarFOM3,CorrectedParameters_p1,CorrectedParameters_p2,CorrectedTargets_FOM1,CorrectedTargets_FOM2,CorrectedTargets_FOM3,VarianceInCorrectedParameters_p1,VarianceInCorrectedParameters_p2,CovarianceInCorrectedParameters_p1_p2,CovarianceInCorrectedParameters_p2_p1,CorrectedVar_TarFOM1,CorrectedVar_TarFOM2,CorrectedVar_TarFOM3,ExactCorrectedVar_TarFOM1,ExactCorrectedVar_TarFOM2,ExactCorrectedVar_TarFOM3,CorrectedCov_TarFOM1_TarFOM2,CorrectedCov_TarFOM2_TarFOM1,CorrectedCov_TarFOM1_TarFOM3,CorrectedCov_TarFOM3_TarFOM1,CorrectedCov_TarFOM2_TarFOM3,CorrectedCov_TarFOM3_TarFOM2,ExactCorrectedCov_TarFOM1_TarFOM2,ExactCorrectedCov_TarFOM2_TarFOM1,ExactCorrectedCov_TarFOM1_TarFOM3,ExactCorrectedCov_TarFOM3_TarFOM1,ExactCorrectedCov_TarFOM2_TarFOM3,ExactCorrectedCov_TarFOM3_TarFOM2
+0.128430862967,0.689134736572,-0.0927298672854,0.128430862967,0.689134736572,-0.0927298672854,0.582902422521,0.000702674732939,0.812541858656,0.12322065411,0.553725781283,-0.0465852501921,0.12322065411,0.553725781283,-0.0465852501921,0.391200280662,0.128987035362,0.494626127249,5.50515445786,8.21264618362,34.6822822385,68.6723913603,-66.9629170421,0.149346499179,0.179681468724,0.0,0.0,-3.39552860427e-18,-2.97038521552e-18,-3.81029875244e-18,0.00183645364595,0.00231975244857,0.00186608541152,-2.86314843314e-18,-2.88211952166e-18,-3.60551750031e-18,-3.52137194756e-18,-2.84730452175e-18,-2.78886225425e-18,0.00187787000787,0.00187787000787,0.00182011762032,0.00182011762032,0.0017035765031,0.0017035765031
diff --git a/tests/framework/PostProcessors/Validation/test_representativity_perfectLinExpToTarget.xml b/tests/framework/PostProcessors/Validation/test_representativity_perfectLinExpToTarget.xml
new file mode 100644
index 0000000000..8909070621
--- /dev/null
+++ b/tests/framework/PostProcessors/Validation/test_representativity_perfectLinExpToTarget.xml
@@ -0,0 +1,183 @@
+
+
+
+ RepresentativityPerfectMatch
+ mcRunExp, mcRunTar, PP1
+ 1
+
+
+
+ framework/PostProcessors/Validation/test_validation_representativity1
+ Mohammad Abdo (@Jimmy-INL)
+ 2021-04-29
+ PostProcessors.Validation
+
+ This test assesses the mechanics of the representativity workflow, one of the validation algorithms used in RAVEN.
+ This test uses a linear model as both the mock experiment and the target plant model. The expected representativity factor should be close to one for each measurable F_i and figure of merit FOM_i. Currently, the test utilizes the bias factor metric to compute the representativity factors.
+
+
+ Added Modification for new PP API
+
+
+
+
+
+ p1, p2, e1, e2, e3, bE
+ F1, F2, F3
+
+
+ p1, p2, o1, o2, o3, bT
+ FOM1, FOM2, FOM3
+
+
+ outputDataMC1|Output|F1, outputDataMC1|Output|F2, outputDataMC1|Output|F3
+ outputDataMC2|Output|FOM1, outputDataMC2|Output|FOM2, outputDataMC2|Output|FOM3
+ outputDataMC1|Input|p1,outputDataMC1|Input|p2
+ outputDataMC2|Input|p1,outputDataMC2|Input|p2
+
+
+
+
+
+ 5.5
+ 0.55
+
+
+ 8
+ 0.8
+
+
+
+
+
+
+ 42
+ 100
+
+
+ dist1
+
+
+ dist2
+
+ 2,-3
+ 1, 8
+ -5,-5
+ 0,0,0
+
+
+
+ 100
+ 2019
+
+
+ dist1
+
+
+ dist2
+
+ 2,-3
+ 1, 8
+ -5,-5
+ 0,0,0
+
+
+
+
+
+
+ inputPlaceHolder2
+ linModel
+ ExperimentMCSampler
+
+
+
+
+ inputPlaceHolder2
+ tarModel
+ TargetMCSampler
+
+
+
+
+ outputDataMC1
+ outputDataMC2
+ pp1
+
+
+
+
+
+
+
+ p1,p2
+
+
+
+ p1,p2
+
+
+
+ p1,p2
+
+
+
+ InputPlaceHolder
+
+
+
+
+
+
+ csv
+
+
+
+
+
diff --git a/tests/framework/PostProcessors/Validation/test_representativity_rankDifficientLinExpToTarget.xml b/tests/framework/PostProcessors/Validation/test_representativity_rankDifficientLinExpToTarget.xml
new file mode 100644
index 0000000000..fe37603f48
--- /dev/null
+++ b/tests/framework/PostProcessors/Validation/test_representativity_rankDifficientLinExpToTarget.xml
@@ -0,0 +1,181 @@
+
+
+
+ RepresentativityrankDifficient
+ mcRunExp, mcRunTar, PP1
+ 1
+
+
+
+ framework/PostProcessors/Validation/test_validation_representativity3
+ Mohammad Abdo (@Jimmy-INL)
+ 2021-04-29
+ PostProcessors.Validation
+
+ This test assesses the mechanics of the representativity workflow, one of the validation algorithms used in RAVEN.
+ This test uses a linear model as both the mock experiment and the target plant model. The linear operators describing the physics are rank deficient. This is done intentionally by making two of the three experiments identical.
+
+
+
+ Added Modification for new PP API
+
+
+
+
+
+ p1, p2, e1, e2, e3, bE
+ F1, F2, F3
+
+
+ p1, p2, o1, o2, o3, bT
+ FOM1, FOM2, FOM3
+
+
+ outputDataMC1|Output|F1, outputDataMC1|Output|F2, outputDataMC1|Output|F3
+ outputDataMC2|Output|FOM1, outputDataMC2|Output|FOM2, outputDataMC2|Output|FOM3
+ outputDataMC1|Input|p1,outputDataMC1|Input|p2
+ outputDataMC2|Input|p1,outputDataMC2|Input|p2
+
+
+
+
+
+ 5.5
+ 0.55
+
+
+ 8
+ 0.8
+
+
+
+
+
+
+ 42
+ 100
+
+
+ dist1
+
+
+ dist2
+
+ 2,-3
+ 2,-3
+ -5,-5
+ 0,0,0
+
+
+
+ 100
+ 2019
+
+
+ dist1
+
+
+ dist2
+
+ 2,3
+ 1, 8
+ -5,-5
+ 0,0,0
+
+
+
+
+
+ inputPlaceHolder2
+ linModel
+ ExperimentMCSampler
+
+
+
+ inputPlaceHolder2
+ tarModel
+ TargetMCSampler
+
+
+
+ outputDataMC1
+ outputDataMC2
+ pp1
+
+
+
+
+
+
+
+ p1,p2
+
+
+
+ p1,p2
+
+
+
+ p1,p2
+
+
+
+ InputPlaceHolder
+
+
+
+
+
+
+ csv
+
+
+
+
+
diff --git a/tests/framework/PostProcessors/Validation/test_representativity_singlePerfectLinExpToTarget.xml b/tests/framework/PostProcessors/Validation/test_representativity_singlePerfectLinExpToTarget.xml
new file mode 100644
index 0000000000..536256055d
--- /dev/null
+++ b/tests/framework/PostProcessors/Validation/test_representativity_singlePerfectLinExpToTarget.xml
@@ -0,0 +1,141 @@
+
+
+
+ RepresentativityPerfectSingleMeasurable
+ mcRunExp, mcRunTar, PP1
+ 1
+
+
+
+ framework/PostProcessors/Validation/test_validation_representativity2
+ Mohammad Abdo (@Jimmy-INL)
+ 2021-04-29
+ PostProcessors.Validation
+
+ This test assesses the mechanics of the representativity workflow, one of the validation algorithms used in RAVEN.
+ This test uses a linear model as both the mock experiment and the target plant model. The expected representativity factor should be close to one for each measurable F_i and figure of merit FOM_i. Currently, the test utilizes the bias factor metric to compute the representativity factors. This test includes a single experiment.
+
+
+ Added Modification for new PP API
+
+
+
+
+
+ p1, p2, e1, bE
+ F1
+
+
+ p1, p2, o1, bT
+ FOM1
+
+
+ outputDataMC1|Output|F1
+ outputDataMC2|Output|FOM1
+ outputDataMC1|Input|p1,outputDataMC1|Input|p2
+ outputDataMC2|Input|p1,outputDataMC2|Input|p2
+
+
+
+
+
+ 5.5
+ 0.55
+
+
+ 8
+ 0.8
+
+
+
+
+
+
+ 42
+ 100
+
+
+ dist1
+
+
+ dist2
+
+ 2,-3
+ 0
+
+
+
+ 100
+ 2019
+
+
+ dist1
+
+
+ dist2
+
+ 2,-3
+ 0
+
+
+
+
+
+ inputPlaceHolder2
+ linModel
+ ExperimentMCSampler
+
+
+
+ inputPlaceHolder2
+ tarModel
+ TargetMCSampler
+
+
+
+ outputDataMC1
+ outputDataMC2
+ pp1
+
+
+
+
+
+
+
+ p1,p2
+
+
+
+ p1,p2
+
+
+
+ p1,p2
+
+
+
+ InputPlaceHolder
+
+
+
+
+
+
+ csv
+
+
+
+
diff --git a/tests/framework/PostProcessors/Validation/test_validation_dss.xml b/tests/framework/PostProcessors/Validation/test_validation_dss.xml
index eedbef4a3d..3c49299214 100644
--- a/tests/framework/PostProcessors/Validation/test_validation_dss.xml
+++ b/tests/framework/PostProcessors/Validation/test_validation_dss.xml
@@ -29,8 +29,8 @@
sigma,rho,beta,x2,y2,z2,time2,x0,y0,z0
- outMC1|x1
- outMC2|x2
+ outMC1|x1
+ outMC2|x2
+ dss
+ time1
+ time2
@@ -39,8 +39,8 @@
1
- outMC1|x1
- outMC2|x2
+ outMC1|x1
+ outMC2|x2
+ dss
+ time1
+ time2
@@ -51,8 +51,8 @@
1
- outMC1|y1
- outMC2|y2
+ outMC1|y1
+ outMC2|y2
+ dss
+ time1
+ time2
diff --git a/tests/framework/PostProcessors/Validation/test_validation_gate_pcm.xml b/tests/framework/PostProcessors/Validation/test_validation_gate_pcm.xml
index c7b44a02c9..0ae9481778 100644
--- a/tests/framework/PostProcessors/Validation/test_validation_gate_pcm.xml
+++ b/tests/framework/PostProcessors/Validation/test_validation_gate_pcm.xml
@@ -13,12 +13,12 @@
PostProcessors.Validation.PhysicsGuidedCoverageMapping
This test is aimed to show how PCM works.
- For simplicity, this test is using a linear model
+ For simplicity, this test is using a linear model
as experiment (Feature) and application (Target) models.
The linear model has two input variables and four responses,
all of which (F2, F3, F4) serve as three Targets and (F1, F2) as two Features.
Coordinates of F2 are twice of F1, of F4 are orthorgnal to F1, and of F3 are in between.
- The output is a fraction value reflecting the uncertainty reduction fraction
+ The output is a fraction value reflecting the uncertainty reduction fraction
using Feature to validate Target comparing to the Target prior.
The output name convention is 'pri_post_stdReduct_'+"Target name".
@@ -41,9 +41,9 @@
F1,F2,F3,F4
- outputDataMC1|F1,outputDataMC1|F2
- outputDataMC2|F2,outputDataMC2|F3,outputDataMC2|F4
- msrData|F1,msrData|F2
+ outputDataMC1|Output|F1,outputDataMC1|Output|F2
+ outputDataMC2|Output|F2,outputDataMC2|Output|F3,outputDataMC2|Output|F4
+ msrData|Output|F1,msrData|Output|F2
+ Static
@@ -77,8 +77,8 @@
x2_dist
-
-
+
+
20
@@ -88,8 +88,8 @@
x2_msr_dist
-
-
+
+
@@ -110,7 +110,7 @@
msrMC_msr
-
+
outputDataMC1
outputDataMC2
@@ -129,26 +129,26 @@
x1,x2
-
+
x1,x2
-
+
x1,x2
-
+
x1,x2
-
-
+
+
InputPlaceHolder
-
+ csv
diff --git a/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Snapshot.xml b/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Snapshot.xml
index 7af5690a21..e32a69b4e7 100644
--- a/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Snapshot.xml
+++ b/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Snapshot.xml
@@ -12,14 +12,14 @@
2022-12-05PostProcessors.Validation.PhysicsGuidedCoverageMapping
- This test is aimed to show how snapshot_PCM works.This test is using SETH-C and SETH-D data
+ This test is aimed to show how snapshot_PCM works. This test uses SETH-C and SETH-D data
as experiment (Feature) and application (Target) models.It basically runs a loop of static_PCM.
- In each iteration of the loop,one execution of static_PCM is applied.
+ In each iteration of the loop, one execution of static_PCM is applied.
Here, temperatures from one timestep in SETH-C are used as experiemnt responses (Features);
temperatures from the corresponding timestep in SETH-D are used as application responses (Target)
- The output is a fraction value reflecting the uncertainty reduction fraction
- of Target Posterior comparing to the Target prior,
- which includes uncertainty reductions along timesteps
+ The output is a fraction value reflecting the uncertainty reduction fraction
+ of Target Posterior comparing to the Target prior,
+ which includes uncertainty reductions along timesteps
and has two columns:'time' and 'snapshot_pri_post_stdReduct'.
@@ -31,8 +31,8 @@
time
- exp|TempC
- app|TempD
+ exp|TempC
+ app|TempD
+ msr|TempMsrC
+ Snapshot
@@ -60,32 +60,32 @@
- time
+ time
-
+
- time
+ time
-
-
+
+
- time
+ time
-
-
+
+
InputPlaceHolder
- time
+ time
-
+ csv
diff --git a/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Static.xml b/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Static.xml
index b11a26a609..c8612ebaeb 100644
--- a/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Static.xml
+++ b/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Static.xml
@@ -13,11 +13,11 @@
PostProcessors.Validation.PhysicsGuidedCoverageMapping
This test is aimed to show how PCM works.
- This test is using SETH-C and SETH-D data
+ This test is using SETH-C and SETH-D data
as experiment (Feature) and application (Target) models.
Here, three timesteps' samples from SETH-C are used as experiment responses (Features),
one timestep's samples from SETH-D are used as application responses (Target)
- The output is a fraction value reflecting the uncertainty reduction fraction
+ The output is a fraction value reflecting the uncertainty reduction fraction
using Feature to validate Target comparing to the Target prior.
The output name convention is 'pri_post_stdReduct_'+"Target name".
@@ -31,8 +31,8 @@
- expData|time20s,expData|time40s,expData|time50s
- appData|time20s
+ expData|time20s,expData|time40s,expData|time50s
+ appData|time20s
+ msrData|time20s,msrData|time40s,msrData|time50s
+ Static
@@ -59,21 +59,21 @@
-
+
-
-
+
+
-
-
+
+
InputPlaceHolder
-
+ csv
diff --git a/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Tdep.xml b/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Tdep.xml
index 25a7ff60e4..4658563c13 100644
--- a/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Tdep.xml
+++ b/tests/framework/PostProcessors/Validation/test_validation_gate_pcm_Tdep.xml
@@ -12,7 +12,7 @@
2023-04-23PostProcessors.Validation.PhysicsGuidedCoverageMapping
- This test is aimed to show how Tdep_PCM works.This test uses the coefficients of SETH-C and SETH-D data
+ This test is aimed to show how Tdep_PCM works. This test uses the coefficients of SETH-C and SETH-D data
based on their U subspace as experiment (Feature) and application (Target) models.
Here, coefficients of SETH-C are used as experiemnt responses (Features);
coefficients of SETH-C and SETH-D temperatures data are used as application responses (Target)
@@ -28,8 +28,8 @@
time
- exp|TempC
- app|TempD
+ exp|TempC
+ app|TempD
+ msr|TempMsrC
+ Tdep
+ 0.001
@@ -58,22 +58,22 @@
- time
+ time
-
+
- time
+ time
-
-
+
+
- time
+ time
-
-
+
+ time
@@ -83,7 +83,7 @@
-
+ csv
diff --git a/tests/framework/PostProcessors/Validation/test_validation_gate_probabilistic.xml b/tests/framework/PostProcessors/Validation/test_validation_gate_probabilistic.xml
index 9a7c63f60b..2e6392391d 100644
--- a/tests/framework/PostProcessors/Validation/test_validation_gate_probabilistic.xml
+++ b/tests/framework/PostProcessors/Validation/test_validation_gate_probabilistic.xml
@@ -26,11 +26,8 @@
x1,x2,ans,ans2
- outputDataMC1|ans
- outputDataMC2|ans2
-
+ outputDataMC1|Output|ans
+ outputDataMC2|Output|ans2
+ cdf_diff
+ pdf_area
diff --git a/tests/framework/PostProcessors/Validation/test_validation_gate_probabilistic_time_dep.xml b/tests/framework/PostProcessors/Validation/test_validation_gate_probabilistic_time_dep.xml
index 9f4aaecca6..181d3a09a8 100644
--- a/tests/framework/PostProcessors/Validation/test_validation_gate_probabilistic_time_dep.xml
+++ b/tests/framework/PostProcessors/Validation/test_validation_gate_probabilistic_time_dep.xml
@@ -32,8 +32,8 @@
- simulation|ans
- experiment|ans2
+ simulation|Output|ans
+ experiment|output|ans2
+ time
+ cdf_diff
+ pdf_area
diff --git a/tests/framework/PostProcessors/Validation/tests b/tests/framework/PostProcessors/Validation/tests
index a669f48bec..c675e8dea5 100644
--- a/tests/framework/PostProcessors/Validation/tests
+++ b/tests/framework/PostProcessors/Validation/tests
@@ -27,6 +27,27 @@
rel_err = 0.00001
zero_threshold = 1e-9
[../]
+ [./test_validation_representativity1]
+ type = 'RavenFramework'
+ input = 'test_representativity_perfectLinExpToTarget.xml'
+ csv = 'RepresentativityPerfectMatch/pp1_metric_dump.csv'
+ rel_err = 0.00001
+ zero_threshold = 1e-9
+ [../]
+ [./test_validation_representativity2]
+ type = 'RavenFramework'
+ input = 'test_representativity_singlePerfectLinExpToTarget.xml'
+ csv = 'RepresentativityPerfectSingleMeasurable/pp1_metric_dump.csv'
+ rel_err = 0.00001
+ zero_threshold = 1e-9
+ [../]
+ [./test_validation_representativity3]
+ type = 'RavenFramework'
+ input = 'test_representativity_rankDifficientLinExpToTarget.xml'
+ csv = 'RepresentativityrankDifficient/pp1_metric_dump.csv'
+ rel_err = 0.00001
+ zero_threshold = 1e-9
+ [../]
[./test_validation_gate_pcm_Static]
type = 'RavenFramework'
input = 'test_validation_gate_pcm_Static.xml'