New representativity calculations post processor #2058

Open
wants to merge 366 commits into base: devel

Changes from 250 commits
Commits: 366
60c2698
Merge commit '957bb81bd5d0e2b066b29a317c74646cef4f095f'
Sep 27, 2017
4e41935
Merge commit '9e05035ff87fbceddca890b184f65678787fffc2'
Oct 3, 2017
eaacebf
Merge commit '6755780df1a6fddaad6f4d4ec9040efc5f0b8069'
Oct 4, 2017
91dc79f
Merge commit 'd317572ffb80f564d27ec54e5db24d763705a8a5'
Oct 5, 2017
06e8d9e
Merge commit '5c0409d9e0ccffa34d663f363da358658318d3c4'
Oct 18, 2017
893a2a5
Merge commit '84d21ee902edb82475a0a452970618afb51a2d94'
Oct 24, 2017
b295b40
Merge commit 'f4b01f079a6254b29ba02765b8fe15fa7669745d'
Oct 26, 2017
7122e30
Merge commit '966a65632255cfee649013fcfc9ddd687cae0311'
Oct 31, 2017
1015191
Merge commit 'b7920de293a51139c1efed9bd591fd93edcc2f67'
Nov 1, 2017
3f99563
Merge commit '2af08d2264b8f03cc01b62fde155807d049f1a76'
Nov 9, 2017
36c487f
Merge commit 'fa37af3df7a3b6a9656d6f9c0c1096515618c9c0'
Nov 15, 2017
fb241c8
Merge commit 'a4cc5bfddec05aeb04615d7c6fa68eb073a71477'
Nov 18, 2017
ee62eaa
Merge commit '75d940507bd7530b7e927537ff444874ef510546'
Nov 23, 2017
b7183a8
Merge commit '152fb5056e523c08c8912341745c44efbcbf1bbe'
Nov 26, 2017
730229d
Merge commit 'd0782a47f31e628a1c1088d5256526d9e6819c98'
Nov 29, 2017
90222cd
Merge commit '4af01fe82a318d538886c4a58d283f9b29f04b1f'
Dec 8, 2017
84a7308
Merge commit 'bf8fd5e1999983f1779c529cd0a065c2abcb8728'
Dec 8, 2017
da83600
Merge commit 'fbee9e3def3c1ee576d1af85f3258cc816ceaaaf'
Jan 10, 2018
92f4af4
Merge commit '31d630598d3aa94e9779d958525cf1efdb0c8a11'
May 15, 2018
9b21c84
Merge commit '232b14c1e1da5be67214a4087e12648d4c05d48b'
May 16, 2018
fb76bbb
Merge commit '688cdd10102b2865f118c6ceeba2ec7a36610b66'
May 17, 2018
6e9c926
Merge commit 'e3d73512b1c3e0b5ebef824872fcbf68d4aaf265'
May 18, 2018
fca0fc5
Merge commit '4091469786d22dec4bda83e9e14d91c6f1ab5b6f'
May 18, 2018
00133e9
Merge commit '63564ba68410c26d555fae333c564a829b2098cc'
May 18, 2018
cf178e7
Merge commit '62a386fc1da99a1d77afb73e2ad19e7e47fd5a72'
May 22, 2018
cfbc990
Merge commit 'fc36f8fb64e523d3780b0ec7f91a4ee49d6034ff'
May 31, 2018
51f7adb
Merge commit 'a552753f9224e5f07e5a5ff917024f86df0d48ce'
Jun 12, 2018
b41ea51
Merge commit '673784335c74113b7518ec774e71b0fd086f9a89'
Jul 16, 2018
0a4d3a0
Merge commit '4603c46e13b77c75d00a4d79fb0e46ec57144623'
Jul 20, 2018
638c9fc
Merge commit '6ff507cf54a15292af1398a8e2419b8d9814598c'
Jul 20, 2018
3a06bc0
Merge commit '08780107503323e6bded3afd9a0e6492734d67c9'
Jul 23, 2018
1cb15bb
Merge commit '7bf9619f4e0cdeffc758d7a7f4a89d22f9f73c0e'
Jul 26, 2018
ad728d5
Merge commit '90e901048eb2a8c7f7cad168dcf7dca9e130e803'
Aug 1, 2018
fd652ed
Merge commit 'dff643ba5ac96426c92dba3ac9ea668898cad0aa'
Aug 1, 2018
edb813f
Merge commit '8a6f6203b40b23af7eef57a484ad4cc152f0a18e'
Aug 3, 2018
e52eb18
Merge commit 'a6d2629f7a045dd0b46e270086689fdc114c74f6'
Aug 3, 2018
b919e3b
Merge commit '3c202ee4594d8aeb61633783af77935bc91092d4'
Aug 8, 2018
637bac0
Merge commit '02ba1328002e0c2ced55487fd5c871e78793d8cd'
Aug 9, 2018
ef014bd
Merge commit '81cca1eb7c21f38912ed47ff16b43ca56379c814'
Aug 9, 2018
27c865f
Merge commit 'ff22bb1d0b561cd613d16de88237b2f615b07380'
Aug 26, 2018
bdbbfc8
Merge commit '66cb533420e0e3763418a42a310f88993f8ae78f'
Aug 26, 2018
1a59991
Merge commit 'b8cbae6b8e3dd9b45f47fedd00d8d9f9496ba9a8'
Aug 27, 2018
234e503
Merge commit '9e8e8edf021fe7ce7f24ca55d1ccf2db51b6135e'
Aug 27, 2018
9ac5adb
Merge commit '09e47cabaf09c32d402e8527d3804bac5f118ccf'
Sep 4, 2018
27e74d1
Merge commit '67ff4e446ba5b03a9a1fd5440e423675196b3486'
Sep 6, 2018
6c0dab1
Merge commit 'd42b144cf5072bd2af7a4629ad858891de57f751'
Sep 6, 2018
bc430a7
Merge commit '2e456ca5d2a72b4ad63f2d3252575f2f36a9bd3e'
Sep 6, 2018
a52797d
Merge commit 'ac6f522f6385a2fa7f900247c396f0d6cfccf30d'
Sep 8, 2018
a385f7a
Merge commit 'da4ec50fb71da108a5f067665ed243ed3ba9c2b0'
Sep 14, 2018
3d05b6f
Merge commit '5f29fe81b75e2ffbeb54a55aa63647e7b2f6457b'
Sep 20, 2018
da66d51
updated submodule moose
alfoa Sep 25, 2018
74963b4
Revert "updated submodule moose"
alfoa Sep 25, 2018
8a04eb1
Merge commit 'a96f4be1e31bf296a18eff2eac583f959ea77406'
Sep 27, 2018
2571447
Merge commit 'faf4cd5945c86a5c7ffcb9429431afb0fb64b17a'
Oct 18, 2018
aa745a2
Merge commit 'df5c80ab563785f4308c5fb376a6179f4e34b490'
Oct 18, 2018
74b4aec
Merge commit 'df5efa535fe88e03f00fd32c7b476bfece08e43a'
Oct 19, 2018
3c62067
Merge commit 'e6a78e037becd2d21bba40ef9dd5e67568dfac78'
Oct 19, 2018
d3b2d86
Merge commit '9d7b56399ac235e1d53725c22f09c9556de0a788'
Oct 29, 2018
cd29182
Merge commit '04c950978bb73845d5d033ec470becacad6dcb3b'
Oct 29, 2018
ecfc83a
Merge commit '2db73e92991e23d3bb8c0a58c4fbbaf2527dc0cc'
Nov 1, 2018
a26f2b5
Merge commit '2bf86b2b414eeada5d2f91a2cbe7041aa346fde7'
Nov 2, 2018
a065eb1
Merge commit 'b33b914213c3b2cf97c62b807353f9e008d726ae'
Nov 2, 2018
23b0233
Merge commit '9742628ebc8e7a68e1c1c7c543bc2940d5e36362'
Nov 5, 2018
ba310c6
Merge commit '2ac220ffb34d2dfb5c8fb1aeea5b0398e1e85560'
Nov 5, 2018
81193a4
Merge commit 'd230c80dc22b73cc207f34d8093ba33eeac8657b'
Nov 5, 2018
e884921
Merge commit 'f8431ba87cfe4e5e285e70ed9e81b21e1b34d719'
Nov 5, 2018
9fd97f9
Merge commit '666978e8546d1f948b2ad55a4c3b0fce5cc8533c'
Nov 7, 2018
5935b37
Merge commit '5bcb86950a913d00e9aacfe691674e9e0b5519b5'
Nov 8, 2018
df17bd8
Merge commit 'e90d3acc18098ee4ef4ced0c691387597d9667f0'
Nov 8, 2018
e6278c8
Merge commit 'a87baededaaf4ceb73ec05876f610a3aa8a06dfc'
Nov 12, 2018
a4ee9b1
Merge commit 'cb8ac066b553219cb0dfe28e3baaf5ccb7508ff6'
Nov 15, 2018
4aae9c2
Merge commit '00b6abb2485fe1f1c12ae454d6d1046e5e98768c'
Nov 15, 2018
be21ac4
Merge commit 'e56324cf9032347e49c36d4030f74f36beed725a'
Nov 15, 2018
8b71716
Merge commit '82ce63a126a73948bbbb1d7af0e3d63d76d96f0a'
Nov 16, 2018
3d54347
Merge commit 'f31a0a4bae9ff459aaec7a35b28c65bf51a55963'
Nov 16, 2018
e6d3405
Merge commit 'ba2343337b96c71c7e1a6c67fe2328051cb4463c'
Nov 16, 2018
9343309
Merge commit 'c0cddb51b37fd5f9836ab13d6861baeb1d20346f'
Nov 16, 2018
63a5f63
Merge commit '9aa922f7d279c32869dcbfe1439fc6728bde8383'
Nov 30, 2018
300c62b
Merge commit 'a6977d05ad01cee7bb12ad9d5d4d0ecf0a8e7bd7'
Dec 5, 2018
5860c60
Merge commit '197d5567ca7220bfb60ccb25c10b4654212e218b'
Dec 10, 2018
c5918b6
Merge commit 'e1eae85628570704e4d40724552832ff0e6f6d01'
Dec 13, 2018
d4734a4
Merge commit '4d59fc07802d2a84cc59a5cf14935053a5049474'
Dec 14, 2018
7434b15
Merge commit '464cc9d01a905e7331043dd12b099f01b1e39450'
Dec 18, 2018
eb4987e
Merge commit 'f764c48ab4333db30986787716a3dc9a23f6e745'
Dec 19, 2018
ded41e5
Merge commit 'e679d1050ba44bc4a7e7e489f11b0ce363a55f37'
Jan 7, 2019
d53bd8e
Merge commit '2a0203b49c27740da07ea7b9c1913a5c2642cf12'
Jan 8, 2019
a18df7e
Merge commit '6ae167b4a4689d8dcbbdadaab635acc3a93f1d1e'
Jan 11, 2019
f89455d
Merge commit '838354e5fba3fd04b437cefe3f612da8725219e4'
Jan 15, 2019
f11c3b0
Merge commit '7e5b1844dd309b38eb6923279af60083d242d28b'
Jan 21, 2019
55d3686
Merge commit '2ba117ad3863fa7d12d92f278fdae96daafc002c'
Jan 23, 2019
5344a41
Merge commit '2ef614dcb0d503cf5c2a1a7db76412375d7cb09c'
Jan 24, 2019
d8c4e84
Merge commit '5d61e1861042ac99efa6b89d561b56ce54fdc830'
Jan 25, 2019
b0d2f52
Merge commit 'da39d9f66ac180ffb9f0175da63d159e055a519e'
Jan 29, 2019
06ac3f0
Merge commit '9a0e0b4a450fe0f2f7118182adab6a47dd241964'
Jan 31, 2019
eddd98c
Merge commit '88c3c825d0dabd9c7a9136f088db33c4d6556308'
Jan 31, 2019
ad3327d
Merge commit '5e135912986e711a1191e20076b5f142e5d2e39f'
Jan 31, 2019
97b804c
Merge commit '21a766cca817906c1f066ef51b6379823a29ae14'
Feb 6, 2019
1713ef2
Merge commit 'b179f4bb7ea09f64148b4adf48bf07794dc5c6d8'
Feb 9, 2019
dff236b
Merge commit 'c86fba6f07bd7d1d415baca35ca83cf129b74d0d'
Feb 26, 2019
85f9ca2
Merge commit '49ee6fba865fe6a26b1dd4d5738b98f57efa142d'
Feb 28, 2019
ef6507a
Merge commit '2012ade30f50e99cf28100a2a1e4c5478ae46954'
Mar 2, 2019
89efcba
Merge commit '57ac28dc6505d0ac2fd0c50eeb6caf4a52c220b9'
Mar 3, 2019
fb4d82a
Merge commit '4165a083276c2aa42b512e6b4cbcb66ef805ba2d'
Mar 6, 2019
b6d5f89
Merge commit '389a15d3260627b3aa15fbc82ed3dbcc2f10646b'
Mar 8, 2019
c4318bd
Merge commit '8723d11c9513b2eb5702a2ab6c37ecd8c6d94923'
Mar 11, 2019
f23adfd
Merge commit '2e1228c4f6bc38eccbe82771c82305ee3d37f1ba'
Mar 18, 2019
b24af4c
Merge commit '2964e52cad089b07d1d82a5da68d0ae92fa9d697'
Mar 26, 2019
9303eab
Merge commit 'c5191d09c0f77f560520370b969a2703dbf65d15'
Mar 27, 2019
7b47047
Merge commit 'da3551f16a523c90610da3f59dde773a23619e6a'
Mar 27, 2019
1c720ff
Merge commit 'fe74929a53d32a30e2061cf7295754ebd58a4158'
Mar 27, 2019
dbb0ac1
Merge commit '9f3cb3d30c0766a4f7473fac4e6b77cac74fe869'
Mar 27, 2019
f772a1b
Merge commit '39e6650285db68185a377b9d9944c0b09b4dd51c'
Mar 27, 2019
709d6ee
Merge commit '064e7d8670fa1365f2b60bcf73b9845745e7e362'
Apr 8, 2019
eb9082f
Merge commit '73a7e643893ee18d63aa62eca218c907e006e865'
Apr 9, 2019
8587706
Merge commit 'a493f31ff2a0514a2e19add5b01736d21cc7dec7'
Apr 16, 2019
b33e09b
Merge commit '8a807cb1b5e101b9e93e5232caad004d3d0e2254'
Apr 19, 2019
a90360b
Merge commit '5c2c68957b486a5491045e775d2543f1f241f946'
May 1, 2019
fb40d49
Merge commit '94b1adf9a8b70d66c3d775f44f3c6c9389543406'
May 2, 2019
9666d56
Merge commit 'c493b48a9b9be1d4ed62895040407eb1781056f0'
May 2, 2019
b0eb148
Merge commit 'b7f3f4cccdb2f26a6d78b474a317bfcf4e37422f'
May 3, 2019
e7c9154
Merge commit '4b232197adb3f9dc14785fc4677d45ba7860e3d0'
May 5, 2019
c694c32
Merge commit '74a184c38a9d44a6c861ef854fc0fb4930262fdf'
May 7, 2019
02fa320
Merge commit '2dc37ab865c5611a5830f36fbe596f6044c50635'
May 8, 2019
69bf84e
Merge commit '67456d764b3f5ff1c2c6657996c4d82e6c4170ba'
May 9, 2019
5cbede2
Merge commit '69f066fdeac69f0d64fd0fb5f4dfa7a3dfa96e53'
May 10, 2019
1977bf6
Merge commit '9849909132d1d56cb0e57583d96d39416a8da0e4'
May 10, 2019
b06e178
Merge commit 'cc4ffec817c01cf4399e60359da7b1fc7047918b'
Jun 3, 2019
e525730
Merge commit '785fe9c8fd8ad1f679de4b7d82dc51224ff2a7e0'
Jun 24, 2019
aa30de3
Merge commit '6cc806b889bbcd4b2a858c049da240dbe2427e59'
Jun 24, 2019
24c9215
Merge commit '904a61ccb36f00b43d1cad390d6b3ce8479de49b'
Jun 24, 2019
dc16803
Merge commit '2949d93521a0aa6484e4a72e3b099f9f87d0ea12'
Jun 25, 2019
5cab996
Merge commit '75cd2bbd0102bfc303b475fc23630056ab12ce0e'
Jun 26, 2019
1806889
Merge branch 'devel' of https://github.com/idaholab/raven into devel
Jimmy-INL Sep 23, 2019
9cd1b03
Merge remote-tracking branch 'upstream/devel' into devel
Jimmy-INL Nov 20, 2019
2678917
Merge branch 'devel' of https://github.com/idaholab/raven into devel
Jimmy-INL Jan 7, 2020
5da7950
Few changes to probabilistic.py and representativity.py
Jimmy-INL Jul 24, 2021
01a87fb
Merge branch 'devel' into Jimmy-Congjian-Representativity
Jimmy-INL Sep 8, 2021
bbef143
Switching to PostProcessorReadyInterface
joshua-cogliati-inl Nov 23, 2021
4c280cc
Merge remote-tracking branch 'josh/cogljj/bs2' into Jimmy-testingRepr…
Jimmy-INL Nov 30, 2021
73cf876
fixind factories and __init__
Jimmy-INL Nov 30, 2021
fb773e8
adding the representativity again
Jimmy-INL Dec 2, 2021
edf8fcc
Fixing passing data.
joshua-cogliati-inl Dec 2, 2021
d6f9038
first attept to fix representativity
Jimmy-INL Dec 3, 2021
d5fe3e4
adding features to the BS call
Jimmy-INL Dec 9, 2021
9d22667
slowly adding differences from Josh's pathch
Jimmy-INL Jan 26, 2022
56b4401
Adding test
Jimmy-INL Mar 2, 2022
d605ee0
Merge branch 'devel' of https://github.com/idaholab/raven into devel
Jimmy-INL Mar 31, 2022
66faee7
returning xr temperarly with dict
Jimmy-INL Apr 5, 2022
19cd707
removing unnecessay imports for python3
Jimmy-INL Apr 6, 2022
6b81b9e
adding representativity to user_manual
Jimmy-INL Apr 13, 2022
ed2dd4f
removing unnecessary commented lines
Jimmy-INL Apr 13, 2022
d780bda
Updating test description
Jimmy-INL Apr 13, 2022
8203734
more cleaning
Jimmy-INL Apr 13, 2022
9950169
adding some descriptions to the inputs
Jimmy-INL Apr 13, 2022
9ed4c36
removing old methods
Jimmy-INL Apr 13, 2022
e1d4868
camelBack
Jimmy-INL Apr 13, 2022
d5cc347
cleaning test
Jimmy-INL Apr 13, 2022
f59508b
trying to fix imports
Jimmy-INL Apr 13, 2022
6df88b6
Merge branch 'devel' into wangc/representativity
wangcj05 Apr 26, 2022
efb48e7
remove changes from plugins
wangcj05 Apr 26, 2022
3d2b542
update basic stats pp
wangcj05 Apr 26, 2022
81659af
update and clean up Basic Statistics PP
wangcj05 Apr 27, 2022
194f282
update representativity
wangcj05 Apr 27, 2022
3776f12
update representativity
wangcj05 Apr 27, 2022
223b590
Merge branch 'wangc/representativity' of https://github.com/wangcj05/…
Jimmy-INL Apr 28, 2022
9b12b89
adding linModel.py
Jimmy-INL Apr 28, 2022
5a09647
changes to the metric
Jimmy-INL Apr 28, 2022
9f503a0
updating linear representativity test
Jimmy-INL May 16, 2022
b373759
updating from upstream
Jimmy-INL May 16, 2022
a39860d
pushing test_linModel
Jimmy-INL May 16, 2022
2ecbf88
resolving conflicts
Jimmy-INL May 16, 2022
9e0c5e0
adding helper functions and replicationg metric in order to remove it…
Jimmy-INL May 24, 2022
51175d8
Merge remote-tracking branch 'upstream/devel' into Jimmy-Congjian-Rep…
Jimmy-INL May 26, 2022
21408db
removing the metric gradually
Jimmy-INL May 26, 2022
289f026
changing required metrics to zero_to_infinity
Jimmy-INL May 26, 2022
06e2146
adding initial representativity _evaluate function
Jimmy-INL May 31, 2022
cdf7811
adding pysensors to the dependencies
Jimmy-INL May 31, 2022
2e7670a
Adding reduced covariance (corrected Uncertainty)
Jimmy-INL Jun 7, 2022
f5696b5
reverting dependencies.xml
Jimmy-INL Jun 8, 2022
4964065
reporting more out puts
Jimmy-INL Jun 13, 2022
0b09fe7
documenting outs
Jimmy-INL Jun 14, 2022
0f0ad01
adding two different models for experiment and target
Jimmy-INL Jun 21, 2022
570036b
updating scipy
Jimmy-INL Aug 31, 2022
52d3ded
deleting representativity Lin test
Jimmy-INL Sep 1, 2022
dc19f37
Fixing an issue in BS that affected the test /tests/framework/user_gu…
Jimmy-INL Sep 7, 2022
df43985
adding the old dependencies
Jimmy-INL Sep 7, 2022
34a0c05
updating dependencies
Jimmy-INL Sep 7, 2022
751902d
Merge branch 'devel' into Jimmy-Congjian-Representativity
Jimmy-INL Sep 7, 2022
d0afbdf
addressing some of Congjian's comments about the manual, and dockstrings
Jimmy-INL Sep 29, 2022
3c7c56f
few changes to the manual
Jimmy-INL Oct 4, 2022
cdd6c51
updating dependencies
Jimmy-INL Jan 25, 2023
573692f
adding tests and golded files
Jimmy-INL Feb 2, 2023
98c115f
addressing few comments from wang
Jimmy-INL Feb 5, 2023
b172882
addressing some manual comments from wang
Jimmy-INL Feb 5, 2023
0d532f4
removing duplicated Representativity.py
Jimmy-INL Feb 6, 2023
e480c12
Removed old Representativity.py
Jimmy-INL Feb 6, 2023
e5589b8
Merge remote-tracking branch 'origin/Jimmy-Congjian-Representativity'…
Jimmy-INL Feb 6, 2023
2056274
deleting duplicated representativity.py
Jimmy-INL Feb 7, 2023
10c719d
deleting unnecessary metric
Jimmy-INL Feb 7, 2023
eb0fde7
modifying dockstring
Jimmy-INL Feb 7, 2023
d7297bc
adding tests
Jimmy-INL Feb 10, 2023
fa437b4
Merge branch 'devel' into newRep-Congjian
Jimmy-INL Feb 11, 2023
211e54b
updating dependencies
Jimmy-INL Feb 11, 2023
194c010
Merge branch 'newRep-Congjian' of github.com:Jimmy-INL/raven into new…
Jimmy-INL Feb 11, 2023
44214f6
adding analytic models
Jimmy-INL Feb 15, 2023
34d2d8c
updating dependencies with devel
Jimmy-INL Feb 20, 2023
bcf7b77
Delete dependencies_new.xml
Jimmy-INL Feb 20, 2023
bac01e9
adding docstrings to external models
Jimmy-INL Feb 21, 2023
74c7e53
addressing Congjian's comments about metric occurance
Jimmy-INL Mar 22, 2023
cadd061
clarifications to validation.tex
Jimmy-INL Mar 23, 2023
bc4261f
adding the format DS|Input/Output|name
Jimmy-INL Mar 29, 2023
eeec9da
changing the names of input nodes
Jimmy-INL Mar 31, 2023
b596c3c
add initSeed and regold
wangcj05 Apr 11, 2023
408940d
add dedug info
wangcj05 Apr 11, 2023
09f7608
adding rtol to the pseudo inverse and negelecting correlation in cova…
Jimmy-INL May 7, 2023
f5be7f7
Merge branch 'newRep-Congjian' of github.com:Jimmy-INL/raven into new…
Jimmy-INL May 7, 2023
493cd25
regolding
Jimmy-INL May 7, 2023
415da98
updating dependencies
Jimmy-INL May 7, 2023
3e8850c
Merge branch 'devel' into newRep-Congjian
Jimmy-INL May 7, 2023
2940523
removing unnecessary tests
Jimmy-INL May 7, 2023
598671e
Merge branch 'newRep-Congjian' of github.com:Jimmy-INL/raven into new…
Jimmy-INL May 7, 2023
296a4da
updating manual
Jimmy-INL May 12, 2023
9675884
regolding and checking rank dificient covariance matrices
Jimmy-INL May 15, 2023
a77cdb6
enhancing poorly written functions
Jimmy-INL May 18, 2023
dd91167
changing self.sampleTag to directly assign to "RAVEN_sample_ID"
Jimmy-INL Jun 5, 2023
1f87964
making manual modifications for DSS
Jimmy-INL Jun 5, 2023
6f0d9dd
updating linExpModel
Jimmy-INL Jun 5, 2023
be6e811
updating ExpModel and TarModel
Jimmy-INL Jun 5, 2023
0fd1ddb
enhancing tests description
Jimmy-INL Jun 5, 2023
7d14a7c
Merge branch 'devel' into newRep-Congjian
Jimmy-INL Jun 5, 2023
c504f74
add sampleTag for LimitSurface and SafestPoint PP
wangcj05 Jun 7, 2023
38539ae
Merge branch 'devel' of github.com:idaholab/raven into newRep-Congjian
Jimmy-INL Jun 13, 2023
f8f82ff
Merge branch 'devel' into newRep-Congjian
wangcj05 Sep 21, 2023
abf38df
fix issues in pcm pp
wangcj05 Sep 21, 2023
57b4678
fix issues when retrieve variables from data object
wangcj05 Sep 21, 2023
8fe2c47
some fix, still need to update self.stat.inputToInternal
wangcj05 Sep 21, 2023
5a92e6b
fix SubdomainBasicStatistics
wangcj05 Sep 21, 2023
80bfa38
fix validation base
wangcj05 Sep 21, 2023
91215f0
update manual
wangcj05 Sep 21, 2023
8c271cb
modifying rquations 71,74,79
Jimmy-INL Sep 28, 2023
1d33eaa
fix equation 71 and 74
wangcj05 Sep 28, 2023
28ba71c
Merge branch 'newRep-Congjian' of github.com:Jimmy-INL/raven into new…
Jimmy-INL Sep 29, 2023
5c29aec
address comments
wangcj05 Oct 17, 2023
396dbc5
delay import factory to allow definition
wangcj05 Oct 17, 2023
87cc738
Merge branch 'devel' into newRep-Congjian
wangcj05 Oct 17, 2023
645644e
revert changes
wangcj05 Oct 17, 2023
50fe87a
Merge branch 'newRep-Congjian' of github.com:Jimmy-INL/raven into new…
Jimmy-INL Oct 24, 2023
6d7d77b
addressing Joshs comment of returning a different variable instead of…
Jimmy-INL Nov 6, 2023
548f9f7
fix docs
wangcj05 Nov 6, 2023
4ca3eb7
Merge branch 'devel' into newRep-Congjian
wangcj05 Sep 19, 2024
90 changes: 86 additions & 4 deletions doc/user_manual/PostProcessors/Validation.tex
@@ -9,12 +9,12 @@ \subsubsection{Validation PostProcessors}
\begin{itemize}
\item \textbf{Probabilistic}, using probabilistic method for validation, can be used for both static and time-dependent problems.
\item \textbf{PPDSS}, using dynamic system scaling method for validation, can only be used for time-dependent problems.
% \item \textbf{Representativity}
\item \textbf{Representativity}, using the representativity (bias) factor for validation; currently it can only be used for static data.
\item \textbf{PCM}, using Physics-guided Coverage Mapping method for validation, can only be used for static problems.
\end{itemize}
%

The choices of the available metrics and acceptable data objects are specified in table \ref{tab:ValidationAlgorithms}.
The choices of the available metrics and acceptable data objects are specified in table~\ref{tab:ValidationAlgorithms}.

\begin{table}[]
\caption{Validation Algorithms and respective available metrics and DataObjects}
@@ -23,6 +23,7 @@ \subsubsection{Validation PostProcessors}
\hline
\textbf{Validation Algorithm} & \textbf{DataObject} & \textbf{Available Metrics} \\ \hline
Probabilistic & \begin{tabular}[c]{@{}c@{}}PointSet \\ HistorySet\end{tabular} & \begin{tabular}[c]{@{}c@{}}CDFAreaDifference\\ \\ PDFCommonArea\end{tabular} \\ \hline
Representativity & \begin{tabular}[c]{@{}c@{}}PointSet \\ HistorySet\end{tabular} & \begin{tabular}[c]{@{}c@{}}\end{tabular} \\ \hline
PPDSS & HistorySet & DSS \\ \hline
PCM & PointSet & (not applicable) \\ \hline
\end{tabular}
@@ -105,7 +106,7 @@ \subsubsection{Validation PostProcessors}
\item \xmlAttr{type}, \xmlDesc{required string attribute}, the sub-type of this Metric (e.g., SKL, Minkowski)
\end{itemize}
\nb The choice of the available metric is \xmlString{DSS}, please
refer to \ref{sec:Metrics} for detailed descriptions about this metric.
refer to~\ref{sec:Metrics} for detailed descriptions about this metric.
\item \xmlNode{pivotParameterFeature}, \xmlDesc{string, required field}, specifies the pivotParameter for a feature <HistorySet>. The feature pivot parameter is the shared index of the output variables in the data object.
\item \xmlNode{pivotParameterTarget}, \xmlDesc{string, required field}, specifies the pivotParameter for a target <HistorySet>. The target pivot parameter is the shared index of the output variables in the data object.
\item \xmlNode{separateFeatureData}, \xmlDesc{string, optional field}, specifies the custom feature interval to apply DSS postprocessing. The string should contain three parts; start time, `|', and end time all in one. For example, 0.0|0.5.
@@ -247,7 +248,7 @@ \subsubsection{Validation PostProcessors}
number of measurements should be equal to the number of features and in the same order as the features listed in \xmlNode{Features}.
\end{itemize}

The output of PCM is comma separated list of strings in the format of ``pri\textunderscore post\textunderscore stdReduct\textunderscore [targetName]'',
The output of PCM is a comma-separated list of strings in the format of ``pri\textunderscore post\textunderscore stdReduct\textunderscore [targetName]'',
where [targetName] is the $VariableName$ specified in DataObject of \xmlNode{Targets}.


@@ -267,3 +268,84 @@
...
<Simulation>
\end{lstlisting}


\paragraph{Representativity}
The \textbf{Representativity} post-processor is one of three \textbf{Validation} post-processors. All of them are driven by a common
Collaborator comment:
Maybe a period after "post-processors"? "in fact there is" is a strange statement after a comma.

Collaborator comment:
This description also seems a little focused on a software engineer's understanding of this interface. Can we make it more user-centric?

post-processor interface that serves as a single entry point for applying the available validation algorithms
(i.e., representativity, Physics-guided Coverage Mapping (PCM), and Dynamic System Scaling (DSS)),
so that the user sets up every \textbf{Validation} problem through the same common input structure.
The representativity theory was first developed in the neutronics community~\cite{Gandini, palmiotti1, palmiotti2} and was later extended to thermal-hydraulics applications~\cite{Epiney1, Epiney2}.
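For orientation, the representativity (bias) factor defined in that literature has the standard form shown below (this is only a sketch of the published definition; the exact expression evaluated by this post-processor may differ in normalization and correction terms):
\[
  r_{ET} = \frac{S_E^{T}\, C_p\, S_T}{\sqrt{\left(S_E^{T}\, C_p\, S_E\right)\left(S_T^{T}\, C_p\, S_T\right)}} ,
\]
where $S_E$ and $S_T$ are the sensitivity vectors of the experiment (mock) and target figures of merit with respect to the shared input parameters, $C_p$ is the prior parameter covariance matrix, and $r_{ET}=1$ indicates a fully representative experiment.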

%
\ppType{Representativity}{Representativity}
%

\begin{itemize}
\item \xmlNode{Features}, \xmlDesc{comma separated string, required field}, specifies the names of the features, which can be the measurables/observables of the mock model. Note that this nomenclature differs from the machine learning nomenclature.

\item \xmlNode{Targets}, \xmlDesc{comma separated string, required field}, contains a comma separated list of
targets. These are the figures of merit (FOMs) of the target model against which the mock model is being validated.

\item \xmlNode{featureParameters}, \xmlDesc{comma separated string, required field}, specifies the names of the parameters/inputs to the mock/prototype model.

\item \xmlNode{targetParameters}, \xmlDesc{comma separated string, required field}, specifies the names of the parameters/inputs to the target model.
Collaborator comment:
As we discussed, it is better to change these names to more meaningful names.


\item \xmlNode{pivotParameter}, \xmlDesc{string, optional field}, ID of the temporal variable of the mock model. Default is ``time''.
\nb Used only when time-dependent validation (i.e., the \xmlNode{pivotValue}-based operation) is requested.
Collaborator comment:
maybe instead of "just in case" we say "Used in the event of time-series validation" or similar. Same with the next line.

\item \xmlNode{targetPivotParameter}, \xmlDesc{string, optional field}, ID of the temporal variable in the target model. Default is ``time''.
\nb Used only when time-dependent validation (i.e., the \xmlNode{pivotValue}-based operation) is requested.
Both \textbf{PointSet} and \textbf{HistorySet} are accepted by this post-processor.
Collaborator comment:
"Indeed" is a strange term to use here; I think it can be rephrased like "This allows both PointSet and HistorySet inputs to this postprocessor", or just "indeed" can be removed.

Also, is this accurate, or does it take DataSets as well?

If the name of a given variable to be compared is unique, it can be used directly; otherwise, the variable can be specified
with the $DataObjectName|InputOrOutput|VariableName$ nomenclature.
Collaborator comment:
It seems users do not need to provide InputOrOutput.

Collaborator comment:
I agree with Congjian on this one.


Collaborator comment:
Can we remove the redundant empty lines in this next section?

\textbf{Example:}
\begin{lstlisting}[style=XML,morekeywords={subType}]
<Simulation>
...
<Steps>
<!--Multirun the prototype model-->
<MultiRun name="mcRunExp" re-seeding="20021986">
<Input class="DataObjects" type="PointSet">inputPlaceHolder2</Input>
<Model class="Models" type="ExternalModel">linModel</Model>
<Sampler class="Samplers" type="MonteCarlo">ExperimentMCSampler</Sampler>
<Output class="DataObjects" type="PointSet">outputDataMC1</Output>
</MultiRun>
<!--Multirun the target model-->
<MultiRun name="mcRunTar" re-seeding="68912002">
<Input class="DataObjects" type="PointSet">inputPlaceHolder2</Input>
<Model class="Models" type="ExternalModel">tarModel</Model>
<Sampler class="Samplers" type="MonteCarlo">TargetMCSampler</Sampler>
<Output class="DataObjects" type="PointSet">outputDataMC2</Output>
</MultiRun>
<!--Create the Representativity PostProcessor-->
<PostProcess name="PP1">
<Input class="DataObjects" type="PointSet">outputDataMC1</Input>
<Input class="DataObjects" type="PointSet">outputDataMC2</Input>
<Model class="Models" type="PostProcessor">pp1</Model>
<Output class="DataObjects" type="PointSet">pp1_metric</Output>
<Output class="OutStreams" type="Print">pp1_metric_dump</Output>
</PostProcess>
</Steps>
...
<Models>
<ExternalModel ModuleToLoad="../../../AnalyticModels/expLinModel.py" name="linModel" subType="">
<inputs>p1, p2, e1, e2, e3, bE</inputs>
<outputs>F1, F2, F3</outputs>
</ExternalModel>
<ExternalModel ModuleToLoad="../../../AnalyticModels/tarLinModel.py" name="tarModel" subType="">
<inputs>p1, p2, o1, o2, o3, bT</inputs>
<outputs>FOM1, FOM2, FOM3</outputs>
</ExternalModel>
<PostProcessor name="pp1" subType="Representativity">
<Features>outputDataMC1|F1, outputDataMC1|F2, outputDataMC1|F3</Features>
<Targets>outputDataMC2|FOM1, outputDataMC2|FOM2, outputDataMC2|FOM3</Targets><!---->
<featureParameters>outputDataMC1|p1,outputDataMC1|p2</featureParameters>
<targetParameters>outputDataMC2|p1,outputDataMC2|p2</targetParameters>
<pivotParameter>outputDataMC1|time</pivotParameter>
<targetPivotParameter>outputDataMC2|time</targetPivotParameter>
</PostProcessor>
</Models>
...
<Simulation>
\end{lstlisting}
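The \xmlNode{ExternalModel} entries in the example above point to small analytic Python models. As a rough sketch of what such a mock (experiment) model can look like — illustrative only; the coefficients and variable handling below are assumptions, not the actual expLinModel.py added by this PR:

```python
# Illustrative sketch only -- not the expLinModel.py shipped with this PR.
# A RAVEN ExternalModel module exposes a run(self, Input) function: RAVEN passes
# the sampled input variables in the Input dictionary and reads the outputs back
# from attributes set on `self`.
import numpy as np

def run(self, Input):
  """
    Evaluate a simple linear mock model F = A p + b (coefficients are made up).
    @ In, self, object, RAVEN container that will carry the output attributes
    @ In, Input, dict, sampled inputs (here p1, p2, e1, e2, e3, bE)
    @ Out, None
  """
  p = np.array([Input['p1'], Input['p2']])
  A = np.array([[1.0, 2.0],   # illustrative coefficients, not the PR's values
                [3.0, 4.0],
                [5.0, 6.0]])
  bias = np.array([Input['e1'], Input['e2'], Input['e3']]) + Input['bE']
  F = A.dot(p) + bias
  self.F1, self.F2, self.F3 = F
```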
46 changes: 46 additions & 0 deletions doc/user_manual/raven_user_manual.bib
@@ -112,3 +112,49 @@ @TechReport{RAVENtheoryManual
year = {2016},
key = {INL/EXT-16-38178}
}

@incollection{Gandini,
title={Uncertainty analysis and experimental data transposition methods based on perturbation theory},
author={Gandini, A},
booktitle={Uncertainty Analysis},
pages={217--258},
year={1988},
publisher={CRC Press, Boca Raton, Fla, USA}
}

@article{palmiotti1,
title={A global approach to the physics validation of simulation codes for future nuclear systems},
author={Palmiotti, Giuseppe and Salvatores, Massimo and Aliberti, Gerardo and Hiruta, Hikarui and McKnight, R and Oblozinsky, P and Yang, WS},
journal={Annals of Nuclear Energy},
volume={36},
number={3},
pages={355--361},
year={2009},
publisher={Elsevier}
}

@article{palmiotti2,
title={The role of experiments and of sensitivity analysis in simulation validation strategies with emphasis on reactor physics},
author={Palmiotti, Giuseppe and Salvatores, Massimo},
journal={Annals of Nuclear Energy},
volume={52},
pages={10--21},
year={2013},
publisher={Elsevier}
}

@article{Epiney1,
title={A Systematic Approach to Inform Experiment Design Through Modern Modeling and Simulation Methods},
author={Epiney, A and Rabiti, C and Davis, C},
Collaborator comment:
this reference does not indicate if it is a journal or conference proceeding

Collaborator comment:
updated

year={2019}
}

@inproceedings{Epiney2,
title={Representativity Analysis Applied to TREAT Water Loop LOCA Experiment Design},
author={Epiney, Aaron S and Woolstenhulme, Nicolas},
booktitle={International Conference on Nuclear Engineering},
volume={83785},
pages={V003T13A055},
year={2020},
organization={American Society of Mechanical Engineers}
}
105 changes: 40 additions & 65 deletions ravenframework/Models/PostProcessors/BasicStatistics.py
@@ -27,14 +27,14 @@
#External Modules End-----------------------------------------------------------

#Internal Modules---------------------------------------------------------------
from .PostProcessorInterface import PostProcessorInterface
from .PostProcessorReadyInterface import PostProcessorReadyInterface
from ...utils import utils
from ...utils import InputData, InputTypes
from ...utils import mathUtils
from ... import Files
#Internal Modules End-----------------------------------------------------------

class BasicStatistics(PostProcessorInterface):
class BasicStatistics(PostProcessorReadyInterface):
"""
BasicStatistics filter class. It computes all the most popular statistics
"""
@@ -164,92 +164,47 @@ def __init__(self):
self.sampleSize = None # number of sample size
self.calculations = {}
self.validDataType = ['PointSet', 'HistorySet', 'DataSet'] # The list of accepted types of DataObject
self.inputDataObjectName = None # name for input data object
self.setInputDataType('xrDataset')

def inputToInternal(self, currentInp):
def inputToInternal(self, inputIn):
"""
Method to convert an input object into the internal format that is
Method to select corresponding data from Data Objects and normalize the ProbabilityWeight of corresponding data
understandable by this pp.
@ In, currentInp, object, an object that needs to be converted
@ In, inputIn, dict, a dictionary that contains the input Data Object information
@ Out, (inputDataset, pbWeights), tuple, the dataset of inputs and the corresponding variable probability weight
"""
# The BasicStatistics postprocessor only accept DataObjects
self.dynamic = False
currentInput = currentInp [-1] if type(currentInp) == list else currentInp
if len(currentInput) == 0:
self.raiseAnError(IOError, "In post-processor " +self.name+" the input "+currentInput.name+" is empty.")

inpVars, outVars, dataSet = inputIn['Data'][0]
pbWeights = None
if type(currentInput).__name__ == 'tuple':
return currentInput
# TODO: convert dict to dataset, I think this will be removed when DataSet is used by other entities that
# are currently using this Basic Statisitics PostProcessor.
if type(currentInput).__name__ == 'dict':
if 'targets' not in currentInput.keys():
self.raiseAnError(IOError, 'Did not find targets in the input dictionary')
inputDataset = xr.Dataset()
for var, val in currentInput['targets'].items():
inputDataset[var] = val
if 'metadata' in currentInput.keys():
metadata = currentInput['metadata']
self.pbPresent = True if 'ProbabilityWeight' in metadata else False
if self.pbPresent:
pbWeights = xr.Dataset()
self.realizationWeight = xr.Dataset()
self.realizationWeight['ProbabilityWeight'] = metadata['ProbabilityWeight']/metadata['ProbabilityWeight'].sum()
for target in self.parameters['targets']:
pbName = 'ProbabilityWeight-' + target
if pbName in metadata:
pbWeights[target] = metadata[pbName]/metadata[pbName].sum()
elif self.pbPresent:
pbWeights[target] = self.realizationWeight['ProbabilityWeight']
else:
self.raiseAWarning('BasicStatistics postprocessor did not detect ProbabilityWeights! Assuming unit weights instead...')
else:
self.raiseAWarning('BasicStatistics postprocessor did not detect ProbabilityWeights! Assuming unit weights instead...')
if 'RAVEN_sample_ID' not in inputDataset.sizes.keys():
self.raiseAWarning('BasicStatisitics postprocessor did not detect RAVEN_sample_ID! Assuming the first dimension of given data...')
self.sampleTag = utils.first(inputDataset.sizes.keys())
return inputDataset, pbWeights

if currentInput.type not in ['PointSet','HistorySet']:
self.raiseAnError(IOError, self, 'BasicStatistics postprocessor accepts PointSet and HistorySet only! Got ' + currentInput.type)

# extract all required data from input DataObjects, an input dataset is constructed
dataSet = currentInput.asDataset()
try:
inputDataset = dataSet[self.parameters['targets']]
except KeyError:
missing = [var for var in self.parameters['targets'] if var not in dataSet]
self.raiseAnError(KeyError, "Variables: '{}' missing from dataset '{}'!".format(", ".join(missing),currentInput.name))
self.sampleTag = currentInput.sampleTag
self.raiseAnError(KeyError, "Variables: '{}' missing from dataset '{}'!".format(", ".join(missing),self.inputDataObjectName))
self.sampleTag = utils.first(dataSet.dims)
Jimmy-INL marked this conversation as resolved.

if currentInput.type == 'HistorySet':
if self.dynamic:
dims = inputDataset.sizes.keys()
if self.pivotParameter is None:
if len(dims) > 1:
self.raiseAnError(IOError, self, 'Time-dependent statistics is requested (HistorySet) but no pivotParameter \
got inputted!')
self.raiseAnError(IOError, self, 'Time-dependent statistics is requested (HistorySet) but no pivotParameter \
got inputted!')
Collaborator comment:
"was provided" instead of "got inputted"

elif self.pivotParameter not in dims:
self.raiseAnError(IOError, self, 'Pivot parameter', self.pivotParameter, 'is not the associated index for \
requested variables', ','.join(self.parameters['targets']))
else:
self.dynamic = True
if not currentInput.checkIndexAlignment(indexesToCheck=self.pivotParameter):
self.raiseAnError(IOError, "The data provided by the data objects", currentInput.name, "is not synchronized!")
self.pivotValue = inputDataset[self.pivotParameter].values
if self.pivotValue.size != len(inputDataset.groupby(self.pivotParameter)):
msg = "Duplicated values were identified in pivot parameter, please use the 'HistorySetSync'" + \
" PostProcessor to syncronize your data before running 'BasicStatistics' PostProcessor."
self.raiseAnError(IOError, msg)
self.pivotValue = dataSet[self.pivotParameter].values
if self.pivotValue.size != len(dataSet.groupby(self.pivotParameter)):
msg = "Duplicated values were identified in pivot parameter, please use the 'HistorySetSync'" + \
" PostProcessor to syncronize your data before running 'BasicStatistics' PostProcessor."
self.raiseAnError(IOError, msg)
# extract all required meta data
metaVars = currentInput.getVars('meta')
self.pbPresent = True if 'ProbabilityWeight' in metaVars else False
self.pbPresent = 'ProbabilityWeight' in dataSet
if self.pbPresent:
pbWeights = xr.Dataset()
self.realizationWeight = dataSet[['ProbabilityWeight']]/dataSet[['ProbabilityWeight']].sum()
for target in self.parameters['targets']:
pbName = 'ProbabilityWeight-' + target
if pbName in metaVars:
if pbName in dataSet:
pbWeights[target] = dataSet[pbName]/dataSet[pbName].sum()
elif self.pbPresent:
pbWeights[target] = self.realizationWeight['ProbabilityWeight']
@@ -267,6 +222,9 @@ def initialize(self, runInfo, inputs, initDict):
@ In, initDict, dict, dictionary with initialization options
@ Out, None
"""
if len(inputs)>1:
self.raiseAnError(IOError, 'Post-Processor', self.name, 'accepts only one DataObject')
self.inputDataObjectName = inputs[-1].name
#construct a list of all the parameters that have requested values into self.allUsedParams
self.allUsedParams = set()
for metricName in self.scalarVals + self.vectorVals:
@@ -284,6 +242,8 @@ def initialize(self, runInfo, inputs, initDict):
inputObj = inputs[-1] if type(inputs) == list else inputs
if inputObj.type == 'HistorySet':
self.dynamic = True
if not inputObj.checkIndexAlignment(indexesToCheck=self.pivotParameter):
self.raiseAnError(IOError, "The data provided by the input data object is not synchronized!")
inputMetaKeys = []
outputMetaKeys = []
for metric, infos in self.toDo.items():
@@ -1544,6 +1504,21 @@ def spearmanCorrelation(self, featVars, targVars, featSamples, targSamples, pbWe
da = xr.DataArray(spearmanMat, dims=('targets','features'), coords={'targets':targVars,'features':featVars})
return da

def _runLegacy(self, inputIn):
"""
This method executes the postprocessor action with the old data format. In this case, it computes all the requested statistical FOMs
@ In, inputIn, object, object contained the data to process. (inputToInternal output)
@ Out, outputSet, xarray.Dataset or dictionary, dataset or dictionary containing the results
"""
if type(inputIn).__name__ == 'PointSet':
merged = inputIn.asDataset()
elif 'metadata' in inputIn:
merged = xr.merge([inputIn['metadata'],inputIn['targets']])
else:
merged = xr.merge([inputIn['targets']])
newInputIn = {'Data':[[None,None,merged]]}
return self.run(newInputIn)

def run(self, inputIn):
"""
This method executes the postprocessor action. In this case, it computes all the requested statistical FOMs
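With the switch from PostProcessorInterface to PostProcessorReadyInterface and `setInputDataType('xrDataset')`, BasicStatistics now receives its data as a dictionary whose 'Data' entry holds (input variables, output variables, xarray.Dataset) triples, as unpacked by `inputToInternal` above. A minimal sketch of that payload (variable names and values here are illustrative assumptions, not taken from the PR's tests):

```python
# Sketch of the xrDataset-style payload consumed by the refactored BasicStatistics.
import numpy as np
import xarray as xr

nSamples = 4
ds = xr.Dataset(
    data_vars={
        'x': ('RAVEN_sample_ID', np.random.rand(nSamples)),
        'y': ('RAVEN_sample_ID', np.random.rand(nSamples)),
        # per-realization weights; BasicStatistics normalizes them before use
        'ProbabilityWeight': ('RAVEN_sample_ID', np.full(nSamples, 1.0 / nSamples)),
    },
    coords={'RAVEN_sample_ID': np.arange(nSamples)},
)

# inputToInternal unpacks inputIn['Data'][0] into (inpVars, outVars, dataSet).
inputIn = {'Data': [[['x'], ['y'], ds]]}
```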
1 change: 1 addition & 0 deletions ravenframework/Models/PostProcessors/Factory.py
@@ -37,6 +37,7 @@
from .EconomicRatio import EconomicRatio
from .ValidationBase import ValidationBase
from .Validations import Probabilistic
from .Validations import Representativity
from .Validations import PPDSS
from .Validations import PhysicsGuidedCoverageMapping
from .TSACharacterizer import TSACharacterizer