# Praxis Data Server (https://spineimage.ca) #77
## Software Design

### Global Architecture

### Per-Site Architecture

### Components

It's easy to spend a lot of money writing software from scratch. I don't think we should do that. I think what we should do is design some customizations of existing software and build packages that deploy them.

#### Data Servers

I have two options in mind for the data servers: GIN and NextCloud.

GIN is itself built on top of pre-existing open source projects: Gogs, git-annex, datalad, git, combined in a customized package. We would take it and further customize it. It is a little more sprawling than NextCloud. Being focused on neuroscience, we could easily upstream customizations we design back to them to help out the broader community.

NextCloud is a lot simpler to use than datalad. You can mount it on Windows, Linux, and macOS as a cloud disk (via WebDAV). It also has a strong company behind it, lots of users, and good apps. It's meant for more general use than science; actually, it was never designed for science. It would be harder to share any improvements we make to it, though we could publish our packages and any plugins back to the wider NextCloud ecosystem. It has some other weaknesses too.

#### Uploading/Downloading

With GIN, uploading can be done with:
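A minimal sketch with the gin client (the repo name `mydataset` is hypothetical):

```
# one-time: authenticate to the server
gin login

# create a repository on the server (this makes a local clone too)
gin create mydataset
cd mydataset

# add data and send it up; gin commits and pushes, annexed files included
cp -r /path/to/bids-dataset/* .
gin upload .
```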
Downloading just replaces the first two lines with the corresponding `gin get <user>/<repo>` step. One caveat: Windows and macOS do not have a git client built in.

With NextCloud, uploading can be done with:
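A sketch over plain WebDAV with curl (server name, user, and paths are hypothetical; `remote.php/dav/files/<user>/` is NextCloud's WebDAV endpoint):

```
# upload a file to a NextCloud server over WebDAV
curl -u alice:PASSWORD -T sub-01_T1w.nii.gz \
  https://data1.example.ca/remote.php/dav/files/alice/dataset1/sub-01_T1w.nii.gz

# downloading just reverses the arguments: fetch the URL, write a local file
curl -u alice:PASSWORD \
  https://data1.example.ca/remote.php/dav/files/alice/dataset1/sub-01_T1w.nii.gz \
  -o sub-01_T1w.nii.gz
```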
Downloading is just reversing the arguments (see the curl sketch above).

#### Versioning

GIN is based on git, so it has very strong versioning. NextCloud only supports weak versioning. But maybe we can write a plugin that improves that. Somehow. We would have to figure out a way to mount an old version of a dataset.

#### Permissions

NextCloud has federated ACLs built in: users on one instance can be granted access to shares on another. I am unsure what GIN has; since it's based on Gogs, it probably has public/private/protected datasets, all the same controls that GitHub and GitLab implement, but I don't think it supports federated ACLs. Federation with GIN might look like everyone having to have one account per site. But maybe we could improve that; perhaps we could patch GIN to support federated ACLs as well. We would need to study how NextCloud does it, how GIN does it, and see where we can fit them in.

#### Sharing

In a federated model, data-sharing is done bidirectionally: humans at each site grant each other access to datasets, one at a time. We should encourage the actual data sharing to happen via mirroring, for the sake of encouraging resiliency in the network. GitLab supports mirroring; on https://gitlab.com/$user/$repo/-/settings/repository you will find the mirror settings. We need to replicate this kind of UI in whatever we deploy.

#### Portal

For the portal, we can probably write most of it using a static site generator like hugo, plus a small bit of code to add (and remove) datasets. The dataset list can be generated either by pushing or pulling: the data servers could notify the portal (this is how I've drawn the diagrams above) or the portal could connect to the data servers in a cronjob to ask them what datasets they have. Generally, the latter is more stable, but the former is more accurate. It should be possible to keep the list of datasets, one per file, in a folder on the portal site, and have the site generator render that folder into the public index. A sketch of the pull model is below.
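A sketch of the pull variant, run from cron on the portal host (server names are hypothetical; `/api/v1/repos/search` is the Gogs/Gitea endpoint, since that's what GIN is built on):

```
#!/bin/sh
# Poll each data server for its public datasets and write one file per
# dataset into content/datasets/, which the static site generator renders.
set -e
SERVERS="data1.example.ca data2.example.ca data3.example.ca"
mkdir -p content/datasets
for server in $SERVERS; do
    curl -s "https://$server/api/v1/repos/search?limit=50" |
    jq -r '.data[].full_name' |
    while read -r repo; do
        # one file per dataset, recording where the dataset lives
        echo "https://$server/$repo" \
            > "content/datasets/$server-$(echo "$repo" | tr / -).md"
    done
done
hugo --minify   # regenerate the portal
```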
### Packaging

We should provide an automated installer for each site to deploy the software parts. It should be as automated as possible: no more than 5 manual install steps. We can build the packages in any of several formats; I think .deb is the smoothest option here. I have some experience using pacur, and we can use their package server to host the .debs. Whatever we pick, the important thing is that we deliver the software reproducibly and with as little manual customization as possible. I think we should produce two packages:
We might also need to produce uploader scripts.

## Software Development Tasks
## Open Questions
## Summary

We can build a federated data system on either GIN or NextCloud. Either one will require some software development to tune it for our use case, but much less than writing a system from scratch. Both are built on widely supported network protocols, which makes them cross-compatible and reliable, and avoids the cost of developing custom clients.
## Cost Estimates
Which works out to about 18 or 19 sites that Praxis can fund this year.
I'm going to make some demos to make this more concrete. I'm starting with NextCloud. I'm going to deploy 3 NextClouds and configure them with federation sharing.

## Hardware

The first thing to do is to get some hardware. Vultr.com has cheap VPSes. I bought three of them in Canada (the screenshot covered my datacenter selection; trust me, it's in Canada). Notice I'm choosing Debian, but any Linux option would work. Just gotta wait for them to deploy... and they're up.

## Networking

The second thing to do is set up DNS so these are live net-wide servers. I went over to my personal DNS server and added records for each VPS. Now just gotta wait for that to deploy... and they're up too:
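Verified with dig (the address shown is a placeholder, not the real VPS IP):

```
# check that the new names resolve
dig +short data1.praxis.kousu.ca
# 203.0.113.11
```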
## Sysadmin Access Control

I just need to make sure I have access secured. I'm going to do two things: lock down root logins, and add a personal account. I go to the VPS settings one at a time and grab the root passwords, then I log in and confirm the system looks about right:
Then I use my laptop's ssh key to set up key-based login:
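Probably via ssh-copy-id:

```
# install my public key into root's authorized_keys on the server
ssh-copy-id root@data1.praxis.kousu.ca
```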
And now that that works, I disable root password login, which is a pretty important security baseline:
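That's one line in `/etc/ssh/sshd_config` (key-based root login stays allowed):

```
PermitRootLogin prohibit-password
```

followed by `systemctl restart ssh`.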
In a different terminal, without disconnecting (in case we need to do repairs), verify this worked:
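For example, forcing password auth so the key can't mask the result:

```
# should now be refused: password logins for root are disabled
ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password \
    root@data1.praxis.kousu.ca
```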
I'm also going to add a personal account, as a second way in. First, invent a password:
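Any generator works, e.g.:

```
# invent a long random password
openssl rand -base64 24
```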
Then make the account:
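A sketch (the username matches the domain here, but that's just my account name):

```
# create the user with a home directory and sudo rights
useradd -m -s /bin/bash -G sudo kousu
passwd kousu
```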
Test the account:
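Something like:

```
# from my laptop: log in as the new user and confirm sudo works
ssh kousu@data1.praxis.kousu.ca
sudo -v
```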
So this means I have two ways in, the root password is disabled, and my own user password is lengthy and secure. Now repeat the same for data2.praxis.kousu.ca and data3.praxis.kousu.ca.

## Basic config

Set system hostname -> already done by Vultr, thanks Vultr (and repeat for each of the three).

Updates:
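The usual Debian routine:

```
# bring the fresh install up to date
apt update && apt upgrade -y
```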
(and repeat for each of the three)
mailer needs:
Alternatives:
I'm not sure I understand that. For example, in the context of NeuroPoly's internal data (which are currently versioned/distributed with git-annex), would it be considered "one user sharing a dataset"? And if so, would ZFS be limited for this specific use-case?
My (weak) understanding is that with zfs you have to do:
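Something like this, to get at an old version (dataset and snapshot names are made up):

```
# list snapshots of the dataset
zfs list -t snapshot tank/data

# clone a snapshot so it can be mounted somewhere
zfs clone tank/data@2021-06-01 tank/data-rollback
zfs set mountpoint=/mnt/data-rollback tank/data-rollback
```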
So actually, yes, a single zfs instance can be shared with multiple users, so long as everyone has a) direct shell access to the host and b) enough privileges to run `zfs` there. Alternately, an admin (i.e. you or me or Alex) could ssh in, mount a snapshot, and expose it more safely to users over afp://, smb://, nfs://, or sshfs://. But then users need to be constantly coordinating with their sysadmins. Maybe that's okay for slow-moving datasets like the envisioned Praxis system, but it would be pretty awkward for daily use here.
Although there is this: on Linux, you can do 'checkout' without mounting, like this:
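Via the hidden `.zfs` directory that ZFS exposes on each dataset (paths are made up):

```
# browse a snapshot directly, no clone/mount step needed
ls /tank/data/.zfs/snapshot/2021-06-01/
cp /tank/data/.zfs/snapshot/2021-06-01/sub-01/anat/sub-01_T1w.nii.gz /tmp/
```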
You still need direct access to the machine for that, though.
It looks like we are going towards a centralized solution. In brief:
Question is: where to host this centralized server?
It's demo day: we have been allocated an account on https://arbutus.cloud.computecanada.ca/. Docs at https://docs.computecanada.ca/wiki/Cloud_Quick_Start. I'm going to install GIN on it: https://gin.g-node.org/G-Node/Info/wiki/In+House. Let's see how fast I can do this.
hm okay, what's wrong? Okay, docs say I should be using the username "ubuntu". That doesn't work either.
It seems like it just hung? I deleted and remade the instance with Name = praxx. I still can't get in though:
Oh I see what the problem is:
drat. But I...added my rsa key? And it's still not working?
hm. The system log (which openstack will show you) says
so, hm. Why? Oh, I missed this step:

> Key Pair: From the Available list, select the SSH key pair you created earlier by clicking the upwards arrow on the far right of its row. If you do not have a key pair, you can create or import one from this window using the buttons at the top of the window (please see above). For more detailed information on managing and using key pairs see SSH Keys.
Delete and recreate, this time selecting my key pair:
It only allows you to init with a single keypair! Ah.
Got in. Tested logging in as both root@ and ubuntu@:
Start following https://gin.g-node.org/G-Node/Info/wiki/In+House:
the ports are listening:
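Quick checks (the ports here are the default gogs/GIN docker setup, an assumption on my part):

```
# is the GIN web port up?
curl -sI http://localhost:3000 | head -n1

# is the GIN ssh port up? (expect a rejection: no key registered yet)
ssh -p 2222 git@localhost
```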
(I don't have a key inside of GIN yet, so of course this fails, but it's listening)
I filled out the options like this:
Verify:
Change the hostname to match:
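Presumably:

```
# set the system hostname to match the public DNS name
hostnamectl set-hostname data1.praxis.kousu.ca
```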
7. [ ] TLS: actually, I have this already; I can just copy the config out of https://github.com/neuropoly/computers/tree/master/ansible/roles/neuropoly-tls-server
nginx config:
dehydrated config:
Verify:
One thing not in ansible is the gogs reverse proxy part:
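Roughly this, inside the TLS server block (assuming gogs is listening on localhost:3000, its default):

```
# proxy HTTPS traffic through to the gogs/gin web service
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```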
NOTE: I disabled ssl in '/etc/nginx/sites-enabled/acme' because it was conflicting with gogs?? I don't know what's up with that. Gotta think through that more. Maybe ansible needs another patch. It's working now:
And check the user's view (notice the TLS icon is there).
Let's see if I can mirror our public dataset. First, download it to my laptop (but not all of it; it's still pretty large; I ^C'd out of it):
Okay, now, make a repo on the new server: https://data1.praxis.kousu.ca/repo/create -> Oh, here's a bug; drat. I wonder if I can change the hostname gogs knows for itself, or if I need to rebuild it. But if I swap in the right URL, and deal with git-annex being awkward, it works:
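The swap is something like this, after creating the empty repo in the web UI (repo name hypothetical):

```
# point the clone at the new server instead of the old hostname
git remote add praxis ssh://git@data1.praxis.kousu.ca:2222/kousu/data-multi-subject.git
git push praxis

# and send along whatever annexed content I actually downloaded
git annex copy --to praxis
```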
## the port 2222 problem

Using a non-standard ssh port is a problem. I know of five solutions:
1. Pass the port explicitly. Each user uses this each time they use the server:
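A sketch (hostname reused from the demo above; repo path hypothetical):

```
# pass the port on the command line...
ssh -p 2222 git@data1.praxis.kousu.ca

# ...or bake it into the clone URL
git clone ssh://git@data1.praxis.kousu.ca:2222/someuser/dataset.git
```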
2. A `~/.ssh/config` entry. Each user adds this one-time to each new machine, at the same time as they provide their ssh key:
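Something like:

```
# ~/.ssh/config
Host data1.praxis.kousu.ca
    Port 2222
    User git
```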
3. A second IP address. ComputeCanada lets us allocate multiple IP addresses per machine. The inscription form asks if you want 1 or 2. If we had a second IP address, we could bind one of them to the underlying OS and the other to GIN. Here's someone claiming to do this with Gitlab+docker: https://serverfault.com/a/951985
4. Just swap the two ports:
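i.e., give GIN the standard port and move the system sshd out of the way (a sketch):

```
# /etc/ssh/sshd_config on the host
Port 2222
```

and then publish GIN's built-in ssh server on port 22 instead (e.g. via the docker port mapping `-p 22:22`).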
Then the sysadmins need to know to use `ssh -p 2222` when they need to log in to fix something. That will hopefully be pretty rare, though. They could even do this:
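e.g. a per-host entry in their own ssh config:

```
# each sysadmin's ~/.ssh/config
Host data1.praxis.kousu.ca
    Port 2222
```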
And users don't need to do anything special.
5. Ditch docker. The docker image comes with a built-in ssh server. If we install GIN on the base system and share the system ssh, there won't be a second port to worry about. This is more work because it requires rebuilding their package in a non-docker way. It's my preference though. I would like to build a .deb so you can "apt-get install gin" and have everything Just Work. We could also make this package deploy ...
Demo day went well, by the way. https://praxisinstitute.org/ seems happy to investigate this direction.
Shortcuts taken that should be corrected:
All of these could be fixed quickly by bringing this server under ansible, but I wrote into the ansible scripts the assumption that all servers are under *.neuro.polymtl.ca, so I'd need to fix that first. Also ...
I've been working on adding this to the lab's configuration management today (for those who have access, that's at https://github.com/neuropoly/computers/pull/227). To that end, I'm re-purposing the resources allocated to praxis-gin to be for vaughan-test.neuropoly.org, which will be our dev server for data.neuropoly.org.
And https://data.neuropoly.org will be just as good a demo for Praxis the next time we talk with them. And with the ansible work you're doing, it will even be reproducible for them to build their own https://data.praxisinstitute.org :)
Some competitors:

Possible collaborators:

Portals (where we could potentially get ourselves listed, especially if we help them out by making sure we have APIs available):
Part of our promise of standards-compliant security was to run ...
We had a meeting with Praxis today:
Some other related work that came up:
@taowa is going to contact ComputeCanada asking them to extend our allocation on https://docs.computecanada.ca/wiki/Cloud_resources#Arbutus_cloud_.28arbutus.cloud.computecanada.ca.29 from 2 VPSes to 3 -- one for data-test.neuropoly.org, one for data.neuropoly.org, and one for data.praxisinstitute.org.
We've done a lot of work on this in our private repo at https://github.com/neuropoly/computers/issues/167 (the praxis-specific part is at https://github.com/neuropoly/computers/pull/332). We've got an ansible deployment, a fork of gitea (neuropoly/gitea#1), and a demo server at https://data.praxisinstitute.org.dev.neuropoly.org/. Eventually we will want to extract those ansible scripts and publish them on Galaxy.

Today we talked to Praxis and David Cadotte again and got an update on how their data negotiations are going:
Each site is very different and needs help adapting to their environment; they have different PACS, different OSes, different levels of familiarity with the command line. David has been spending time giving tech support to some of the sites' curators to help get their data in BIDS format. We have created a wiki here to gather the information David has been teaching and anything we learn during our trial dataset uploads; it's here on GitHub but could be migrated to https://data.praxisinstitute.org once that's live (and eventually perhaps these docs could even be rolled into the ansible deployment, as a standard part of Neurogitea?). We will be in touch with Praxis's IT team in the next couple weeks so we can migrate https://data.praxisinstitute.org.dev.neuropoly.org -> https://data.praxisinstitute.org.
We got some branding feedback from Praxis Institute for the soon-to-be https://data.praxisinstitute.org:
(Note that the current demo simply uses the default Gitea theme.)
On my end, I emailed R. Foley at Praxis to ask to get DNS reassigned to our existing ComputeCanada instances, so that we will have https://data.praxisinstitute.org in place of https://data.praxisinstitute.org.dev.neuropoly.org. EDIT: R. Foley got back to say that they don't want to give us praxisinstitute.org, but will talk to their marketing team and decide on an appropriate domain we can have.
On the demo server I just saw this probe from Mongolia:
so it occurs to me that maybe we should impose geoblocking on Praxis's server. It's meant to be a pan-Canada project, so maybe we should impose firewall rules that actually enforce that it's only pan-Canadian. That's a bit tricky to do; I guess we can extract ip blocks from MaxMind's GeoIP database:
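A sketch with ipset + iptables, assuming we've already extracted the Canadian CIDR blocks into a local `ca-cidrs.txt` (that extraction step isn't shown):

```
# build an ipset of Canadian networks, then only allow ssh/https from it
ipset create canada hash:net
while read -r cidr; do ipset add canada "$cidr"; done < ca-cidrs.txt

iptables -A INPUT -p tcp -m multiport --dports 22,443 \
    -m set ! --match-set canada src -j DROP
```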
theme-praxis.css:
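The gist, using Gitea's CSS-variable theming (the colour values here are placeholders, not the exact Praxis palette):

```
/* custom/public/css/theme-praxis.css */
:root {
  --color-primary: #1b6ec2;        /* Praxis blue (placeholder) */
  --color-primary-dark-1: #155a9e; /* hover shade (placeholder) */
}
```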
Looks like: I went one step slightly further than the default themes and themed the yes/no buttons as blue/yellow (matching colours I got off https://praxisinstitute.org/) instead of the default green/red. I'll integrate this into ansible this afternoon. After that, I'll replace the logos.
I followed the instructions:
For this, I put the wordmark image at custom/public/img/neurogitea-wordmark.svg.
And did this patch to what I had above:

```diff
diff --git a/custom/public/css/theme-praxis.css b/custom/public/css/theme-praxis.css
index a1665744f..d8cf3faea 100644
--- a/custom/public/css/theme-praxis.css
+++ b/custom/public/css/theme-praxis.css
@@ -47,3 +47,9 @@
 .ui.red.buttons .button, .ui.red.button:hover {
   background-color: var(--color-warning-bg-dark-1);
 }
+
+/* the neurogitea wordmark needs some CSS resets to display properly */
+.ui.header > img.logo {
+  max-width: none;
+  width: 500px;
+}
diff --git a/custom/templates/home.tmpl b/custom/templates/home.tmpl
index d7d1d8501..2aaaa176b 100644
--- a/custom/templates/home.tmpl
+++ b/custom/templates/home.tmpl
@@ -7,7 +7,7 @@
 	</div>
 	<div class="hero">
 		<h1 class="ui icon header title">
-			{{AppName}}
+			<img class="logo" src="{{AssetUrlPrefix}}/img/neurogitea-wordmark.svg"/>
 		</h1>
 		<h2>{{.i18n.Tr "startpage.app_desc"}}</h2>
 	</div>
```

And now I've got the wordmark rendering on the home page.
Some other praxis-specific things to include:
Theming is sitting on https://github.com/neuropoly/computers/pull/332/commits/b365da08c69c67509bbcdcbffe3348cda521cfd0 (sorry, it's in the private repo; extracting and publishing to Ansible Galaxy will be a Real-Soon-Now goal).
They've made a decision: spineimage.ca. I've asked them to assign the domains:
When that's done, I'll add those domains in https://github.com/neuropoly/computers/pull/332; and then we should maybe think about pointing data.praxisinstitute.org.dev.neuropoly.org back at some servers on Amazon again, to host a staging server we can use without knocking out their prod server.
## Meeting - site_03

We had a meeting today with Praxis, including the first trial data curator. David Cadotte had helped her already curate the dataset into BIDS. We successfully uploaded it to https://spineimage.ca/TOH/site_03.

## User Documentation

David Cadotte has a draft curator tutorial 🔏. I started the same document on the wiki here, but his is further along. The next step is that David, me, and the trial curator are going to upload a trial dataset to https://data.praxisinstitute.org.dev.neuropoly.org/ together. We will be picking a meeting time via Doodle soon. The curator has been using bids-validator, but it sounded like they were using the python version, not the javascript one. The javascript one is incomplete, but the python version is even more incomplete. This is something I should check on when we get together to upload the dataset.

## Prod Plan: https://spineimage.ca

In parallel, we will finish up migrating to spineimage.ca, the "prod" site, and sometime next month we should have 4-5 curators ready.

## Future dev plan: https://spineimage.ca.dev.neuropoly.org

We'll have to repurpose the existing VMs to become prod. But I would like to keep the staging site so we can have something to experiment on. I could experiment locally, but I don't have an easy way to turn off or mock https, so it's simpler just to have a mock server with a real cert from LetsEncrypt. I'll rename it spineimage.ca.dev.neuropoly.org. But there's a problem: ComputeCanada has given us three VMs but only two public IPs, and the current version of neurogitea needs two IPs. Some ideas:
Summary for today: we were able to connect with Lisa J. from site_012, and had some pretty good success.
Also, we noticed that for site_03, the ...
## Meeting - site_012

Lisa didn't yet have her dataset curated. We got halfway through curation, and did not start at all on uploading. Also, like @mguaypaq said, we were figuring out Windows remotely as we went, never having used much of this software stack ourselves there. We didn't have Administrator on the computer she was working on. The ...
By the way, we skipped using pycharm or a virtualenv, and that worked fine. Our curators are not often developers, so explaining virtualenvs is a whole extra can of worms that derails the training.
## Backups

Put backups on https://docs.computecanada.ca/wiki/Arbutus_Object_Storage. Even if backups are encrypted, the data sharing agreement says backups need to stay within the cluster.
## Meeting - site_012

Today we had a meeting for 2 hours.
@mguaypaq has helped me merge in HTTP downloads, which means we can now host open access datasets on any server we choose, in any country we can find one. I've tested it by moving a complete copy of https://github.com/spine-generic/data-single-subject onto https://data.dev.neuropoly.org/: https://data.dev.neuropoly.org/nick.guenther/spine-generic-single/.

## Uploading

First, download the current copy:
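Something like:

```
# grab the existing copy from GitHub, including the annexed images
git clone https://github.com/spine-generic/data-single-subject
cd data-single-subject
git annex get .
```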
Then upload:
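Roughly (remote URL per the repo linked above; the exact push incantation is a sketch, not a transcript):

```
# point at the new server and push everything
git remote add neuropoly git@data.dev.neuropoly.org:nick.guenther/spine-generic-single.git
git push neuropoly master git-annex
git annex copy --to neuropoly
```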
This DigitalOcean server is crazy fast, by the way: that's 293.66Mb/s upload. Downloading from Amazon was still pretty fast, but it was "only" 100Mb/s. And the DigitalOcean server is in Toronto while the Amazon servers are (supposed to be) in Montreal. I went to its settings to make it Public; then I downloaded anonymously, with the same commands we tell people to currently use against the GitHub copy.

## Download
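i.e. the usual instructions with the hostname swapped in:

```
# anonymous download from the new server
git clone https://data.dev.neuropoly.org/nick.guenther/spine-generic-single
cd spine-generic-single
git annex get .
```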
Well, maybe I take back what I said: DigitalOcean's download was almost identical to Amazon's. Funny that the uplink was faster; usually it's the other way around.

Also to note: because Push-to-Create is turned on in our gitea config, there was very little fumbling around. A single push was enough to create the repository on the server.

To emphasize again: we now have a fully alternate copy of https://github.com/spine-generic/data-single-subject. Identical download instructions work for it; all someone has to do is swap in https://data.dev.neuropoly.org/nick.guenther/spine-generic-single/. 🎉 (fair warning though: this is a dev server). And with this arrangement, all bandwidth -- git and git-annex -- is paid for through DigitalOcean, instead of splitting the bill between GitHub and Amazon. And when we do promote this to a production server, we can get rid of the difficult contributor AWS credentials. Unfortunately I've already found one bug that we missed in neuropoly/gitea#19, but it's minor.
I'm working on this now. That wiki page is helpful but I still have to fill in some details, which I am writing down here:
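The setup boiled down to something like this (the bucket name is mine to choose; the endpoint is Arbutus's object store):

```
# get S3-style credentials out of OpenStack, then initialize the repository
openstack ec2 credentials create
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
restic -r s3:https://object-arbutus.cloud.computecanada.ca/gitea-backups init
```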
At this point, we only need to keep the restic config. We can toss the OpenStack RC file; if we need it again, we can grab it again.

Now, how to interface Gitea with restic? According to https://docs.gitea.io/en-us/backup-and-restore, backing up Gitea is the standard webapp process: dump the database and save the data folder. It has a `gitea dump` command, but the docs warn that restoring a dump is fiddly,
and I indeed ran into this while experimenting a few months ago: you cannot restore a dump into a different version of Gitea. Also, restoring is a manual process:
So I'm going to ignore `gitea dump`.

## Limitations

From the docs:
I can make tokens for everyone who will be adminning this server, but as far as ComputeCanada is concerned, all of them are me. I don't know how to handle this. Maybe when I eventually hand this off to someone else, we'll have to copy all the backups to a new bucket.

## Alternatives

According to restic, it can talk to OpenStack directly, without using the S3 protocol. But I got it working through S3, and I think that's fine.
## Backups

My existing backup scripts on ...
That's for Gitolite. Porting this to Gitea is tricky because of the required downtime. This requirement seems to complicate everything, because I want backups to run unattended. I'm tempted to ignore this requirement. I did some experiments and found that ...

## Experiment

Peek.2022-12-05.19-10.webm

I'm going to compromise:
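My guess at the shape of it (paths and bucket name are placeholders):

```
# nightly, without stopping gitea: dump the database first,
# then let restic snapshot the dump plus the data folder
sudo -u postgres pg_dump gitea > /srv/gitea/backup/gitea.sql
restic -r s3:https://object-arbutus.cloud.computecanada.ca/gitea-backups \
    backup /srv/gitea
```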
This way, while the database and contents of data/ may drift a little apart, the worst that will happen is that there are some commits in some repos that are newer than the database, or some avatars or other attachments that the database doesn't know about.

## Ansible

I'm working on coding this up in https://github.com/neuropoly/computers/pull/434. EDIT: in that PR, I decided to ignore backup consistency. I think it will be okay. Only a very busy server would have problems anyway, which our servers will definitely not be, and I'm not even convinced it's that big a problem if the git repos and avatars are slightly out of sync with the database. And Gitea already has code to handle resyncing at least some cases, because digital entropy can always cause mistakes. I think at worst, it may fall back to using an older avatar for one person.
I merged neurogitea backups today and wanted to use them for this prod server. But first I had to upgrade the OS.

## Ubuntu Upgrade

I used `do-release-upgrade`. There was a snag: the upgrade killed postgres-12 and replaced it with postgres-14; it sent me an email warning me to run `pg_upgradecluster`.

## Restore

Luckily, I had backups from December (taken above). I did a restic restore:
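Something like (the target path is a guess):

```
# pull the December snapshot back down from object storage
restic -r s3:https://object-arbutus.cloud.computecanada.ca/gitea-backups \
    restore latest --target /srv/gitea-restore
```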
unfortunately, I didn't take the backup with ...
and reloaded again:
## Gitea Upgrade

The redeploy above took care of upgrading Gitea, which went smoothly. It is now at 1.18.3+git-annex-cornerstone, and it does the inline previews (but I can't demo that here because it's private data).
I just noticed that ComputeCanada is affiliated with https://www.frdr-dfdr.ca/, sponsored by the federal government. They use Globus to upload data where we use git-annex. Should we consider recommending that instead of git? They don't seem to do access control:
which rules them out for our use case. Also, Globus doesn't do versioning, I'm pretty sure. For example, just look through https://www.frdr-dfdr.ca/discover/html/repository-list.html?lang=en and find, say, https://www.frdr-dfdr.ca/repo/dataset/6ede1dc2-149b-41a4-9083-a34165cb2537 -- it doesn't show anything labelled "versions" as far as I can see.
## Meeting - site_012

Today our site 12 curator was able to finish a dcm2bids config file and curate all subjects from her site, with David's advice. We included several t2starw images that initially David thought we should drop, until we realized they made up a large portion of Lisa's dataset. One subject was dropped -- the previous sub-hal001 -- and the other subjects' IDs were renumbered to start counting at sub-hal001. There was also some debate about whether to tag sagittal scans with acq-sag or acq-sagittal. At poly we've used acq-sagittal, but Ottawa's dataset uses acq-sag. At neuropoly we will standardize on this internally to match.

Right now curators have to manually run dcm2bids for each subject. @valosekj and @mguaypaq and I think it should be possible to write the loop for this, and write that into a script in each dataset's code/ folder (a sketch is at the end of this comment). That would make curation more robust. @valosekj pointed out we can store the imaging IDs (the IDs used for each dataset's ...

We have not yet run bids-validator on the dataset.

We spent a while trying to get the curator able to upload to spineimage.ca. I hoped we could start by committing and publishing the config file, then adding subject data in steps, refining the config file with more commits from there. I hoped it would be quick, but we hit bugs and ran out of time for today. We double-checked by using a few different servers and ports, and also PuTTY, an entirely separate program that should have nothing to do with the ssh that comes with Windows Git Bash. In all cases a closed port times out, while an open port gives this error after a successful TCP handshake. I remember the connection working back in July when we first were in touch with this site, but since then their hospital IT has done upgrades and now there seems to be some sort of firewall in the way. I will follow up with the curator by email with specific debugging instructions she can relay to her IT department.
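The per-subject loop could look something like this (a sketch for code/, assuming a tab-separated mapping of subject IDs to DICOM folders; all names here are made up):

```
#!/bin/sh
# code/run_dcm2bids.sh: convert every subject listed in code/subjects.tsv
# (columns: participant_id <TAB> dicom_dir)
set -e
tail -n +2 code/subjects.tsv | while IFS="$(printf '\t')" read -r sub dicom_dir; do
    dcm2bids -d "$dicom_dir" -p "${sub#sub-}" \
        -c code/dcm2bids_config.json -o .
done
```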
## Meeting - site_012

Today, because Halifax's IT department is apparently backlogged by months, we debugged further, but without success. We created a short python script that basically just implements the opening of an ssh connection, to isolate the problem.

As a workaround, we constructed a .tar.gz of the dataset and encrypted it using these tools: https://stackoverflow.com/a/16056298/2898673 (a sketch of the commands is at the end of this comment). We ended up with a 1GB encrypted file, site_012.tar.gz.enc, that's on Lisa's office machine. Mathieu and I have the decryption key saved securely on our machines. The curator is going to try to use a file hosting service NSHealth runs. If that fails, it might be possible to create an Issue in https://spineimage.ca/NSHA/site_012/issues and drag-and-drop the file onto it (so it becomes an attachment there); in the worst case, it can be mailed on a thumbdrive to us at Julien Cohen-Adad's lab (ref: https://neuro.polymtl.ca/contact-us.html).

In the future, perhaps as a different workaround, we can set up an HTTPS proxy; if the problem is a deep-packet-inspecting firewall, wrapping the ssh connection in an HTTPS one should defeat it. I believe these instructions solve that: https://stackoverflow.com/a/23616021/2898673. We can pursue that in the future if/when we need to do this again.

EDIT: the curator created an account for us on https://sfts1.gov.ns.ca/ and sent us the encrypted file, and @mguaypaq was able to download it and upload the contents to https://spineimage.ca. On Thursday, May the 18th, there was a final meeting to demonstrate to the Halifax curator that their work was done as far as it can be for now. If we need their help making edits to the data, we will be right back here, unless we figure out some kind of proxy situation.
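The encryption recipe from that Stack Overflow answer, roughly (filenames per the comment above; passphrase handling not shown):

```
# pack and encrypt the dataset with a symmetric key
tar czf site_012.tar.gz site_012/
openssl enc -aes-256-cbc -salt -pbkdf2 \
    -in site_012.tar.gz -out site_012.tar.gz.enc

# decryption, on our side
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in site_012.tar.gz.enc -out site_012.tar.gz
```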
## Backups ("offsite")

Right now we only have two backups: one on spineimage.ca:/var, which doesn't really count as a good backup, and the one I created above, which, like the server, is on Arbutus in Victoria, so one natural disaster could wipe out all the data. Moreover, there are not very many keyholders -- just me at the moment -- and the data is stored inside of an OpenStack project owned by @jcohenadad, all of which makes neuropoly a single point of failure.

We should have other physical locations, to protect against natural disasters; the data sharing agreement requires us to stick to ComputeCanada as a line of defense against leaks, but since recently most of their clusters run OpenStack, we can choose a different physical location than Arbutus. We should also have other keyholders, ones who do not work for neuropoly, so Praxis doesn't risk losing the data if we mess up or are attacked and get our accounts locked or wiped. Towards all this I have been asking Praxis for help, and they have found a keyholder. This person has been granted a separate ComputeCanada account and is ready to take on keyholding. They are apparently comfortable with the command line but don't have a lot of time to be involved; still, they can hold the keys and, hopefully, bootstrap disaster recovery when needed.

## Requesting Cloud Projects

In February, I emailed tech support, because despite seeing the list of alternate clouds, the sign-up form doesn't provide a way to request one. They were extremely helpful about this:
They also added ...
So I don't expect any problems requesting storage for backups from them. It sounds like they are familiar with restic and use it all the time.

## Lack of Existing Keyholders

I realized that for the existing backups, there is only one ...
I am going to add separate s3 credentials and restic keys for the other admins.
I've done this by running
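(a guess at the exact invocation; bucket name as before):

```
# add another password ("key") that can unlock the restic repository
restic -r s3:https://object-arbutus.cloud.computecanada.ca/gitea-backups key add
```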
for each person. I have the notes saved on /tmp and will be distributing them as securely as I can.
## Backup Keyholder Onboarding

On Wednesday the 24th, we are going to have a meeting with Praxis's nominee where we:
## Disk Latency Problem

I just came up against this after rebooting:
i.e. /srv/gitea wasn't mounted, so gitea wasn't running. Can we make this more reliable somehow? After a second reboot, it came up fine. So I don't know, maybe it was a fluke.
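One possible fix (a suggestion, not something deployed yet): make the gitea unit refuse to start until the mount is there, via a systemd drop-in:

```
# /etc/systemd/system/gitea.service.d/wait-for-mount.conf
[Unit]
RequiresMountsFor=/srv/gitea
```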
https://praxisinstitute.org wants to fund a Canada-wide spine scan sharing platform.
They were considering paying OBI as a vendor to set up a neuroimaging repository, but they had doubts about the quality of that solution, looked around for others, and have landed on asking us for help.
We've proposed a federated data sharing plan and they are interested in pursuing this line.
## Needs