I strongly suggest you get a dedicated controller for the mission-critical tasks (think cFP or cRIO) and handle the non-mission-critical tasks on the PC (like displaying data, logging to a database, or wherever). This means that if the PC goes offline for some reason, the controller continues humming away at what it does best: deterministic control. Then you only really need to worry about infrastructure support (e.g. a UPS) for the controller - it can continue to control and save data locally while the other system is offline. Also, if you go to the PC to get data off it (I don't expect you're going to run the system completely untouched for a year, right? Not even look at any of the data?), doing so won't interrupt the process. A rough sketch of the idea is below.
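Here's a very rough Python sketch of the split I mean (the real thing would be LabVIEW RT on the cRIO; the addresses, port, and loop rate are just placeholders): the control loop logs locally and only does a best-effort, non-blocking push to the PC, so a PC outage can never stall it.

```python
import socket
import time

PC_ADDRESS = ("192.168.1.10", 6005)   # hypothetical PC host/port
LOOP_PERIOD_S = 0.01                  # example 100 Hz control loop

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setblocking(False)               # never wait on the network

def read_sensors():
    return 0.0                        # placeholder for real I/O

def update_control(measurement):
    return 0.0                        # placeholder for the control law

with open("local_log.csv", "a") as log:
    next_tick = time.monotonic()
    while True:
        measurement = read_sensors()
        output = update_control(measurement)

        # 1) Local logging always happens - this is the data you trust.
        log.write(f"{time.time()},{measurement},{output}\n")

        # 2) Best-effort publish to the PC; if it's offline, just drop the packet.
        try:
            sock.sendto(f"{measurement},{output}".encode(), PC_ADDRESS)
        except OSError:
            pass                      # PC down? Control keeps running regardless.

        next_tick += LOOP_PERIOD_S
        time.sleep(max(0.0, next_tick - time.monotonic()))
```

The point isn't the code itself - it's that nothing the PC does (or fails to do) sits in the controller's critical path.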
This is a common option in mission-critical systems: we built a similar system a few years ago with 3 parallel controllers (PXI) that could take over from each other within 1 ms of a detected failure (those specs are probably overkill for your application, but the technology remains the same). We achieved this using reflective memory (a PXI card in each controller with a fiber-optic link between them, so they all "shared" the same memory) - this worked really well. Another option is to stream the shared state over a local LAN dedicated to the controllers.
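For the LAN variant, the pattern is basically heartbeat-plus-state: the active controller broadcasts its state at a fixed rate, and a standby promotes itself if the heartbeat goes stale. A hedged sketch (the subnet, port, and timeouts are made up, and real failover timing depends entirely on your hardware and OS, not on Python):

```python
import socket
import time

BROADCAST = ("192.168.10.255", 7000)   # hypothetical dedicated control LAN
HEARTBEAT_PERIOD_S = 0.0005            # active controller publishes every 0.5 ms
FAILOVER_TIMEOUT_S = 0.001             # standby takes over after 1 ms of silence

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.bind(("", 7000))
sock.settimeout(FAILOVER_TIMEOUT_S)

def run_as_primary(state):
    """Do the control work and publish heartbeat + shared state for the standbys."""
    while True:
        sock.sendto(repr(state).encode(), BROADCAST)
        # ... real control work goes here ...
        time.sleep(HEARTBEAT_PERIOD_S)

def run_as_standby():
    """Mirror the primary's state; promote ourselves if it goes quiet."""
    last_state = None
    while True:
        try:
            data, _ = sock.recvfrom(4096)
            last_state = data                    # local copy of the "shared memory"
        except socket.timeout:
            return run_as_primary(last_state)    # primary is gone - take over
```

Dedicated reflective-memory hardware does the state mirroring for you with much tighter, deterministic latency; the LAN version is the cheaper, looser cousin of the same idea.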
I've never been to Amsterdam...
The answer to that question depends on just how important that determinism is. If you trust your engineers to make something that won't fail, then maybe go it alone. That said, if they misplace one bit and the whole thing comes crashing down in the last month of the experiment, you might be cranky. An even worse scenario (which I've seen many times) is when it *looks* like everything worked fine, but there's an offset or skew in your data that you don't find out about until you've published - *that* would be a nightmare!