I don't know what the "official" word is, but here are some thoughts from a hardware design perspective. They may not be good reasons, and they may not even necessarily be correct.
1) There are requirements on how much power we're allowed to draw from a PXI slot (I'm not sure how it compares to a PCI slot, but I would assume it's more, since some PCI/PCIe cards require an extra power plug that their PXI cousins don't). Additionally, there are requirements on the chassis as to how much power it *must* be able to provide to us.
This means that when you drop $10k on a brand-spankin'-new top-of-the-line oscilloscope, you know you won't have to worry about whether the overburdened power supply in your dusty five-year-old dev machine can handle it.
2) There are requirements on how much cooling each slot in a PXI chassis gets, whereas PCI slots are more of a crapshoot. It's possible to design a computer tower with a ridiculous amount of cooling for your PCI slots, but more likely you have an 80mm fan or two nowhere near your cards.
Some devices, like high-resolution DMMs, are heavily affected by ambient temperature.
3) I'm pretty sure PXI modules are slightly wider than PCI cards, which leaves more room for components and such.
4) You can buy an 18-slot PXI chassis. Just try to find a motherboard with 18 PCI slots.
5) RTSI cables are great for synchronization, but I believe there is a limit (due to signal integrity) on how many devices you can chain together, and I'm pretty sure that limit is less than 18.
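If you're curious what that kind of sync looks like from the software side, here's a minimal sketch using the nidaqmx Python package to share one card's AI start trigger with a second card. The device names ("Dev1", "Dev2") are placeholders, and it assumes the RTSI cable has been registered in NI MAX so DAQmx can route the trigger across it; treat it as a sketch, not gospel.

```python
# Sketch: two DAQ cards starting on the same trigger via nidaqmx.
# "Dev1"/"Dev2" are assumed device names, and DAQmx routing the trigger
# over RTSI assumes the cable is registered in NI MAX.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as master, nidaqmx.Task() as slave:
    master.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    slave.ai_channels.add_ai_voltage_chan("Dev2/ai0")

    # Same finite acquisition on both devices.
    for task in (master, slave):
        task.timing.cfg_samp_clk_timing(
            rate=1000.0,
            sample_mode=AcquisitionType.FINITE,
            samps_per_chan=1000,
        )

    # Slave waits on the master's AI start trigger; DAQmx routes the
    # signal over the RTSI line behind the scenes.
    slave.triggers.start_trigger.cfg_dig_edge_start_trig(
        "/Dev1/ai/StartTrigger")

    slave.start()   # armed, waiting for the trigger
    master.start()  # fires the trigger; both acquisitions begin together

    a = master.read(number_of_samples_per_channel=1000)
    b = slave.read(number_of_samples_per_channel=1000)
```

Every device you hang off that shared trigger line adds load to it, which is where the signal-integrity limit I mentioned comes from.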
I'm sure there are more reasons than these, but they're the ones that came to mind. That said, I currently own two NI PCIe cards and I love them to bits.
Hugs,
memoryleak
Full disclosure: I'm sitting at my desk on the 6th floor of NI building C right now.
P.S. Surely there is a way to tag myself as someone who is "drinking the Kool-Aid."