Good servers do not need periodic SMIs; stuff like temperature sensors is handled by the BMC. Of course it depends on the vendor and model, and it can be a pain to find them, but people using the Linux realtime patch, for example, really need servers that do not abuse SMIs.
Periodic SMIs in theory should only be used for stupid stuff like emulating a PS/2 keyboard and mouse from a USB one, which is safe to disable. Non-periodic SMIs, though, are used to handle machine checks, access to persistent storage (including UEFI variables), and various RAS (reliability/availability/serviceability) features.
Well, I don't know for a fact everything those interrupts do. What I know is that I had to turn them off (along with a bunch of other things) to meet strict realtime guarantees for a proof-of-concept algorithmic trading framework I did for a brokerage house. This was a few years back on Haswell, on the best hardware you could buy, including top-bin Xeons dedicated to algotrading that were clocking 5GHz by default (frequency locked, sleep states turned off, etc).
On the other hand, I never noticed anything funky with regard to memory latency that would have caused me to investigate. This is probably because I would already treat memory like a remote database and try to do as much as possible within L2 and L3.
The budget for the entire transaction (from the moment bytes reached the NIC to the moment bytes left the NIC, as measured by an external switch) was 5us, so 0.1us was below the noise threshold.