Dear Customers,
We are currently experiencing network interruptions in the switching layer at our Tuxis-1 site.
We are investigating the issue, and will report more information when it becomes available.
All dedicated machines, VPSes and colocation machines at this location are currently expected to be impacted by interruptions.
We will confirm this once more information is known.
Status updates will be posted below:
13:39: This issue has been reopened; we are experiencing interruptions again.
We are working to fix the issue. Nodes located at site Tuxis-1 may experience interruptions.
13:55: We are still investigating; there is no update yet regarding the stability of the network.
14:10: There seems to be an issue at the OSPF level. We have configured some static routes to minimize the impact. We will keep investigating the root cause.
14:25: Setting static routes appears to have resolved the issue for now. Since committing the change, there have been no further service interruptions.
Our network team is working together with vendor support to figure out what is causing this behavior.
15:00: Still investigating, no update.
15:36: We have identified the issue as loss of OSPF sessions. OSPF is a routing protocol that ensures all switches can reach each other on their IP addresses, which is a necessity for VXLAN. We have now configured static routes on the Tuxis-1 switches, where these issues have been present since yesterday's upgrade. This provides a stable connection and should prevent loss of connectivity. This is a workaround, not a fix. We are still investigating the issue with our supplier and will keep this ticket open as long as the issue is not resolved.
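For readers curious what such a workaround looks like: the sketch below is a hedged illustration only, not our actual configuration. The FRR-style syntax, addresses, and next hops are assumptions; the real vendor CLI and addressing differ.

```
! Illustration (FRR-style syntax; all addresses are made up).
! OSPF normally advertises each switch's loopback, which VXLAN uses
! as the VTEP endpoint. While OSPF sessions flap, a static route
! keeps that loopback reachable over the directly connected link:
ip route 10.255.0.2/32 10.0.12.2
! 10.255.0.2 = loopback/VTEP of a neighbouring leaf (hypothetical)
! 10.0.12.2  = next hop on the fabric link towards that leaf
```

A static route like this does not depend on any routing protocol session, which is why it keeps traffic flowing while the OSPF problem is investigated.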
Saturday, February 28th 2026
09:08: The supplier has identified a possible cause, and we will update the configuration later today to improve logging so the possible cause can be debugged. The site has been stable since the latest update at 15:36 yesterday.
Sunday, March 1st 2026
10:19: Last night we saw a single flap, which generated some of the required debug logging. We have passed this on to the vendor so they can investigate the issue further.
Monday, March 2nd 2026
09:40: Around 09:28 there was a flap that may have caused short interruptions in a single rack. This has produced more debugging information, which we are currently sending to our supplier. We have also made a configuration change that may improve the current situation. We will keep you updated.
11:43: Around 11:38 there was another short interruption for one of our racks located in Tuxis-1. Our supplier is looking into the debug information we have supplied.
15:24: Between 15:15:56 and 15:16:06 there was another short interruption for one of our racks located in Tuxis-1. We have supplied extra debug information to our vendor.
Tuesday, March 3rd 2026
11:57: The vendor has found a possible cause and is working on confirming the issue. We are awaiting further instructions on how to proceed and how we can minimize the impact of any changes we need to make.
12:40: Between 12:16:42 and 12:33:07 we noticed OSPF flaps between our spine and one specific leaf at site Tuxis-1. We will update our vendor with the specifics of this error.
16:52: We have just had a meeting with the vendor. They have confirmed and reproduced the issue we are experiencing. They can also explain why we are still seeing impact even though we created specific configuration to minimize it. The issue has been fixed in a newer software version, which has been battle-tested by other customers of the vendor.
We will now update the configuration on the Tuxis-1 switches again to prevent further impact from this issue, and will keep monitoring the situation. Tomorrow at 10:00 CET we will decide on upgrading the next batch of switches, as announced earlier, to the software version containing the fix. This decision is pending the vendor's answer to one remaining question.