We will be performing scheduled maintenance on our billing and support systems on Saturday 16th June at 20:00 BST. A downtime period of approximately 30–45 minutes is expected.
xn9 has become unresponsive; we are currently waiting for remote hands to check the server.
Update 1: The underlying partition of the LVM volume group containing VPS filesystems appears to have disappeared, despite the RAID controller, drives, and array all reporting healthy. We are currently investigating recovery options.
Update 2: We have been able to manually reconstruct the underlying partition and LVM metadata; however, after several attempts we have been unable to assemble it in a way that makes the VM filesystems accessible. The root cause of the partition's disappearance is unclear; we suspect the reported size of the volume may have changed due to a bug or defect in the RAID controller. With further examination we may be able to recover complete or partial data, but we cannot make any guarantees; at this time no data is available. If any data is of particular importance and you can provide the complete filename, we will do our best to recover it through alternate methods.
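For background, this kind of LVM metadata reconstruction typically leans on the automatic metadata backups LVM keeps under /etc/lvm. A rough sketch of the steps involved follows; the device, volume group, and archive file names here are hypothetical placeholders, not the actual ones on xn9:

```shell
# Hypothetical sketch of LVM metadata recovery -- names are illustrative.

# LVM keeps automatic metadata backups; list what is available for the VG.
vgcfgrestore --list vg_vps

# Re-create the physical volume with its original UUID, using a saved
# metadata archive so the extents line up with the old layout.
pvcreate --uuid "<old-pv-uuid>" \
         --restorefile /etc/lvm/archive/vg_vps_00001.vg /dev/sdb1

# Restore the volume group metadata from the same archive, then activate
# the logical volumes so their filesystems can be checked.
vgcfgrestore -f /etc/lvm/archive/vg_vps_00001.vg vg_vps
vgchange -ay vg_vps
```

Even when this succeeds, the filesystems inside the logical volumes may still be damaged if the partition boundaries shifted, which matches the situation described above.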
We will now begin a recovery operation to re-create the VPS hosted on XN9 on alternative host machines. For managed servers this will include restoration from backups where available.
We sincerely apologise for this inconvenience and will continue to work with our customers to restore service as quickly as possible.
Update 3: All VPS have been migrated to alternate hardware.
One of our mail servers requires a reboot for maintenance, which will be carried out at the date and time below:
Date: 15th October 2017
Time: 21:30 GMT
Updates will be posted here if required.
Update: This has now been completed.
Starting at approximately 22:20 BST (GMT+1), xn10 experienced high packet loss. Due to limited access to the server, we gracefully shut down all VMs, made some configuration changes, and brought them back up. We apologise for the reboot.
We have now identified that one VM is the target of a low-bandwidth, high-concurrency DoS attack. The attack traffic has been null-routed and we continue to monitor.
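Null-routing a target address is usually done with a blackhole route on the edge routers. A minimal illustration on a Linux box might look like the following; the address is a placeholder from the TEST-NET-3 documentation range, not the affected VM:

```shell
# Illustrative only: drop all traffic destined for the attacked address.
# 203.0.113.10 is a documentation-range placeholder (TEST-NET-3).
ip route add blackhole 203.0.113.10/32

# Verify the route is installed, and remove it once the attack subsides.
ip route show 203.0.113.10/32
ip route del blackhole 203.0.113.10/32
```

The trade-off is that the targeted VM stays unreachable while the route is in place, but the attack traffic no longer saturates the host or its neighbours.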
We are aware of an issue with Web1 and are investigating.
We are aware of an issue with our XN1 node, we have identified the issue and are working to resolve it as quickly as possible.
Updates will be provided here.
Update @ 16:52: DC staff are progressing slowly; this is frustrating, but unfortunately there is nothing we can do to speed it up.
Update: This is now resolved.
xn3.pcsmarthosting.co.uk currently has issues with the RAID controller which we are working to resolve. Further updates will follow.
Update: We’ve traced this to a bug in the upstream Linux 4.9 kernel. The RAID controller and array appear to be healthy. We’ll bring the server back up on a secondary kernel to restore service; further investigation will be carried out in our test environment.
Update @ 21:55 – This has now begun.
Update @ 22:28 – This has now been completed and VPS are now booting back up.
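Falling back to a secondary kernel to sidestep a regression like this is normally a matter of selecting a different GRUB entry. A hedged sketch follows; the menu-entry titles and kernel version are hypothetical and depend on the distribution installed on the host:

```shell
# Hypothetical sketch: boot an older kernel once to sidestep the 4.9
# regression, without changing the permanent default entry.
grub-reboot "Advanced options>Linux, with Linux 4.4.0"
reboot

# Or, to keep the older kernel as the default until a fixed kernel lands
# (assumes GRUB_DEFAULT="saved" in /etc/default/grub):
grub-set-default "Advanced options>Linux, with Linux 4.4.0"
update-grub
```

Using `grub-reboot` for the first boot is the safer choice, since a failed boot falls back to the previous default on the next power cycle.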
This is an advanced notification of upcoming maintenance on hostnode XN17, please see below for further details.
When is this happening?
It will begin on Friday, 27th January 2017 at 22:00 GMT
How long will it take?
We are hoping this should take no longer than 60 minutes.
Is any downtime expected?
Yes, the server will need to be powered down, so downtime is expected for the duration.
Updates will be posted here.
We have been made aware of an issue with XN10 by our monitoring and are currently investigating. Further updates will be provided here when we have them.
Update: This has now been resolved.
Our web1 server has gone into a read-only state and our technicians are currently investigating.
Updates will be provided here when they’re available.
Update: Onsite staff are currently hooking up a crash cart to this machine.
Update: It appears a hard disk in the RAID10 array has failed and caused the controller to hang. This is rare, but it can happen. There are some filesystem inconsistencies we are working to repair, after which the server will be brought back online.
Update: The damage has been repaired and was not too severe; we are running a second pass now to be absolutely sure, and will then boot the server.
Update: Server is now back online and the RAID array is rebuilding.
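The repair-then-verify sequence described above is a standard pattern: one corrective pass, then a forced read-only pass to confirm the filesystem is clean. A hedged sketch, assuming ext4 and a hypothetical device name (not web1's actual layout):

```shell
# Hypothetical sketch -- device name is illustrative.
# With the filesystem unmounted, repair inconsistencies automatically.
fsck.ext4 -y /dev/mapper/vg_web1-root

# Second pass: force a full check, read-only, to confirm it is now clean.
fsck.ext4 -f -n /dev/mapper/vg_web1-root
```

Rebuild progress on a hardware controller is read through the vendor's CLI tool; if this were Linux software RAID instead, `cat /proc/mdstat` would show the rebuild percentage.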