[Status map: AMS · DCO2 · RBX]

› 2018-12-12 - 22:10:18

Issue

Network issue, checking

Followup

• 2018-12-12 22:12:43 : Our network provider has an issue, contacting them
• 2018-12-12 22:15:07 : Checking how to solve
• 2018-12-12 22:15:22 : This is a global network issue, working on it (dco), more info will follow
• 2018-12-12 22:55:29 : Online. The telenet uplink kept working for telenet connections; more info will follow once we receive it
• 2018-12-12 23:18:08 : Our provider posted public information; we will get more details tomorrow, including how to prevent this. Message: "#DCOSTATUS issues with bgp uplink, after intervention outgoing routes are restored"

Closed

 

› 2018-10-25 - 18:47:51

Issue

Network issue, checking

Followup

• 2018-10-25 18:49:56 : Incoming attack, mitigated
• 2018-10-25 18:53:23 : Attack still going on, but mitigated

Closed

 

› 2018-10-04 - 20:00:00

Issue

Planned intervention: replacing the switches in all racks with new ones. Links will be disconnected for a few seconds for the shared colo and dedicated servers; you may experience one or two brief disconnects between 20h and 21h. We will also test a failover on the new switches. This is a follow-up on the last network issue and a fix to avoid it in the future; it also lets us upgrade the internal network to provide higher bandwidth and 10Gig links to customers.

Followup

• 2018-10-04 19:50:18 : Intervention started in rack B26
• 2018-10-04 19:52:20 : B26 done, monitoring
• 2018-10-04 19:55:20 : Intervention started in rack B24 sw01 and sw02
• 2018-10-04 20:03:21 : B24 done, monitoring
• 2018-10-04 20:10:37 : Intervention started in rack B18 sw01 and sw02
• 2018-10-04 20:14:31 : B18 done, monitoring
• 2018-10-04 20:19:18 : Intervention started in rack B15 sw01 and sw02
• 2018-10-04 20:31:08 : B15 done, monitoring
• 2018-10-04 20:37:37 : Intervention started in rack B06 sw01 and sw02
• 2018-10-04 20:43:04 : B06 done, monitoring
• 2018-10-04 20:45:09 : Intervention started in rack B07 sw01 and sw02
• 2018-10-04 20:56:53 : B07 done, monitoring
• 2018-10-04 21:00:08 : Intervention started in rack D06 sw01 and sw02
• 2018-10-04 21:02:47 : D06 done, monitoring
• 2018-10-04 21:02:56 : All work done, traffic levels normal, double checking every connection before leaving.

Closed

 

› 2018-10-02 - 16:10:39

Issue

web02 hosting server out of memory, checking

Followup

• 2018-10-02 16:38:59 : Not booting due to kernel issue
• 2018-10-02 17:22:46 : Restoring from snapshot
• 2018-10-02 19:53:54 : XEN02 emergency reboot for global kernel issue, adding patch
• 2018-10-02 19:59:00 : Some VMs not booting due to kernel issues, fixing
• 2018-10-02 20:10:50 : All kernels fixed, online

Closed

 

› 2018-08-21 - 19:11:32

Issue

Issue, checking

Followup

• 2018-08-21 19:14:45 : Issue solved, info will follow
• 2018-08-21 19:27:32 : We had a core switching failure, and it took us too long to resolve. Phone calls and notifications were sent to 2 people: one was on a plane and the other had no reception due to a phone issue. We sincerely apologize for this and will solve it by adding a third fallback contact and replacing the hardware. Not our whole network was down, only the non-redundant access ports.
• 2018-08-21 20:37:46 : What we have done: contacted the manufacturer with debug files and downgraded a version. We've also added 2 extra people to our high-critical support team for these kinds of issues. Tomorrow we will inform them how to handle these issues and monitor this. This month no traffic will be counted at all. We're sorry for the downtime and will keep working on our service.
• 2018-09-03 21:50:47 : There is a fix for this issue; it will be implemented tonight at 23:30. A core switch reboot is needed (max 2 minutes)

Closed

 

› 2018-08-06 - 16:11:27

Issue

Network issue, checking

Followup

• 2018-08-06 16:16:04 : Restored (one of the distribution switches locked, checking why)

Closed

 

› 2018-07-07 - 23:03:58

Issue

Network issue, checking

Followup

• 2018-07-07 23:09:01 : Large incoming attack, trying to mitigate
• 2018-07-07 23:16:08 : Mitigated on the router with a blackhole route, now escalating upstream (sketch after this list)
• 2018-07-07 23:16:40 : Some packet loss may occur, but the attack is mostly mitigated; it is still coming in
• 2018-07-07 23:51:49 : Upstream blocked, will take max 10 min to propagate
• 2018-07-07 23:59:55 : All filtered, watching closely
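
The two-step mitigation above (blackhole the attacked destination locally, then have the upstream filter it) is common DDoS practice; upstream blocking is often signaled with a BGP blackhole community, though the log does not say which mechanism is used here. A minimal sketch of the local step on a Linux box with iproute2, using an RFC 5737 documentation address as a placeholder:

```python
#!/usr/bin/env python3
"""Sketch: null-route an attacked destination with iproute2 on Linux.

Illustrative only; the target IP is a documentation placeholder and the
actual router platform / upstream signaling used here is not public.
"""
import subprocess

def blackhole(dst_ip: str) -> None:
    # All traffic routed to dst_ip is silently dropped at this hop,
    # sacrificing one destination to keep the shared uplinks usable.
    subprocess.run(
        ["ip", "route", "replace", "blackhole", f"{dst_ip}/32"],
        check=True,  # raise if the route could not be installed (needs root)
    )

if __name__ == "__main__":
    blackhole("203.0.113.10")
```

The trade-off: the blackholed destination goes fully offline, but the flood stops saturating links shared with other customers; blocking at the upstream then removes the traffic before it even reaches the uplinks.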

Closed

 

› 2018-06-06 - 10:45:02

Issue

Network issue, checking

Followup

• 2018-06-06 10:49:24 : Internal attack, mitigating it
• 2018-06-06 11:58:51 : Issue back, checking
• 2018-06-06 12:05:03 : This was actually a broadcast storm by a customer, changed settings on the access switch to avoid this.

Closed

 

› 2018-02-26 - 23:30:00

Issue

Planned maintenance: Network core upgrade, maximum 1 min downtime

Followup

• 2018-02-27 11:12:59 : Planned maintenance: we will add uplink capacity Friday morning (10:30). Routing capacity already upgraded.
• 2018-03-02 10:55:16 : All done, more capacity now available for everyone! All 100Mbit links have been upgraded to 1Gbit free of charge, and links that used to burst to 1Gbit are now simply 1Gbit!

Closed

 

› 2018-01-29 - 10:57:15

Issue

Network issue with our provider, investigating

Followup

• 2018-01-29 10:58:45 : Upstream issue, notified, they are working on it. Will update once we know more
• 2018-01-29 11:05:11 : info from our network provider "#dcostatus network issue. Asap more info"
• 2018-01-29 11:13:26 : Ping reply, online
• 2018-01-29 11:14:01 : Traffic back normal, more info will follow why
• 2018-01-29 11:20:33 : Info from our provider "#DCOSTATUS power issue at core switching, two feeds down, back up again will follow up on status"

Closed

 

› 2018-01-21 - 18:00:00

Issue

We will start maintenance on our XEN nodes at 18h. All nodes will be patched and rebooted for the latest updates. You may experience some downtime as we need to reboot.

Followup

• 2018-01-21 20:45:52 : All done.

Closed

 

› 2018-01-12 - 00:08:09

Issue

DCO rack B18 network / switch issue

Followup

• 2018-01-12 00:29:27 : All online for 20 minutes, patching the installation with a new software fix/update
• 2018-01-12 00:35:19 : Checking what the update does

Closed

 

› 2017-12-29 - 02:31:52

Issue

Incoming attack, mitigating

Followup

• 2017-12-29 02:34:41 : Mitigated; still going on but blocked. There was some ping loss. NTP amplification attack

Closed

 

› 2017-12-17 - 10:47:13

Issue

XEN01 issue, rebooting; same issue as xen02 some days ago, waiting on Citrix for a patch

Followup

• 2017-12-17 10:47:37 : Disk check started

Closed

 

› 2017-12-12 - 20:07:11

Issue

Supermicro VM node XEN02 issue, checking (locked; reboot takes 2-3 minutes)

Followup

• 2017-12-12 20:13:41 : Storage disk check slows the boot down, will keep this running to avoid any data loss
• 2017-12-12 20:14:39 : Disk check done, VMs booting. Will investigate afterwards; first checking that everything has booted
• 2017-12-12 20:16:15 : Support log sent to Citrix to check

Closed

 

› 2017-11-28 - 19:25:20

Issue

Rack B15 network issue (switch?)

Followup

• 2017-11-28 19:35:41 : ARP overflow, fixed

Closed

 

› 2017-11-15 - 22:26:09

Issue

Switch b15 issue, checking

Followup

• 2017-11-15 22:30:42 : Driving to the datacenter, will replace / check switch b15 (SW02)
• 2017-11-15 23:49:39 : Switch replaced 10 minutes ago; trying to find the issue on the original switch, as it was still running. All up
• 2017-11-16 00:38:16 : Found a software issue on this switch; fixed after an update. Will also update the second switch in this rack.

Closed

 

› 2017-10-29 - 15:56:21

Issue

Network issue, core routing keeps rebooting, checking

Followup

• 2017-10-29 16:14:33 : Stable for now, will inspect on site
• 2017-10-29 16:21:45 : Driving to the datacenter, will replace core
• 2017-10-29 17:59:00 : Will replace within 10 minutes, was stable for over an hour
• 2017-10-29 18:08:04 : Replaced
• 2017-10-29 23:29:41 : All perfectly stable

Closed

 

› 2017-10-07 - 20:00:00

Issue

Intervention planned on uplinks, max downtime 5 minutes

Followup

• 2017-10-07 20:10:50 : Intervention done

Closed

 

› 2017-09-26 - 00:13:20

Issue

Network issue from some sources, seems like routing issue, investigating

Followup

• 2017-09-26 00:21:35 : Network provider is investigating
• 2017-09-26 00:32:29 : Fixed, back up from multiple locations (telenet did not go down)
• 2017-09-26 00:37:10 : Feedback from our network provider about this issue, stable for the last 5+ minutes: "issues on 2 lines were giving partial routing, all routes transferred"

Closed

 

› 2017-09-24 - 14:57:16

Issue

DCO VPS node XEN04 issue, checking

Followup

• 2017-09-24 14:59:33 : No more remote access to the XEN node. ETA 45 min
• 2017-09-24 15:31:35 : Arrived, checking
• 2017-09-24 15:34:42 : Network lock in Citrix Xen. Exporting debug file for Citrix support and rebooting
• 2017-09-24 15:37:02 : Xen04 online, booting up VMs after disk check
• 2017-09-24 15:41:14 : All online and checked, support file sent to Citrix

Closed

 

› 2017-09-15 - 02:15:53

Issue

Network issue, checking

Followup

• 2017-09-15 02:17:30 : Incoming attack and packet loss, trying to mitigate
• 2017-09-15 02:18:38 : Attack blocked

Closed

 

› 2017-08-11 - 09:26:25

Issue

Rack B15: power unplugged from xen03 by a client. ETA 45 min

Followup

• 2017-08-11 09:40:39 : Solved, the client reconnected the power

Closed

 

› 2017-07-07 - 11:35:53

Issue

#dcostatus We are currently experiencing problems with an upstream partner. We are working on rerouting!

Followup

Closed

 

› 2017-07-01 - 23:52:57

Issue

Issue, checking

Followup

• 2017-07-01 23:54:00 : Incoming attack, trying to mitigate
• 2017-07-02 00:05:54 : Will contact upstream to solve this
• 2017-07-02 00:15:51 : Will be solved in a few minutes upstream
• 2017-07-02 00:18:48 : Solved, back stable, following up

Closed

 

› 2017-06-12 - 13:03:27

Issue

Firewall issue at dco

Followup

• 2017-06-12 13:12:47 : Rules in place to handle more PPS (sketch below)
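
The entry does not say which rules were added. As an illustration of one way to cap packet rates on a Linux firewall, here is a hedged sketch using iptables' hashlimit match; the threshold, chain, and protocol are assumptions, not the actual config:

```python
#!/usr/bin/env python3
"""Sketch: drop inbound UDP once a destination IP exceeds 50k packets/sec.

Illustrative only; threshold, chain, and protocol are assumptions.
"""
import subprocess

RULE = [
    "iptables", "-A", "INPUT", "-p", "udp",
    "-m", "hashlimit",
    "--hashlimit-name", "pps-cap",        # kernel-side tracking table name
    "--hashlimit-mode", "dstip",          # one rate counter per destination IP
    "--hashlimit-above", "50000/second",  # match (and drop) above 50k pps
    "-j", "DROP",
]

subprocess.run(RULE, check=True)  # needs root
```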

Closed

 

› 2017-06-11 - 22:29:04

Issue

DCO network issue, checking

Followup

• 2017-06-11 22:30:55 : Incoming attack mitigated
• 2017-06-11 23:04:24 : Issues again, checking
• 2017-06-11 23:06:12 : Back online, blocked the customer

Closed

 

› 2017-05-21 - 21:18:43

Issue

Networking issue, checking

Followup

• 2017-05-21 21:21:00 : Back online, attack auto-mitigated, checking

Closed

 

› 2017-05-15 - 12:25:26

Issue

Issue, checking

Followup

• 2017-05-15 12:27:24 : Our monitoring in France detected an issue, but from Belgium and the Netherlands everything is OK. Checking whether our (external) monitoring network has an issue or it is a peering issue
• 2017-05-15 12:29:10 : Our remote monitoring network also has an issue reaching other locations; there were no issues in DCO

Closed

 

› 2017-04-27 - 20:57:40

Issue

Networking issue, checking

Followup

• 2017-04-27 20:59:00 : Incoming attack mitigated
• 2017-04-27 21:02:26 : All stable now for 5 minutes

Closed

 

› 2017-03-31 - 23:06:21

Issue

Packet loss in network, checking

Followup

• 2017-03-31 23:09:54 : Incoming attack to network, mitigating
• 2017-03-31 23:38:43 : Attack blocked
• 2017-04-01 13:05:25 : Blocked attack destination
• 2017-04-01 13:35:39 : Upstream blocking, ETA 5 min
• 2017-04-01 13:46:48 : All back online, blocked upstream. Customer banned from network

Closed

 

› 2017-03-26 - 01:25:42

Issue

xen03 VPS node issue, ETA 50 min

Followup

• 2017-03-26 03:22:58 : Locked during snapshots/backups, implementing patch
• 2017-03-26 03:35:44 : Testing patch, all VMs on xen03 already online

Closed

 

› 2017-03-18 - 00:04:52

Issue

Network issue, checking; traffic from some locations does not work (telenet / belgacom working)

Followup

• 2017-03-18 00:07:38 : Traffic rising again, more and more locations available again; waiting on a reply from DCO / our network provider
• 2017-03-18 00:20:20 : All back to normal, traffic at normal levels, we keep monitoring

Closed

 

› 2017-02-28 - 08:09:50

Issue

Issue, checking

Followup

• 2017-02-28 08:11:21 : Global network issue from our network provider
• 2017-02-28 08:13:55 : DCO working on the issue
• 2017-02-28 08:17:52 : Ping is back
• 2017-02-28 08:18:08 : All back online, waiting for a response from our network provider on what went wrong
• 2017-02-28 08:21:52 : Report received: #dcostatus network issues at colt, workaround enabled. All networks should work now.
• 2017-02-28 08:22:00 : All stable for 10 min, monitoring

Closed

 

› 2017-02-04 - 22:33:34

Issue

Network issue, checking

Followup

• 2017-02-04 22:36:18 : High incoming traffic, DDoS attack
• 2017-02-04 22:38:40 : IPs banned, ping normal

Closed

 

› 2017-01-01 - 01:06:27

Issue

Issue, checking

Followup

• 2017-01-01 01:09:38 : No connection with our core routing. Driving to the datacenter, ETA 30 min
• 2017-01-01 01:35:01 : Leap second issue.
• 2017-01-01 01:09:59 : All online, implementing fix
• 2017-01-01 01:44:10 : All patches done, testing one last time by simulating a leap second after boot (sketch after this list)
• 2017-01-01 01:45:04 : All OK and online. "Happy" New Year!
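
The log does not say how the leap second was simulated. For reference, Linux arms leap-second handling through adjtimex(2), and the pending state can be inspected without changing anything. A read-only sketch, assuming glibc on x86-64 (struct layout mirroring <sys/timex.h> for that platform):

```python
#!/usr/bin/env python3
"""Sketch: read the kernel leap-second state via adjtimex(2), change nothing.

Assumes Linux with glibc on x86-64; other platforms may lay the struct
out differently.
"""
import ctypes

STA_INS = 0x0010  # status bit: leap second will be inserted at next UTC midnight
TIME_INS = 1      # adjtimex() return value meaning "insertion pending"

class Timex(ctypes.Structure):
    _fields_ = [
        ("modes", ctypes.c_uint),     ("offset", ctypes.c_long),
        ("freq", ctypes.c_long),      ("maxerror", ctypes.c_long),
        ("esterror", ctypes.c_long),  ("status", ctypes.c_int),
        ("constant", ctypes.c_long),  ("precision", ctypes.c_long),
        ("tolerance", ctypes.c_long), ("tv_sec", ctypes.c_long),
        ("tv_usec", ctypes.c_long),   ("tick", ctypes.c_long),
        ("ppsfreq", ctypes.c_long),   ("jitter", ctypes.c_long),
        ("shift", ctypes.c_int),      ("stabil", ctypes.c_long),
        ("jitcnt", ctypes.c_long),    ("calcnt", ctypes.c_long),
        ("errcnt", ctypes.c_long),    ("stbcnt", ctypes.c_long),
        ("tai", ctypes.c_int),        ("_pad", ctypes.c_int * 11),
    ]

libc = ctypes.CDLL("libc.so.6", use_errno=True)
tx = Timex(modes=0)  # modes=0: pure read, no clock changes
state = libc.adjtimex(ctypes.byref(tx))
print("leap-second insert pending:", state == TIME_INS or bool(tx.status & STA_INS))
```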

Closed

 

› 2016-11-06 - 11:32:50

Issue

Network issue from some networks, checking

Followup

• 2016-11-06 11:43:24 : Issue in the datacenter with our network provider
• 2016-11-06 11:47:38 : All online now; looks like a routing issue, more info will follow once we get an update from our network provider
• 2016-11-06 11:57:32 : All stable now for 10 minutes, traffic loads normal. During the outage we only received around a quarter of the normal traffic, so there was a routing issue; it is solved now.
• 2016-11-06 12:01:01 : Confirmation from DCO "#DCOSTATUS issues with one router, is excluded now from the routing tables. Will follow up on replacing the router and uplinks"

Closed

 

› 2016-10-30 - 07:36:58

Issue

Xen01 issue, checking

Followup

• 2016-10-30 07:46:17 : Networking lock, reboot
• 2016-10-30 07:52:59 : Server booting
• 2016-10-30 07:53:04 : All VMs booting, uploading support file to Citrix XenServer to debug the issue/hangup

Closed

 

› 2016-07-10 - 08:15:12

Issue

Checking xen01 node

Followup

• 2016-07-10 08:27:21 : No network anymore on xen01 node for unknown reason, reboot done, will send support file to Citrix to identify issue
• 2016-07-10 08:31:06 : All started, support file sent to Citrix to check (newly supported hardware!)

Closed

 

› 2016-06-27 - 23:53:04

Issue

Network issue. Checking

Followup

• 2016-06-27 23:54:51 : All ok, large attack blocked

Closed

 

› 2016-06-07 - 08:00:00

Issue

DCO planned maintenance (07/06/2016 08:00-12:00) on the UPS feeds. No interruptions expected.

Followup

• 2016-06-07 08:42:02 : B05 done
• 2016-06-07 08:51:21 : B06 done
• 2016-06-07 08:51:30 : B07 done
• 2016-06-07 09:10:03 : B15 done
• 2016-06-07 09:21:03 : B18 done
• 2016-06-07 09:28:44 : B24 done
• 2016-06-07 09:36:52 : B26 done
• 2016-06-07 09:44:53 : D06 done

Closed

 

› 2016-05-26 - 20:33:48

Issue

DCO network issue, checking

Followup

• 2016-05-26 20:35:26 : Incoming DDoS attack, trying to mitigate
• 2016-05-26 20:38:32 : Our uplinks are 100% full. Contacting our provider to try to mitigate this on their side
• 2016-05-26 20:46:05 : Working on it
• 2016-05-26 20:51:59 : Adding filters
• 2016-05-26 20:59:36 : Almost done, will be online soon
• 2016-05-26 21:01:51 : Solved, client blocked

Closed

 

› 2016-05-25 - 02:39:58

Issue

Network issue. Checking

Followup

• 2016-05-25 02:41:50 : From telenet lines there are issues to multiple locations, not only to us (looks like a telenet.be issue)

Closed

 

› 2016-05-25 - 02:35:30

Issue

CloudFlare issue on fusa.is website from telenet belgium

Followup

Closed

 

› 2016-05-18 - 22:04:43

Issue

DCO RACK B18 power fuse tripped. Driving to the datacenter to check this. No other racks impacted

Followup

• 2016-05-18 23:10:19 : Impacted customers were called and the issue is solved. A client's storage switch power supply was broken.

Closed

 

› 2016-05-18 - 20:00:02

Issue

DCO network issue, checking

Followup

• 2016-05-18 20:02:51 : No more access to our infrastructure. Driving to the datacenter to fix the issue. ETA 40 min
• 2016-05-18 20:07:57 : All back online
• 2016-05-18 20:12:15 : There was a large incoming attack on a customer. The system blocked it, but the attack flooded our uplinks.

Closed

 

› 2016-05-01 - 00:33:27

Issue

Network issue. Checking

Followup

• 2016-05-01 00:36:35 : Our network provider has issues. Waiting for more information
• 2016-05-01 00:38:28 : Network provider will provide us an update soon.
• 2016-05-01 00:40:56 : Traffic flowing again. Will update with the cause
• 2016-05-01 00:47:37 : Ping ok from all networks. Traffic normal
• 2016-05-01 01:01:11 : Loss again
• 2016-05-01 01:02:13 : Back stable. We keep monitoring this
• 2016-05-01 01:06:27 : All stable again. Update from our network provider: "#DCOSTATUS issues with 2 uplinks, are on top of it"
• 2016-05-01 02:25:54 : Another interruption
• 2016-05-01 02:30:45 : Back online. Monitoring

Closed

 

› 2016-04-12 - 16:54:43

Issue

DCO rack B24 issues, checking

Followup

• 2016-04-12 16:56:52 : Driving to the datacenter, ETA 17:40
• 2016-04-12 17:45:55 : Switch power supply broken. Replaced with a spare.
• 2016-04-12 17:46:38 : All online. We will install a new switch tonight, around 23h. For now everything is online.
• 2016-04-12 17:50:16 : All OK. Around 23h we will place the new switch; you can expect 1 or 2 disconnects of a few seconds while we swap it.

Closed

 

› 2016-04-06 - 01:51:54

Issue

Network issue. Checking

Followup

• 2016-04-06 01:54:23 : Issue with our network provider, only ICMP
• 2016-04-06 01:56:49 : DCO working on it; received a reply from them
• 2016-04-06 01:59:15 : Will update here once we know more from the DCO team about this issue
• 2016-04-06 02:16:14 : Traffic back flowing, update with cause in a minute
• 2016-04-06 02:18:53 : Update from DCO: "core router fail, all traffic diverted". More information will be added if available. All should be stable again, but we will monitor this and explain how to prevent it
• 2016-04-06 02:26:58 : Traffic levels back to normal
• 2016-04-06 02:32:47 : Update from DCO: "All traffic back online, will check the physical core router to get its status. Currently one transit link of three down". All traffic normal now; will update when we know more and how to prevent it
• 2016-04-06 08:25:19 : Update from dco "all transit links are up again, one core router will be replaced in next days due to memory issues"

Closed

 

› 2016-04-05 - 10:02:10

Issue

VPS07 N166 disk replacement within 10 minutes

Followup

• 2016-04-05 10:09:20 : Replaced
• 2016-04-05 11:11:58 : Restoring from snapshot, raid array locked.

Closed

 

› 2016-03-27 - 00:50:19

Issue

Small network interruption, checking why. Back ok within 5 minutes.

Followup

• 2016-03-27 00:50:38 : Enabled more logging; we will investigate what happened to avoid this happening again.

Closed

 

› 2016-03-23 - 03:24:57

Issue

Attack on the fusa domain; no client impact, but our site could experience some issues

Followup

• 2016-03-23 03:27:09 : Attack on the fusa domain blocked; still going on, but blocked for now. Our site could show some timeouts in the next hour.
• 2016-03-23 03:29:12 : The attack used the WordPress pingback function against our domain, so WordPress users: please secure your pingback option (a self-check sketch follows below). More information: https://blog.sucuri.net/2014/03/more-than-162000-wordpress-sites-used-for-distributed-denial-of-service-attack.html
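
WordPress exposes its XML-RPC methods via standard introspection, so you can check whether your own site still advertises the pingback.ping method abused in this attack. A minimal sketch using only the Python standard library; the URL is a placeholder:

```python
#!/usr/bin/env python3
"""Sketch: check whether a WordPress site exposes the XML-RPC pingback method.

Only test sites you operate yourself; the URL below is a placeholder.
"""
import xmlrpc.client

def pingback_exposed(site: str) -> bool:
    # WordPress serves XML-RPC at /xmlrpc.php and supports system.listMethods.
    proxy = xmlrpc.client.ServerProxy(f"{site}/xmlrpc.php")
    try:
        return "pingback.ping" in proxy.system.listMethods()
    except Exception:
        return False  # endpoint removed, blocked, or not WordPress

if __name__ == "__main__":
    print(pingback_exposed("https://example.com"))
```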

Closed

 

› 2016-03-11 - 09:48:07

Issue

DCO2: no network from provider, checking

Followup

• 2016-03-11 09:49:47 : Waiting on DCO for a solution and reply
• 2016-03-11 09:54:55 : DCO reply, solution within 5 min
• 2016-03-11 09:58:42 : DCO is working on a solution
• 2016-03-11 10:00:24 : Ping reply
• 2016-03-11 10:00:43 : All back ok, checking
• 2016-03-11 10:02:00 : All stable, some BGP issues with our provider

Closed

 

› 2016-02-27 - 14:42:49

Issue

Network issue. Checking

Followup

• 2016-02-27 14:43:56 : Our network provider has an issue. Contacting them
• 2016-02-27 14:47:08 : Back online

Closed

 

› 2016-02-21 - 13:30:23

Issue

VPS01 hardware RAID issues causing low performance. Checking

Followup

• 2016-02-21 13:36:06 : Reboot doesn't solve the issues; migrating machines to another VPS node
• 2016-02-21 13:47:45 : Migration 80% done.

Closed

 

› 2016-01-24 - 14:17:05

Issue

DCO switch in rack B07 (SW1) has an issue, checking

Followup

• 2016-01-24 14:19:02 : Switch port flapping, driving to the datacenter for a replacement
• 2016-01-24 14:24:55 : ETA 40 min with replacement switch
• 2016-01-24 15:02:34 : Arrived
• 2016-01-24 15:13:04 : Replacement switch also failed: bad cable / network / server in the ports
• 2016-01-24 15:23:20 : Narrowed down to 4 cables
• 2016-01-24 15:37:01 : Replacing 2 network cables.
• 2016-01-24 15:40:20 : Cables replaced, plugging them back in
• 2016-01-24 15:52:30 : All online; we will still replace the switch in a couple of minutes to be 100% safe
• 2016-01-24 16:37:56 : All OK already for 40 minutes. If there are still issues, contact support. Double-checking everything now (connections to servers). This issue was related to 18 ports in rack b07

Closed

 

› 2016-01-05 - 16:12:16

Issue

Large network flood, checking

Followup

• 2016-01-05 16:16:09 : Blocking the DDoSed IP
• 2016-01-05 16:19:56 : Back to normal, the IP block helped
• 2016-01-05 16:24:36 : All stable for 5 minutes, attack gone

Closed

 

› 2016-01-04 - 04:07:14

Issue

Issue, checking

Followup

• 2016-01-04 04:08:12 : Incoming flood to our network
• 2016-01-04 04:18:29 : Our script blocked the DDoS source; it took 10 minutes, we will try to improve this
• 2016-01-04 04:18:50 : All normal now; we keep monitoring and checking how to optimize this
• 2016-01-04 04:24:09 : Changed the detection parameters (a sketch of this kind of monitor follows below)
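
The detection script itself is not public. As an illustration of the approach these entries describe (poll traffic counters on a fixed interval and alarm above a threshold), here is a minimal sketch for a Linux host; the interface name, threshold, and interval are assumptions:

```python
#!/usr/bin/env python3
"""Sketch of a flood detector: poll /proc/net/dev and alarm on high pps.

Illustrative only; interface, threshold, and interval are assumptions,
and the real script's blocking action is represented by a print().
"""
import time

IFACE = "eth0"            # hypothetical uplink interface
THRESHOLD_PPS = 500_000   # hypothetical alarm level, packets per second
INTERVAL = 120            # poll every 2 minutes, as the 2015-09-19 entry describes

def rx_packets(iface: str) -> int:
    """Return the cumulative receive packet count for one interface."""
    with open("/proc/net/dev") as f:
        for line in f:
            name, sep, rest = line.partition(":")
            if sep and name.strip() == iface:
                return int(rest.split()[1])  # second rx column = packets
    raise ValueError(f"interface {iface!r} not found")

def main() -> None:
    last = rx_packets(IFACE)
    while True:
        time.sleep(INTERVAL)
        now = rx_packets(IFACE)
        pps = (now - last) / INTERVAL
        last = now
        if pps > THRESHOLD_PPS:
            # The real script would identify and block the flooded IP here.
            print(f"ALERT: {pps:.0f} pps on {IFACE}, possible flood")

if __name__ == "__main__":
    main()
```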

Closed

 

› 2015-12-26 - 17:57:51

Issue

High ping / timeouts for 1 minute. Our filter mitigated this DDoS attack; still monitoring

Followup

Closed

 

› 2015-12-12 - 15:22:32

Issue

Vps05 failed drive. Disk replacement in 45 minutes; impact 10 min for customers on this server. Also a kernel upgrade

Followup

• 2015-12-12 15:52:39 : Intervention started vps05
• 2015-12-12 15:57:13 : Disk ok, re-added to array, testing
• 2015-12-12 15:57:44 : All online, rebuilding in the background on vps05 (sketch below)
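
If vps05 uses Linux software RAID, the background rebuild mentioned above can be followed from /proc/mdstat. A small sketch of a progress reporter, assuming md RAID (a hardware controller would need its vendor tool instead):

```python
#!/usr/bin/env python3
"""Sketch: report Linux md RAID rebuild progress from /proc/mdstat.

Assumes software RAID; progress lines look like
"[>....]  recovery = 3.1% (123/4000) finish=12.3min speed=...".
"""
import re
import time

def rebuild_progress():
    with open("/proc/mdstat") as f:
        text = f.read()
    # Capture the operation name, percentage, and estimated finish time.
    return [
        f"{m.group(1)}: {m.group(2)}% done, ETA {m.group(3)}"
        for m in re.finditer(r"(recovery|resync)\s*=\s*([\d.]+)%.*?finish=([\w.]+)", text)
    ]

if __name__ == "__main__":
    while True:
        lines = rebuild_progress()
        print("\n".join(lines) if lines else "no rebuild in progress")
        time.sleep(60)
```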

Closed

 

› 2015-11-13 - 21:08:01

Issue

Issue, checking

Followup

• 2015-11-13 21:12:14 : Network issue on our side, checking
• 2015-11-13 21:13:06 : Incoming flood stopped, blocked
• 2015-11-13 21:20:26 : All normal for 7 min, keep monitoring

Closed

 

› 2015-10-12 - 03:18:43

Issue

vps07 raid issue, writing errors, replacement needed

Followup

• 2015-10-12 04:21:30 : All VMs back online from snapshot

Closed

 

› 2015-10-11 - 23:37:04

Issue

web02 server issues

Followup

• 2015-10-11 23:42:21 : web02 disk filesystem errors
• 2015-10-11 23:42:28 : Testing ext4 disk check for fix
• 2015-10-11 23:52:21 : Should be back online.
• 2015-10-12 00:23:56 : Importing backup
• 2015-10-12 00:24:07 : Disk issue again; other VMs on the system have no issues, checking why
• 2015-10-12 02:25:52 : Restore 95% done
• 2015-10-12 02:49:06 : Same result with snapshot restore. Restoring from Bacula
• 2015-10-12 03:18:13 : This is related to the vps07 disk issue
• 2015-10-12 06:03:18 : All back normal, restored from backup (raid failure)

Closed

 

› 2015-09-19 - 12:56:40

Issue

Issue, checking

Followup

• 2015-09-19 12:58:09 : High network traffic. Checking
• 2015-09-19 12:59:35 : All normal. Following up. High flood received
• 2015-09-19 12:59:55 : Attacked IP blocked
• 2015-09-19 13:05:06 : The attack-monitoring script now runs at an interval of 2 minutes instead of 5. This attack was blocked within 10 minutes; we try our best to improve this. Still monitoring, but all should be back online with normal latency

Closed

 

› 2015-08-24 - 21:55:43

Issue

Ping loss. High latency. Checking

Followup

• 2015-08-24 21:57:53 : Core network issue. Checking
• 2015-08-24 22:03:16 : Better now. DDoS
• 2015-08-24 22:03:27 : Incoming DDoS. Filtering now
• 2015-08-24 22:20:43 : Fully mitigated. Customer blocked

Closed

 

› 2015-07-03 - 15:57:06

Issue

Issue, checking

Followup

• 2015-07-03 16:00:46 : Global network issue across all our racks
• 2015-07-03 16:06:47 : On site in 5 minutes
• 2015-07-03 16:17:09 : All online, core issue. Still investigating to prevent this
• 2015-07-03 16:42:36 : Enabled debugging to find the issue. Will update this NOC once we know more, also contacted hardware/software manufacturer for further investigation

Closed

 

› 2015-06-29 - 01:25:26

Issue

Issue in part of the network / rack, checking

Followup

• 2015-06-29 01:25:36 : Issue in rack B06
• 2015-06-29 01:29:13 : B06 second switch (sw02) also affected
• 2015-06-29 01:35:44 : Engineer onsite within the hour
• 2015-06-29 01:59:01 : Problem isolated
• 2015-06-29 02:04:38 : Issue now over more racks, checking
• 2015-06-29 02:47:17 : All stable, client removed. A client misconfigured something and sent a lot of packets to our network; this was blocked on sw02 in B06, but after a while the client's traffic used the 'failover' link and was propagated to the main router. Investigating why it was not stopped by the switch. Still on site monitoring

Closed

 

› 2015-06-23 - 04:40:56

Issue

DCO VPS node N7 issue, checking

Followup

• 2015-06-23 04:44:41 : VPS node N7 is down for an unknown reason; driving to the datacenter, ETA 1 hour
• 2015-06-23 06:09:51 : Xen kernel error. Updating related driver
• 2015-06-23 06:15:09 : Update done, vps node N7 online

Closed

 

› 2015-02-25 - 17:04:39

Issue

vps node issue, checking

Followup

• 2015-02-25 17:08:12 : vpsnode03 rebooted for an unknown reason. Checking. Only VMs on that node are impacted
• 2015-02-25 17:12:47 : Posted support file to Citrix XenServer for checkup, all online

Closed

 

› 2015-01-28 - 14:20:57

Issue

Network loss from telenet BE to interoute, interoute informed by our network provider, no issues on our side

Followup

• 2015-01-28 14:24:39 : Confirmed issue between telenet and interoute (remote network); also issues to other providers
• 2015-01-28 14:34:09 : Telenet around 18% ping loss, also to other locations on bnix, interoute...
• 2015-01-28 14:36:53 : Seems fixed by telenet and bnix...

Closed

 