
FUSA OK as of 2019-07-22 00:03:25. You can check below whether your provider has an issue:
 

 

› 2019-06-15 - 01:18:25

Issue

There is a network outage, the datacenter is working on it.

Followup

• 2019-06-15 01:19:07 : We're back, traffic flowing normally
• 2019-06-15 01:20:20 : Some routes up; Telenet already fine, others still have some issues
• 2019-06-15 01:21:21 : All routes now live. DCO has some issues with their core routing (will update once we have full information)

Closed

 

› 2019-05-21 - 18:19:31

Issue

Network issue, checking. Packet loss to some IP ranges

Followup

• 2019-05-21 18:21:03 : Seems like an overall network issue from the datacenter; ping loss to every provider in DCO
• 2019-05-21 18:21:53 : This was an attack, now filtered automatically by the IP GUARDIAN mitigation; ICMP was temporarily disabled in the network
• 2019-05-21 18:22:02 : All OK, stable, no more ping loss; following up.
• 2019-05-21 18:24:22 : FYI: during an attack, ICMP can be disabled by the filtering, depending on the type of attack. The IP GUARDIAN filtered this attack automatically within 2 minutes; it works as expected and does its job. The IP GUARDIAN inspects everything, activates the Arbor gear automatically if there is an abnormality, and only passes legitimate traffic.

Closed

 

› 2019-05-16 - 20:43:46

Issue

Network issue, checking

Followup

• 2019-05-16 20:50:25 : Core firewall dropped packets, 100% usage for no apparent reason; created a support file and sent it to the manufacturer, investigating further, all OK
• 2019-05-16 20:52:50 : Installing latest patches
• 2019-05-16 20:54:22 : All done, support file sent to the manufacturer. CPU was "hanging" at 100% on the 'networking' process. The anti-DDoS solution is still in front of the network, so it is very unlikely that this was a DDoS attack

Closed

 

› 2019-04-22 - 20:40:15

Issue

B15 loop protect triggered; a customer is creating loops

Followup

• 2019-04-22 20:43:48 : After disabling the customer's link, B15 is stable again; this could be the reason the switch locked up a week ago
• 2019-04-22 20:54:30 : Yes, stable; disabled the customer's second port. It seems they have a switch issue that triggered this

Closed

 

› 2019-04-22 - 15:23:27

Issue

Network issue, checking

Followup

• 2019-04-22 15:24:53 : Sorry, this is an attack, mitigated by the system; ping is disabled during the attack
• 2019-04-22 15:26:40 : Attack fully filtered, ICMP back on, auto-mitigation did its work! We will need to adjust our monitoring, as we also use ICMP for monitoring, which is why it looked fully offline; IP GUARDIAN disables ICMP first during attacks
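Since IP GUARDIAN drops ICMP while an attack is being mitigated, ping-based monitoring reports a false outage, as noted above. A TCP connect check is one ICMP-free alternative; a minimal sketch (host and port are placeholders, not FUSA's actual monitoring targets):

```python
import socket

def tcp_alive(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection handles DNS resolution and the timeout for us
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # refused, timed out, unreachable: treat all of these as "down"
        return False

# Example usage: check a web server's TCP port instead of pinging it
# tcp_alive("example.com", 443)
```

Unlike ICMP echo, this stays green during mitigation as long as the service itself still answers on its TCP port.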

Closed

 

› 2019-04-14 - 12:30:18

Issue

Rack B15 secondary switch shows issues, driving to datacenter with replacement switch

Followup

• 2019-04-14 13:27:49 : After a reboot the switch is back OK, but we will replace it with a new one
• 2019-04-14 13:35:33 : Network issue, checking
• 2019-04-14 13:38:19 : Loop in B15; a customer created a loop
• 2019-04-14 13:40:32 : Core fix upgrade
• 2019-04-14 13:42:39 : All online, emergency bugfix applied for loop protect (SFP); will now replace the secondary switch in B15
• 2019-04-14 14:04:24 : B15 secondary switch replaced with a new one

Closed

 

› 2019-04-10 - 23:07:01

Issue

Rack B15 secondary switch shows issues, driving to datacenter with replacement

Followup

• 2019-04-10 23:53:13 : Arrived, checking B15 sw02
• 2019-04-10 23:56:06 : Switch reboot OK; PSU will be replaced
• 2019-04-11 00:03:12 : Replaced with new one. Monitoring on site for 20 minutes

Closed

 

› 2019-04-04 - 19:22:16

Issue

#dcostatus DDoS attacks are going on; tonight will be heavy on several ranges

Followup

• 2019-04-05 02:13:06 : Attacks going strong, there will be a solution soon!
• 2019-04-05 08:10:36 : Attacks ongoing; a solution should be there by 10 o'clock!
• 2019-04-05 08:18:57 : All online, a real solution is already in the making, give us time to fix
• 2019-04-05 11:29:07 : Packet loss on the targeted range, filtering started
• 2019-04-05 11:39:37 : Datacenter now being attacked again, working on it
• 2019-04-05 11:41:50 : #dcostatus new DDoS attack with impact on DCO services
• 2019-04-05 11:58:02 : Less ping loss
• 2019-04-05 12:10:33 : #dcostatus first attack blocked. Now a second attack.
• 2019-04-05 12:18:30 : As some of you already know, we and the datacenter are the target of DDoS blackmail; we are working to mitigate the attack. We know that you suffer huge losses from this, as do we, but we are working with upstream providers on a solution ASAP. Some protection is already built in, but now the datacenter is also a target, so we are working together to fix this.
• 2019-04-05 12:47:41 : #dcostatus added all networks to mitigation
• 2019-04-05 13:05:34 : #dcostatus all routes go to IP GUARDIAN. Attack still ongoing, but services normalising.
• 2019-04-05 13:56:38 : FYI: during mitigation ICMP is not possible, update your monitoring!
• 2019-04-05 14:44:44 : As some of you already know, we and the datacenter are the target of DDoS blackmail.
The attacks started on 28 and 29 March, with new attacks in the evenings of 1 and 3 April. In the evening of 3 April (10:25 PM) we received a blackmail message from the attacker, demanding that we pay him to stop the attacks.

At this stage we contacted Cert.be, and we are already working with the datacenter and the upstream providers on a real solution.
The initial solution we worked out was announcing the affected range to a dedicated line only; this way the attacks wouldn't affect other IP ranges.
But the attacker figured this out after some time and also started targeting other ranges.

We've invested in a commercial anti-DDoS protection (IP GUARDIAN) for the entire network that should handle big attacks like this.

We know we encountered a lot of downtime due to these attacks. We already had basic protection up to 10 Gig, but the attacks launched were bigger and not targeted at one IP address, so nullrouting had no effect at all.

We know that the attacks were really bad for you as a customer, for your reputation and your own customers. We fully understand this, but we hope you will see that we did everything we could to protect ourselves and keep things running under the circumstances. The commercial DDoS solution is made for this and costs a lot of money, so let's hope it does its job. EDIT: it does!

Colt is fully fine-tuning the product to protect us; this fine-tuning will take some time, as the attacker changes methods and Colt needs to learn the normal traffic patterns.

You can follow us on http://noc.fusa.be/

Closed

 

› 2019-04-04 - 12:31:20

Issue

Attack again; solution worked out

Followup

Closed

 

› 2019-04-03 - 20:58:09

Issue

DDoS again

Followup

• 2019-04-03 21:01:47 : Only the 185 range affected; all others operational
• 2019-04-03 21:05:53 : 185 back online
• 2019-04-03 21:08:27 : 185 under attack again
• 2019-04-03 21:12:00 : 185 back online
• 2019-04-03 21:12:12 : Still going on; all other ranges online

Closed

 

› 2019-04-01 - 20:03:01

Issue

Incoming attack on our IP range

Followup

• 2019-04-01 20:12:59 : Attacked IP range filtered, all other traffic normal
• 2019-04-01 20:22:02 : We filtered the attacked IP range; the attack is again bigger than 10 Gbit
• 2019-04-01 20:28:23 : Tomorrow we have a meeting planned about a solution for attacks bigger than our uplinks. We auto-block these, but once the uplinks are full we cannot handle more. The datacenter will help us mitigate bigger attacks faster without intervention by us.
• 2019-04-01 22:37:38 : Attack again
• 2019-04-02 13:37:00 : Meeting done. Together with the datacenter we will add a backhaul nullroute via Cogent. This will prevent us from having to drop an entire range from BGP to kill the active traffic flow; this way we can mitigate attacks bigger than our uplink capacity faster and more precisely. This month we will also implement NaWas (https://www.nbip.nl/nawas/) and offer it as a service for our entire network to really filter DDoS attacks and only allow clean traffic
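The backhaul nullroute described above is essentially a remotely-triggered blackhole (RTBH): instead of withdrawing a whole range from BGP, only the attacked /32 is announced with a blackhole community, so the upstream discards that traffic at its edge while the rest of the range stays reachable. A minimal IOS-style sketch, assuming the well-known BLACKHOLE community 65535:666 (RFC 7999) and example addresses/ASN; this is not FUSA's or Cogent's actual configuration:

```
! Tag a static route to Null0 for the attacked host (example IP)
ip route 203.0.113.45 255.255.255.255 Null0 tag 666
!
! Redistribute tagged routes into BGP with the blackhole community
route-map RTBH permit 10
 match tag 666
 set community 65535:666
!
router bgp 64500
 redistribute static route-map RTBH
 neighbor 192.0.2.1 send-community
```

The upstream matches the community and drops traffic to that /32 before it reaches the saturated uplink, which is why this mitigates attacks bigger than the uplink capacity.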

Closed

 

› 2019-03-29 - 20:01:51

Issue

DDoS again on a disabled customer's IP

Followup

• 2019-03-29 20:24:16 : Only the affected range nullrouted; will bring it back online ASAP
• 2019-03-29 20:32:25 : All customer ranges online; dedicated and shared being worked on

Closed

 

› 2019-03-29 - 09:48:59

Issue

Network issue again, checking

Followup

• 2019-03-29 09:53:18 : DDoS, datacenter notified
• 2019-03-29 10:01:45 : #DCOSTATUS Network issue. UPDATE asap
• 2019-03-29 10:02:06 : Datacenter is rerouting IP ranges to avoid other customers impact
• 2019-03-29 10:18:16 : #DCOSTATUS first mitigation done. NOC busy with next steps
• 2019-03-29 10:25:39 : Solution within 30 minutes
• 2019-03-29 10:39:56 : Bringing up IP ranges
• 2019-03-29 10:50:13 : Rerouting the last IP range
• 2019-03-29 11:16:07 : All online on a dedicated line to prevent impact on other ranges

Closed

 

› 2019-03-28 - 13:58:02

Issue

Network issue, checking

Followup

• 2019-03-28 14:00:02 : Big incoming DDoS, trying to mitigate, packet loss
• 2019-03-28 14:04:16 : Attack blocked
• 2019-03-28 14:06:20 : All normal, following up
• 2019-03-28 14:08:45 : Attack again, filtering again
• 2019-03-28 14:17:55 : Will do upstream blocking on the attacked IP address
• 2019-03-28 14:18:11 : Network issue, update ASAP #dcostatus
• 2019-03-28 14:30:38 : #dcostatus origin of the problems found. DDoS mitigation in progress.
• 2019-03-28 14:31:48 : #dcostatus reason found. DDoS mitigation in progress
• 2019-03-28 14:39:52 : DDoS really big now; the datacenter is working to mitigate and block it
• 2019-03-28 14:51:25 : #dcostatus reason for the problems found: DDoS mitigation in progress. We're working quickly on a fix!
• 2019-03-28 15:16:32 : #dcostatus mitigation still ongoing. Our team is working diligently on a solution.
• 2019-03-28 15:16:35 : Back OK, following up
• 2019-03-28 15:29:33 : #dcostatus DDoS under control. Cleaning up now! An update on the full fix will follow.

Closed

 

› 2019-02-18 - 10:26:46

Issue

Some ping loss. Checking

Followup

• 2019-02-18 10:31:52 : Packet loss from some sources, contacting provider
• 2019-02-18 10:40:37 : Issue was upstream; now normal

Closed

 