Internet Storm Center



Tue, 23 May 2017 14:12:40 -0600

What did we Learn from WannaCry? - Oh Wait, We Already Knew That!

Tue, 23 May 2017 14:59:46 -0600

In the aftermath of last week's excitement over the WannaCry malware, I've had a lot of "lessons learned" meetings with clients. The results are exactly what you'd expect, but in some cases came as a surprise to the organizations we met with.
There was a whole outcry about not "victim shaming" during and after this outbreak, and I get that. But in most cases the infections were due to process failures that the IT group didn't know they had, and these lessons-learned sessions have contributed to improving the situation at many organizations.

The short list is below - affected companies had one or more of the following issues:

1/ Patch
Plain and simple: when vendor patches come out, apply them. For a lot of organizations, Patch Tuesday means Reboot Wednesday, or worst case Reboot Saturday. If you don't have a "test the patches" process, then in a lot of cases simply waiting a day or two (to let all the early birds test them for you) will do the job. If you do have a test process, in today's world it truly needs to take 7 days or less.
There are some hosts that you won't be patching - the million-dollar MRI machine, the IV pump or the 20-ton punch press in the factory, for instance. But you know about those, and you've segmented them away (in an appropriate way) from the internet and your production assets. This outbreak wasn't about those assets. What got hammered by WannaCry was the actual workstations and servers: the hospital stations in admitting and the emergency room, the tablet that the nurse enters your stats into, and so on - normal user workstations that either weren't patched, or were still running Windows XP.

That being said, there are always some hosts that can be patched, but can't be patched regularly: the host that's running active military operations, for instance, or the host that's running the call center for flood/rescue operations, e-health or a suicide hotline. But you can't just give up on those - in most cases there is redundancy in place so that you can update half of those clusters at a time. If there isn't, you still need to somehow get them updated on a regular schedule.

Lesson learned? If your patch cycle is longer than a week, in today's world you need to revisit your process and somehow shorten it up. Document your exceptions, put something in place to mitigate that risk (network segmentation is a common one), and get senior management to sign off on the risk and the mitigation.

2/ Unknown Assets are waiting to Ambush You

A factor in this last attack was hosts that weren't in IT's inventory. In my group of clients, this meant hosts controlling billboards or TVs running ads in customer service areas (the menu board at the coffee shop, the screen telling you about retirement funds while you wait in line at the bank, and so on). If this had been a Linux worm, we'd be talking about projectors, TVs and access points today.

One and all, I pointed those folks back to the Critical Controls list. In plain English, the first item is "know what's on your network" and the second item is "know what is running on what's on your network."

If you don't have a complete picture of these two, you will always be exposed to whatever new (or old) malware tests the locks at your organization.
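A first pass at the "know what's on your network" control can be as simple as sweeping your address space and recording what answers. The sketch below is illustrative only (and assumes a Linux/Unix host running Python 3.9+); a real inventory should also pull from ARP tables, DHCP leases, switch CAM tables and an authenticated scanner, since a simple sweep misses firewalled hosts:

```python
# Minimal inventory sweep (illustrative): record which hosts in a
# subnet accept TCP connections on a given port. A host that answers
# on 445 but is not in your asset database is exactly the kind of
# surprise that WannaCry punished.
import ipaddress
import socket

def tcp_alive(network: str, port: int = 445, timeout_s: float = 0.3) -> list[str]:
    """Return hosts in `network` (CIDR) accepting TCP connections on `port`."""
    alive = []
    for host in ipaddress.ip_network(network).hosts():
        try:
            # create_connection completes the TCP handshake, then we close
            with socket.create_connection((str(host), port), timeout=timeout_s):
                alive.append(str(host))
        except OSError:
            pass  # refused, filtered, or timed out -> not listening
    return alive
```

Comparing each sweep's result against the previous one highlights newly appeared (unknown) assets.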

3/ Watch the News
.... And I don't mean the news on TV. Your vendors (in this case Microsoft) have news feeds, and there are a ton of security-related news sites, podcasts and feeds (this site is one of those; our StormCast podcast is another). Folks who watch the news knew about this issue starting back in 2015, when Microsoft started advising us to disable SMB1, then again last year (2016) when Microsoft posted their "we're pleading with you, PLEASE disable SMB1" post. We knew specifically about the vulnerabilities used by WannaCry in January when the Shadowbrokers dump happened, we knew again when the patches were released in March, and we knew (again, much more specifically) when those tools went live in April. In short, we were TOLD that this was coming; by the time this was on the TV media, it was very old news.

4/ Segment your network, use host firewalls
In most networks, workstation A does not need SMB access to workstation B. Neither of them needs SMB access to the mail server or the SQL host. They do need that access to the SMB-based shares on the file and print servers, though. If you must have SMB version 1 at all, then you have some other significant issues to look at.
Really, what this boils down to is the Critical Controls again. Know what services are needed and by whom, and permit that. Set up deny rules on the network or on host firewalls for the things that people don't need - or best case, set up denies for everything else. I do realize that this is not 100% practical. For instance, denying SMB between workstations is a tough one to implement, since most admin tools need that same protocol. Many organizations only allow SMB to workstations from server or management subnets, and that seems to work really nicely for them. It's tough to get sign-off on that sort of restriction; management will often see it as a drastic measure.

Disabling SMB1 should have happened months ago, if not year(s) ago.

5/ Have Backups
Many clients found out *after* they were infected by WannaCry that their users were storing data locally. Don't be that company - either enforce central data storage, or make sure your users' local data is backed up somehow. Getting users to sign off that their local data is ephemeral only - that it's not guaranteed to be there after a security event - is good advice, but after said security event IT generally finds out that even with that signoff, everyone in the organization still holds them responsible.

All too often, backups fall on the shoulders of the most junior staff in IT. Sometimes that works out really well, but all too often it means that backups aren't tested, restores fail (we call that "backing up air"), or critical data is missed.

Best just to back up your data (all your data) and be done with it.
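The "backing up air" failure is cheap to catch: periodically do a test restore and compare it file-by-file against the source. A minimal sketch (illustrative; the function names are mine, and a real job would also check permissions, timestamps and open files):

```python
# Sketch: verify a test restore by comparing SHA-256 digests of every
# file under the source tree against the restored copy. Missing or
# mismatched files are exactly the "backing up air" problem.
import hashlib
from pathlib import Path

def tree_digests(root: str) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 hex digest."""
    root_path = Path(root)
    return {
        str(p.relative_to(root_path)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root_path.rglob("*"))
        if p.is_file()
    }

def verify_restore(source: str, restored: str) -> list[str]:
    """Return relative paths that are missing or differ in the restore."""
    src, dst = tree_digests(source), tree_digests(restored)
    return sorted(p for p in src if dst.get(p) != src[p])
```

An empty list back from `verify_restore` is the evidence you want before you need the backup, not after.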

6/ Have a Plan

You can't plan for everything, but everyone should have had a plan for the aftermath of WannaCry. The remediation for this malware was the classic "nuke from orbit" - wipe the workstations' drives, re-image and move on. This process should be crystal clear, and the team of folks responsible for delivering on this plan should be similarly clear.

I had a number of clients who, even a week after infection, were still building their recovery process while they were recovering. If you don't have an Incident Response Plan that includes widespread workstation re-imaging, it's likely time to revisit your IR plan!

7/ Security is not an IT thing
Security of the company's assets is not just an IT thing; it's a company thing. Senior management doesn't always realize this, but this week is a good time to reinforce the concept. Failing to secure your workstations, servers, network and especially your data can knock a company offline - for hours, days, or forever. Putting this on the shoulders of the IT group alone isn't fair, as the budget and staffing approvals for this responsibility are often out of their hands.

Looking back over this list, it comes down to: patch, inventory, keep tabs on vendor and industry news, segment your network, back up, and have an IR plan. No shame and no finger-pointing, but we've all known this for 10, 15, 20 years (or more) - this was stuff we did in the '80s back when I started, and have been doing since the '60s. This is not a new list - we've been at this 50 years or more; we should know this by now. But judging from what was on TV this past week, I guess we need a refresher?

Have I missed anything? Please use our comment form if we need to add to this list!

Rob VandenBrink

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.


Mon, 22 May 2017 20:53:02 -0600

A reader sent us an interesting find of a phishing site that is going after Uber credentials. Uber credentials are often stolen and resold to obtain free rides. One way the credentials are stolen is phishing. The latest example uses convincing-looking Uber receipt emails. These emails feature a prominent link; the page it leads to then requests the user's Uber credentials to log in. Overall, the site uses the expected Uber layout. But there is more: the site uses a valid SSL certificate.

Turns out that the site was hosted behind a Cloudflare proxy. Cloudflare does issue free SSL certificates, and just like most certificate authorities, it only requires proof of domain ownership to provide this service. This makes it more difficult to distinguish a fake site from the real thing.

By the time I started to investigate this, the original site had already been taken down. But there was still some evidence left showing what happened. First of all, passive DNS databases did record the IP address of the site, which pointed to Cloudflare. Secondly, a search of certificate transparency logs made it clear that a certificate for this site had been issued to Cloudflare. Like all Cloudflare certificates, it was valid for a long list of hostnames hosted by Cloudflare. Sadly, it looks like whois history sites like DomainTools have no record of the site, so we do not know exactly when it was registered - but likely just before the domain started to get used.
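Certificate-transparency searches like this can be automated; crt.sh, for example, exposes a JSON interface. The sketch below works on already-fetched records and assumes they are shaped like crt.sh's output (a list of objects with "name_value" and "not_before" fields - verify the exact shape against the live service; the sample domain is made up):

```python
# Sketch: summarise certificate-transparency records for a suspect
# domain, reporting the earliest issuance timestamp per hostname.
# This is the kind of evidence that survives after a phishing site
# is taken down.
import json

def first_issuance(ct_records: list[dict]) -> dict[str, str]:
    """Return the earliest not_before timestamp seen for each hostname."""
    earliest: dict[str, str] = {}
    for rec in ct_records:
        # name_value may hold several hostnames separated by newlines
        for name in rec["name_value"].splitlines():
            ts = rec["not_before"]
            if name not in earliest or ts < earliest[name]:
                earliest[name] = ts
    return earliest

# Hypothetical records in the assumed crt.sh JSON shape:
sample = json.loads("""[
  {"name_value": "phish.example.com", "not_before": "2017-05-20T00:00:00"},
  {"name_value": "phish.example.com\\nwww.phish.example.com",
   "not_before": "2017-05-18T00:00:00"}
]""")
```

The earliest `not_before` for a hostname gives a lower bound on when the phishing infrastructure went live, even when whois history is missing.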

Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute


Sat, 20 May 2017 06:01:52 -0600

Typosquatting has been used for years to lure victims: you receive an email or visit a URL with a domain name that looks like the official one. Typosquatting is the art of swapping, replacing, adding or omitting letters to make a domain look like the official one. The problem is that the human brain automatically corrects what you see, and you think you are visiting the right site. The oldest example of typosquatting I remember seeing was a domain registered in 1997; it has since been taken back by Microsoft. The longer your domain name, the more combinations of letters are available to generate fake domains. Sometimes it's difficult to detect rogue domains due to the font used to display them: an 'l' looks like a '1', or a '0' looks like an 'O'.
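The letter operations described above are easy to enumerate programmatically. A minimal sketch covering three of them (omission, repetition and adjacent transposition - real tools such as dnstwist cover many more classes, like bitsquatting and homoglyphs):

```python
# Illustrative generator for basic typosquatting variants of a name:
# omit each character, double each character, and swap each pair of
# adjacent characters. Feed the results to DNS lookups to see which
# variants someone has registered.
def typo_variants(name: str) -> set[str]:
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])        # omission:    "dhl" -> "dl"
        variants.add(name[:i] + name[i] + name[i:])  # repetition:  "dhl" -> "dhll"
        if i + 1 < len(name):                        # transposition: "dhl" -> "hdl"
            variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    variants.discard(name)  # a swap of two identical letters yields the original
    return variants
```

Running this over a short brand name already produces dozens of candidates, which is why defenders register (or at least monitor) the most plausible ones.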

Yesterday, I found a nice phishing email related to DHL (the worldwide courier company). The message was a classic: DHL claims that somebody passed by your home and nobody was present. But this time, it was not a simple phishing page trying to collect credentials; there was a link to a ZIP file. The archive contained a malicious HTA file that downloaded a PE file[1] and executed it. Let's put the malware aside and focus on the domain name that was used (note the double 'L').

A quick check reveals that this domain is, hopefully, owned by DHL (not DHL Express but the parent Deutsche Post DHL group):

Registry Domain ID: 123181256_DOMAIN_COM-VRSN
Updated Date: 2016-09-23T04:00:10-0700
Creation Date: 2004-06-22T00:00:00-0700
Registrar Registration Expiration Date: 2017-06-22T00:00:00-0700
Registrar: MarkMonitor, Inc.
Registrar IANA ID: 292
Registrar Abuse Contact Phone: +1.2083895740
Domain Status: clientUpdateProhibited
Domain Status: clientTransferProhibited
Domain Status: clientDeleteProhibited
Registrant Name: Deutsche Post AG
Registrant Organization: Deutsche Post AG
Registrant Street: Charles-de-Gaulle-Strasse 20
Registrant City: Bonn
Registrant Postal Code: 53113
Registrant Country: DE
Registrant Phone: +49.22818296701
Registrant Fax: +49.22818296798
Admin Name: Domain Administrator
Admin Organization: Deutsche Post AG
Admin Street: Charles-de-Gaulle-Strasse 20
Admin City: Bonn
Admin Postal Code: 53113
Admin Country: DE
Admin Phone: +49.22818296701
Admin Fax: +49.22818296798
Tech Name: Technical Administrator
Tech Organization: DHL
Tech Street: 8701 East Hartford Drive
Tech City: Scottsdale
Tech State/Province: AZ
Tech Postal Code: 85255
Tech Country: US
Tech Phone: +1.4089616666
DNSSEC: unsigned

The zone is also hosted on the DHL name servers. It's a good point that DHL registered potentially malicious domains, but... if you do this, don't just park the domain; go further and really use it! The fact that the domain has been registered by the official company doesn't prevent bad guys from abusing it to send spoofed emails.

First point: the domain does not resolve to an IP address. If you register such domains, create a website, make them point to it, and log who's visiting the fake page. You can display an awareness message or just redirect to the official site. This will also prevent your customers from landing on a potentially malicious site and improve their experience with you.

The second point is related to the MX records: no MX records were defined for the domain. As with the web traffic, build a spam trap to collect all messages sent to any address in the domain. By doing this, you will capture potentially interesting traffic and you will be able to detect if the domain is used in a campaign (e.g., you will catch all the non-delivery receipts in the spam trap).

Finally, add an SPF[2] record for the domain. This will reduce the amount of spam and phishing campaigns abusing it.
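For a defensively registered domain that never sends mail, the usual advice is a hard-fail SPF record; a DMARC reject policy strengthens it further. A sketch of the zone entries (the domain name here is a made-up example):

```text
; Hypothetical zone fragment for a parked, never-sending domain:
; "v=spf1 -all" tells receivers that NO host is authorized to send
; mail for this domain, and the DMARC policy asks them to reject
; anything that fails the check.
dhll.example.com.        IN TXT "v=spf1 -all"
_dmarc.dhll.example.com. IN TXT "v=DMARC1; p=reject"
```

With these records published, spoofed mail from the typo domain fails authentication at any receiver that checks SPF/DMARC.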

To conclude: registering domain names derived from your company's name is the first step, but don't just park them - use them for hunting and awareness!

A quick reminder about the tool dnstwist[3], which is helpful for generating and testing domain variants:

# docker run -it --rm jrottenberg/dnstwist --ssdeep --mxcheck --geoip

Run against the domain, it processed 56 domain variants and found 48 registered hits (85%), spread across bitsquatting, homoglyph, hyphenation, insertion, omission, repetition, replacement, subdomain and transposition variants.


Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant


Thu, 18 May 2017 06:55:07 -0600

The massive spread of the WannaCry ransomware last Friday was more proof that many organisations still fail to patch their systems. Everybody admits that patching is a boring task, and there are many constraints that make this process very difficult to implement and... apply! That's why any help is welcome to know what to patch and when. This is the key:

  • What to patch? What are the applications/appliances that are deployed in your infrastructure?
  • When to patch? When are new vulnerabilities discovered?

The classification of vulnerabilities is based on the CVE (Common Vulnerabilities and Exposures) standard[1]. Briefly: when a security researcher or a security firm finds a new vulnerability, a CVE number is assigned to it (CVE-YYYY-NNNNN). The CVE entry contains all the details of the vulnerability (which application/system is affected, the severity, and much more information). As an example, the vulnerability exploited by WannaCry was %%cve:2017-0143%%.

Those CVEs are stored in open databases, and many organisations use them to provide online services[2]. There are plenty of them offering almost all the same features.

Based on cve-search, I can provide details about new CVEs to my customers or any other organisation just by querying the database. Indeed, reading the daily flow of CVEs is difficult and useless for many people. They have to focus on what affects them. To help them, I'm using a quick script driven by a config file with one subscription per line:

email_contact | days_to_check | output_format | product_definition [ | product_definition ] ...

The script will parse this config file and search for new CVEs for each product definition. Results will be sent via email to the specified address.
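A parser for that one-line-per-subscription format could look like the sketch below (my own illustration, not the author's script; the field names follow the format above, and the CPE strings in the usage are made-up examples):

```python
# Sketch: parse the pipe-delimited subscription format
#   email_contact | days_to_check | output_format | product [ | product ] ...
# One subscription per line; '#' starts a comment; blank lines ignored.
from dataclasses import dataclass, field

@dataclass
class Subscription:
    email_contact: str
    days_to_check: int
    output_format: str
    products: list[str] = field(default_factory=list)

def parse_config(text: str) -> list[Subscription]:
    subs = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 4:
            raise ValueError(f"need at least 4 fields: {line!r}")
        subs.append(Subscription(fields[0], int(fields[1]), fields[2], fields[3:]))
    return subs
```

Each `Subscription` then drives one query against the cve-search database and one outgoing report email.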


Of course, the main requirement is to know what you are using in your infrastructure. The information used in the config file to describe the products is based on the CPE standard[6], which categorises applications, operating systems and hardware devices. This information can be found with Nmap. An alternative is to use the following tool on your own network (only!): cve-scan[7]. It scans hosts and searches for vulnerabilities in the cve-search database.

My script is available in my GitHub repository[5].


Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant


Wed, 17 May 2017 21:56:25 -0600

Slashdot (/.) recently published a post covering a draft NIST standard that is in review [1]. This handler thought it would cause a disturbance in the force, but so far no one is discussing it. One of the big stand-out changes is no more periodic password changes [2]. There are several others as well, and CSO Online has a fantastic summary review [3].

There are some clear differences that stand out right away in the introduction. As with most things, standards evolve as we learn.

"Electronic authentication (e-authentication) is the process of establishing confidence in user identities electronically presented to an information system. E-authentication presents a technical challenge when this process involves the remote authentication of individual people over a network. This recommendation provides technical guidelines to agencies to allow an individual person to remotely authenticate his/her identity to a Federal Information Technology (IT) system." [4]

Digital identity is the unique representation of a subject engaged in an online transaction. A digital identity is always unique in the context of a digital service, but does not necessarily need to uniquely identify the subject. In other words, accessing a digital service may not mean that the physical representation of the underlying subject is known. [2]

The new draft goes on from there to outline digital identity, attempts to clearly define access, and uses more risk-based language.

One clear change that will shock users is the removal of periodic password changes. The handlers agree that a strong review of this draft is in order for security professionals, as we can hear the users now:

"Wait, you have been forcing me to change my passwords ALL THIS time, and now you're saying it is not needed?"

Another section that should be reviewed and discussed deeply is 5.2.7, "Verifier Compromise Resistance". According to the section, there should be some mechanism to verify compromise resistance. One could interpret this as running passwords against breached-credential databases; however, this is not specifically called out [2] [3].
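Under that interpretation, a verifier would reject any candidate password that appears in a corpus of breached credentials. A minimal sketch of the idea (my own illustration, not language from the draft; a production system would use a large breach corpus and, ideally, a privacy-preserving lookup rather than this in-memory set):

```python
# Sketch: screen candidate passwords against a breached-credential
# corpus. Only hashes of the corpus are kept in memory, never the
# plaintext list itself.
import hashlib

def sha1_hex(password: str) -> str:
    """Uppercase SHA-1 hex digest, a common key format for breach corpora."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()

class BreachList:
    def __init__(self, breached_passwords: list[str]):
        # Store only digests of the known-breached passwords.
        self._hashes = {sha1_hex(p) for p in breached_passwords}

    def is_breached(self, candidate: str) -> bool:
        """True if the candidate password appears in the breach corpus."""
        return sha1_hex(candidate) in self._hashes
```

A registration or password-change flow would call `is_breached()` and force the user to pick something else on a hit.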

In conclusion, this standard is a strong deviation from previous recommendations and should be reviewed for impact to your security practice. (There is that disturbance in the force we were looking for).




