Internet Storm Center - News Feeds (RSS)

Tue, 23 May 2017 14:12:40 -0600

What did we Learn from WannaCry? - Oh Wait, We Already Knew That!

Tue, 23 May 2017 14:59:46 -0600

In the aftermath of last week's excitement over the WannaCry malware, I've had a lot of "lessons learned" meetings with clients. The results are exactly what you'd expect, but in some cases came as a surprise to the organizations we met with.
There was a whole outcry about not "victim shaming" during and after this outbreak, and I get that. But in most cases the infections were caused by process failures that the IT group didn't know they had, and these lessons-learned sessions have contributed to improving the situation at many organizations.

The short list is below - affected companies had one or more of the following issues:


1/ Patch
Plain and simple: when vendor patches come out, apply them. In a lot of organizations, Patch Tuesday means Reboot Wednesday, or worst case Reboot Saturday. If you don't have a "test the patches" process, then in a lot of cases simply waiting a day or two (to let all the early birds test them for you) will do the job. If you do have a test process, in today's world it truly needs to take 7 days or less.
There are some hosts that you won't be patching. The million-dollar MRI machine, the IV pump or the 20-ton punch press in the factory, for instance. But you know about those, and you've segmented them away (in an appropriate way) from the internet and your production assets. This outbreak wasn't about those assets; what got hammered by WannaCry was the actual workstations and servers, the hospital stations in admitting and the emergency room, the tablet that the nurse enters your stats into, and so on. Normal user workstations that either weren't patched, or were still running Windows XP.

That being said, there are always some hosts that can be patched, but can't be patched regularly. The host that's running active military operations, for instance, or the host that's running the call center for flood/rescue operations, e-health or a suicide hotline. But you can't just give up on those - in most cases there is redundancy in place so that you can update half of those clusters at a time. If there isn't, you do still need to somehow get them updated on a regular schedule.

Lesson learned? If your patch cycle is longer than a week, in today's world you need to revisit your process and somehow shorten it up. Document your exceptions, put something in place to mitigate that risk (network segmentation is a common one), and get senior management to sign off on the risk and the mitigation.

2/ Unknown Assets are waiting to Ambush You

A factor in this last attack was hosts that weren't in IT's inventory. In my group of clients, what this meant was hosts controlling billboards or TVs running ads in customer service areas (the menu board at the coffee shop, the screen telling you about retirement funds while you wait in line at the bank, and so on). If this had been a Linux worm, we'd be talking about projectors, TVs and access points today.

One and all, I pointed those folks back to the Critical Controls list (https://www.cisecurity.org/controls/). In plain English, the first item is "know what's on your network" and the second item is "know what is running on what's on your network".

If you don't have a complete picture of these two, you will always be exposed to whatever new malware (or old malware) tests the locks at your organization.
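
As a starting point for controls 1 and 2, a minimal sketch like the one below can help: it runs an Nmap ping sweep and flags anything that answers but is not in your asset list. It assumes nmap is installed and that assets.csv is a hypothetical one-IP-per-line export from your inventory; adjust both to your environment.

# Minimal sketch: compare a ping sweep of a subnet against the asset inventory.
import csv
import subprocess

SUBNET = "192.168.1.0/24"        # illustrative value
INVENTORY_FILE = "assets.csv"    # hypothetical export: one known IP per line

with open(INVENTORY_FILE, newline="") as f:
    known = {row[0].strip() for row in csv.reader(f) if row}

# -sn = host discovery only, -oG - = greppable output on stdout
scan = subprocess.run(["nmap", "-sn", "-oG", "-", SUBNET],
                      capture_output=True, text=True, check=True)

live = set()
for line in scan.stdout.splitlines():
    if line.startswith("Host:") and "Status: Up" in line:
        live.add(line.split()[1])

for ip in sorted(live - known):
    print(f"Live host not in inventory: {ip}")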

3/ Watch the News.
.... And I don't mean the news on TV. Your vendors (in this case Microsoft) have news feeds, and there are a ton of security-related news sites, podcasts and feeds (this site is one of those; our StormCast podcast is another). Folks that watch the news knew about this issue starting back in 2015, when Microsoft started advising us to disable SMB1, then again last year (2016) when Microsoft posted their "We're pleading with you, PLEASE disable SMB1" post. We knew specifically about the vulnerabilities used by WannaCry in January when the Shadow Brokers dump happened, we knew again when the patches were released in March, and we knew (again, much more specifically) when those tools went live in April. In short, we were TOLD that this was coming; by the time it reached the TV media, it was very old news.
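
Watching those feeds is easy to automate. Below is a minimal sketch, assuming the feedparser package is installed (pip install feedparser); the feed URLs are examples, so swap in your own vendors' feeds.

# Minimal sketch: print recent headlines from a few security news feeds.
import feedparser

FEEDS = [
    "https://isc.sans.edu/rssfeed.xml",                 # SANS ISC diaries
    "https://blogs.technet.microsoft.com/msrc/feed/",   # example vendor feed
]

for url in FEEDS:
    feed = feedparser.parse(url)
    print(f"== {feed.feed.get('title', url)} ==")
    for entry in feed.entries[:5]:
        print(f"- {entry.get('published', 'n/a')}: {entry.title}")
        print(f"  {entry.link}")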

4/ Segment your network, use host firewalls
In most networks, workstation A does not need SMB access to workstation B. Neither of them needs SMB access to the mail server or the SQL host. They do need that access to the SMB-based shares on the file and print servers, though. And if you must have SMB version 1 at all, you have some other significant issues to look at.
Really, what this boils down to is the Critical Controls again. Know what services are needed and by whom, and permit that. Set up deny rules on the network or on host firewalls for the things that people don't need - or, best case, set up denies for everything else. I do realize that this is not 100% practical. For instance, denying SMB between workstations is a tough one to implement, since most admin tools need that same protocol. Many organizations only allow SMB to workstations from server or management subnets, and that seems to work really nicely for them. It's tough to get sign-off on that sort of restriction; management will often see this as a drastic measure.

Disabling SMB1 should have happened months ago, if not year(s) ago.
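
One way to check where you actually stand is the minimal sketch below: run from an ordinary workstation, it reports which peers still accept connections on TCP/445 (SMB). The subnet and timeout are illustrative values.

# Minimal sketch: which hosts answer on tcp/445 from this workstation's view?
import socket

PEERS = ["192.168.1.{}".format(i) for i in range(1, 255)]  # illustrative subnet
TIMEOUT = 0.5  # seconds

for host in PEERS:
    try:
        with socket.create_connection((host, 445), timeout=TIMEOUT):
            print(f"{host} accepts SMB connections on tcp/445")
    except OSError:
        pass  # filtered, closed or host down, which is usually what you want to see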

5/ Have Backups
Many clients found out *after* they were infected by WannaCry that their users were storing data locally. Don't be that company - either enforce central data storage, or make sure your users' local data is backed up somehow. Getting users to sign off that their local data is ephemeral only - that it's not guaranteed to be there after a security event - is good advice, but after said security event IT generally finds out that even with that signoff, everyone in the organization still holds them responsible.

All too often, backups fall on the shoulders of the most junior staff in IT. Sometimes that works out really well, but all too often it means that backups aren't tested, restores fail (we call that "backing up air"), or critical data is missed.

Best just to back up your data (all your data) and be done with it.
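
And test the restores. The minimal sketch below, assuming /data/finance is a sampled source directory and /restore-test/finance is where a test restore landed (both paths are hypothetical), hashes both trees and reports anything missing or different, so you catch "backing up air" before you need the backup.

# Minimal sketch: verify a test restore against the source data.
import hashlib
from pathlib import Path

SOURCE = Path("/data/finance")            # hypothetical sampled live data
RESTORED = Path("/restore-test/finance")  # hypothetical test restore target

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

problems = 0
for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    restored = RESTORED / src.relative_to(SOURCE)
    if not restored.is_file():
        print(f"MISSING from restore: {src}")
        problems += 1
    elif sha256(src) != sha256(restored):
        print(f"MISMATCH after restore: {src}")
        problems += 1
print(f"{problems} problem(s) found")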

6/ Have a Plan

You can't plan for everything, but everyone should have had a plan for the aftermath of WannaCry. The remediation for this malware was the classic "nuke from orbit" - wipe the workstation's drives, re-image and move on. This process should be crystal-clear, and the team of folks responsible for delivering on this plan should be similarly clear.

I had a number of clients who, even a week after infection, were still building their recovery process while they were recovering. If you don't have an Incident Response plan that includes widespread workstation re-imaging, it's likely time to revisit your IR plan!

7/ Security is not an IT thing
Security of the assets of the company is not just an IT thing; it's a company thing. Senior management doesn't always realize this, but this week is a good time to reinforce the concept. Failing to secure your workstations, servers, network and especially your data can knock a company offline - for hours, days, or forever. Putting this on the shoulders of the IT group alone isn't fair, as the budget and staffing approvals for this responsibility are often out of their hands.

Looking back over this list, it comes down to: patch, inventory, keep tabs on vendor and industry news, segment your network, back up, and have an IR plan. No shame and no finger-pointing, but we've all known this for 10-15-20 years (or more) - this was stuff we did in the '80s back when I started, and we've been doing it since the '60s. This is not a new list - we've been at this 50 years or more; we should know this by now. But from what was on TV this past week, I guess we need a refresher?

Have I missed anything? Please use our comment form if we need to add to this list!

===============
Rob VandenBrink
Compugen

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Mon, 22 May 2017 20:53:02 -0600

A reader sent us an interesting find: a phishing site that is going after Uber credentials. Uber credentials are often stolen and resold to obtain free rides. One way the credentials are stolen is phishing. The latest example uses convincing-looking Uber receipt emails. These emails feature a prominent link to uberdisputes.com.

Uberdisputes.com then requests the user's Uber credentials to log in. Overall, the site uses the expected Uber layout. But there is more: the site used a valid SSL certificate.

Turns out that the site was hosted behind a Cloudflare proxy. Cloudflare does issue free SSL certificates, and just like most certificate authorities, it only requires proof of domain ownership to obtain this service. This does make it more difficult to distinguish a fake site from the real thing.

Now, by the time I started to investigate this, the original site had already been taken down. But there was still some evidence left to see what happened. First of all, passive DNS databases did record the IP address of the site, which pointed to Cloudflare. Secondly, when searching certificate transparency logs, it was clear that a certificate for this site had been issued to Cloudflare. Like all Cloudflare certificates, the certificate was valid for a long list of hostnames hosted by Cloudflare. Sadly, it looks like whois history sites like DomainTools have no record of the site, so we do not know exactly when it was registered - but likely just before the domain started to get used.
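
Certificate transparency logs are easy to query yourself. A minimal sketch using crt.sh's JSON interface follows; the field names reflect crt.sh's output at the time of writing and may change.

# Minimal sketch: list certificates logged for a domain via crt.sh.
import requests

DOMAIN = "uberdisputes.com"
resp = requests.get("https://crt.sh/", params={"q": DOMAIN, "output": "json"}, timeout=30)
resp.raise_for_status()
for cert in resp.json():
    print(cert.get("not_before"), "|", cert.get("issuer_name"), "|", cert.get("name_value"))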

---
Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Sat, 20 May 2017 06:01:52 -0600

Typosquatting has been used for years to lure victims: you receive an email or visit a URL with a domain name that looks like the official one. Typosquatting is the art of swapping, replacing, adding or omitting letters to make a domain look like the official one. The problem is that the human brain automatically corrects what you see, and you think you are visiting the right site. The oldest example of typosquatting that I remember seeing was mircosoft.com. Be honest: at first glance, you read "microsoft.com", right? This domain was registered in 1997, but it has since been taken back by Microsoft. The longer your domain name, the more combinations of letters are available to generate fake domains. Sometimes it is also difficult to detect rogue domains due to the font used to display them: an "l" looks like a "1", or a "0" looks like an "O".
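
To get a feel for how few edits are needed, here is a minimal sketch that generates omission, repetition, transposition and simple replacement variants of a name, in the same spirit as the dnstwist run shown further below.

# Minimal sketch: generate typosquatting candidates for a short domain name.
import string

def variants(name: str, tld: str = ".com"):
    out = set()
    for i in range(len(name)):
        out.add(name[:i] + name[i+1:])           # omission      (dhl -> dl)
        out.add(name[:i] + name[i] + name[i:])   # repetition    (dhl -> dhll)
        if i < len(name) - 1:                    # transposition (dhl -> dlh)
            out.add(name[:i] + name[i+1] + name[i] + name[i+2:])
        for c in string.ascii_lowercase:         # replacement   (dhl -> dhk)
            out.add(name[:i] + c + name[i+1:])
    out.discard(name)                            # drop the original name itself
    return sorted(d + tld for d in out)

print(variants("dhl"))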

Yesterday, I found a nice phishing email related to DHL (the worldwide courier company). The message was classic: DHL claims that somebody passed by your home and nobody was present. But this time it was not a simple phishing page trying to collect credentials; there was a link to a ZIP file. The archive contained a malicious HTA file that downloaded a PE file[1] and executed it. Let's put the malware aside and focus on the domain name that was used: dhll.com (with a double "l").

A quick check reveals that this domain is, fortunately, owned by DHL (not DHL Express but the parent company, Deutsche Post DHL):

Domain Name: dhll.com
Registry Domain ID: 123181256_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.markmonitor.com
Registrar URL: http://www.markmonitor.com
Updated Date: 2016-09-23T04:00:10-0700
Creation Date: 2004-06-22T00:00:00-0700
Registrar Registration Expiration Date: 2017-06-22T00:00:00-0700
Registrar: MarkMonitor, Inc.
Registrar IANA ID: 292
Registrar Abuse Contact Email: abusecomplaints@markmonitor.com
Registrar Abuse Contact Phone: +1.2083895740
Domain Status: clientUpdateProhibited (https://www.icann.org/epp#clientUpdateProhibited)
Domain Status: clientTransferProhibited (https://www.icann.org/epp#clientTransferProhibited)
Domain Status: clientDeleteProhibited (https://www.icann.org/epp#clientDeleteProhibited)
Registrant Name: Deutsche Post AG
Registrant Organization: Deutsche Post AG
Registrant Street: Charles-de-Gaulle-Strasse 20
Registrant City: Bonn
Registrant Postal Code: 53113
Registrant Country: DE
Registrant Phone: +49.22818296701
Registrant Fax: +49.22818296798
Registrant Email: domains@deutschepost.de
Admin Name: Domain Administrator
Admin Organization: Deutsche Post AG
Admin Street: Charles-de-Gaulle-Strasse 20
Admin City: Bonn
Admin Postal Code: 53113
Admin Country: DE
Admin Phone: +49.22818296701
Admin Fax: +49.22818296798
Admin Email: admincontact.domain@deutschepost.de
Tech Name: Technical Administrator
Tech Organization: DHL
Tech Street: 8701 East Hartford Drive
Tech City: Scottsdale
Tech State/Province: AZ
Tech Postal Code: 85255
Tech Country: US
Tech Phone: +1.4089616666
Tech Email: netmaster@dhl.com
Name Server: ns4.dhl.com
Name Server: ns6.dhl.com
DNSSEC: unsigned

The zone dhll.com is also hosted on the DHL name servers. It's a good thing that DHL registered potentially malicious domains, but... if you do this, don't just park the domain - go further and really use it! The fact that the domain has been registered by the official company does not mean that bad guys cannot abuse it to send spoofed emails.

First point: dhll.com and www.dhll.com do not resolve to an IP address. If you register such domains, create a website, make them point to it, and log who's visiting the fake page. You can display an awareness message or just redirect to the official site. This will also prevent your customers from landing on a potentially malicious site and improve their experience with you.
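
A minimal sketch of that idea: answer web requests for the look-alike domain, log who shows up, and send them on to the official site. The port and redirect target are illustrative.

# Minimal sketch: log visitors to a parked look-alike domain, then redirect them.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AwarenessRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log the visitor, then send them to the real site
        print(f"{self.client_address[0]} asked for {self.path} "
              f"(Host: {self.headers.get('Host')}, UA: {self.headers.get('User-Agent')})")
        self.send_response(301)
        self.send_header("Location", "https://www.dhl.com/")
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), AwarenessRedirect).serve_forever()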

The second point is related to the MX records: no MX records were defined for the dhll.com domain. As with the web traffic, build a spam trap to collect all messages that are sent to *@dhll.com. By doing this, you will capture potentially interesting traffic and you will be able to detect if the domain is used in a campaign (e.g. you will catch all the non-delivery receipts in the spam trap).

Finally, add an SPF[2] record for the domain. This will reduce the amount of spam and phishing campaigns abusing it.
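
For a look-alike domain that is parked and never sends mail, the record can be as simple as the following (zone-file syntax, shown as an illustration); the "-all" tells receiving mail servers that no host is authorized to send mail for this domain:

dhll.com.    IN    TXT    "v=spf1 -all"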

To conclude, registering domain names derived from your company's name is only the first step: don't just park them - use them for hunting and awareness!

A quick reminder about the tool dnstwist[3], which is helpful for generating and checking this kind of domain permutation:

# docker run -it --rm jrottenberg/dnstwist --ssdeep --mxcheck --geoip dhl.com
dnstwist {1.01}
Fetching content from: http://dhl.com ... 200 OK (396.3 Kbytes)
Processing 56 domain variants ................ 48 hits (85%)
Original*     dhl.com     199.40.253.33/United States NS:ns4.dhl.com MX:mx1.dhl.iphmx.com SSDEEP:100%
Bitsquatting  ehl.com     45.33.14.247 NS:pdns03.domaincontrol.com MX:smtp.secureserver.net
Bitsquatting  fhl.com     -
Bitsquatting  lhl.com     -
Bitsquatting  thl.com     50.57.5.162/United States NS:dns1.name-services.com MX:us-smtp-inbound-1.mimecast.com
Bitsquatting  dil.com     72.52.4.119/United States NS:ns1.sedoparking.com MX:localhost
Bitsquatting  djl.com     117.18.11.145/Hong Kong NS:ns1.monikerdns.net
Bitsquatting  dll.com     68.178.254.85/United States NS:ns43.domaincontrol.com MX:smtp.secureserver.net
Bitsquatting  dxl.com     69.74.234.98/United States NS:ns59.worldnic.com SPYING-MX:dxl-com.mail.protection.outlook.com
Bitsquatting  dhm.com     192.241.215.84/United States NS:ns19.worldnic.com MX:dhm.com
Bitsquatting  dhn.com     62.129.139.241/Netherlands NS:pdns07.domaincontrol.com MX:smtp.secureserver.net
Bitsquatting  dhh.com     103.241.230.134/India NS:dns1.iidns.com
Bitsquatting  dhd.com     NS:ns-west.cerf.net MX:dhd-com.mail.protection.outlook.com
Homoglyph     bhl.com     206.188.192.219/United States NS:ns79.worldnic.com SPYING-MX:bhl-com.mail.protection.outlook.com
Homoglyph     dhi.com     199.36.188.56/United States NS:ns10.dnsmadeeasy.com
Homoglyph     clhl.com    209.61.212.154/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Homoglyph     dlhl.com    209.61.212.154/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Homoglyph     dihl.com    209.61.212.154/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Homoglyph     dh1.com     208.91.197.27/Virgin Islands NS:ns43.worldnic.com SPYING-MX:p.webcom.ctmail.com
Hyphenation   d-hl.com    104.24.124.134/United States 2400:cb00:2048:1::6818:7c86 NS:fiona.ns.cloudflare.com MX:mx1.emailowl.com
Hyphenation   dh-l.com    72.52.4.119/United States NS:ns1.sedoparking.com MX:localhost
Insertion     duhl.com    209.61.212.154/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Insertion     dhul.com    82.194.88.4/Spain NS:ns1.dominioabsoluto.com
Insertion     djhl.com    47.89.24.50/Canada NS:f1g1ns1.dnspod.net
Insertion     dhjl.com    -
Insertion     dnhl.com    209.61.212.154/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Insertion     dhnl.com    209.61.212.154/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Insertion     dbhl.com    209.61.212.154/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Insertion     dhbl.com    209.61.212.154/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Insertion     dghl.com    209.61.212.154/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Insertion     dhgl.com    209.61.212.161/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Insertion     dyhl.com    NS:dns17.hichina.com MX:mxbiz1.qq.com
Insertion     dhyl.com    -
Omission      dl.com      104.247.212.218 NS:ns1.gridhost.com SPYING-MX:mail.b-io.co
Omission      dh.com      54.204.28.210/United States NS:a5-67.akam.net SPYING-MX:mx1.dhltd.iphmx.com
Omission      hl.com      107.154.105.117/United States NS:ns57.domaincontrol.com MX:mail0.hl.com
Repetition    ddhl.com    180.149.253.156/Hong Kong NS:ns11.domaincontrol.com SPYING-MX:ddhl-com.mail.protection.outlook.com
Repetition    dhll.com    -
Repetition    dhhl.com    209.61.212.154/United States NS:ns1.dnsnameservice.com MX:smtp.getontheweb.com
Replacement   rhl.com     107.161.31.165/United States NS:ns1.hungerhost.com MX:mx.spamexperts.com
Replacement   chl.com     216.222.148.100 NS:nameserver.ttec.com MX:smtp2.mx.ttec.com
Replacement   xhl.com     69.172.201.153/United States NS:ns1.uniregistrymarket.link
Replacement   shl.com     69.171.27.23/United States NS:eu-sdns-01.shl.com SPYING-MX:mxa-0016ba01.gslb.pphosted.com
Replacement   dul.com     62.129.139.241/Netherlands NS:pdns01.domaincontrol.com MX:smtp.secureserver.net
Replacement   dnl.com     -
Replacement   dbl.com     198.173.111.6/United States NS:ns53.worldnic.com SPYING-MX:p.webcom.ctmail.com
Replacement   dgl.com     216.107.145.5 NS:ns62.downtownhost.com MX:dgl.com
Replacement   dyl.com     99.198.109.164/United States NS:ns-1768.awsdns-29.co.uk MX:mail.dyl.com
Replacement   dhk.com     98.191.212.87/United States NS:ns1.dhk.com MX:dhk.com.us.emailservice.io
Replacement   dho.com     75.126.101.248/United States NS:ns1bqx.name.com
Replacement   dhp.com     199.4.150.5/United States NS:dhp.com MX:mailhub.dhp.com
Subdomain     d.hl.com    -
Subdomain     dh.l.com    -
Transposition hdl.com     216.51.232.170/United States NS:ns1.systemdns.com MX:aspmx.l.google.com
Transposition dlh.com     212.130.57.148/Denmark NS:ns1.ascio.net SPYING-MX:mail.dlh.com
Various       wwwdhl.com  199.41.238.47/United States NS:ns.deutschepost.de

[1]https://www.virustotal.com/en/file/f438ba968d6f086183f3ca86c3c1330b4c933d97134cb53996eb41e4eceecf53/analysis/
[2]https://support.google.com/a/answer/33786?hl=en
[3]https://github.com/elceef/dnstwist

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Thu, 18 May 2017 06:55:07 -0600

The massive spread of the WannaCry ransomware last Friday was more proof that many organisations still fail to patch their systems. Everybody admits that patching is a boring task, and there are many constraints that make this process very difficult to implement and... apply! That's why any help is welcome to know what to patch and when. This is the key:

  • What to patch? What are the applications/appliances that are deployed in your infrastructure?
  • When to patch? When are new vulnerabilities discovered?

The classification of vulnerabilities is based on the CVE (Common Vulnerabilities and Exposures) standard maintained by mitre.org[1]. Briefly, when a security researcher or a security firm finds a new vulnerability, a CVE number is assigned to it (CVE-YYYY-NNNNN). The CVE entry contains all the details of the vulnerability (which application/system is affected, the severity, and much more). As an example, the vulnerability exploited by WannaCry was CVE-2017-0143.

Those CVEs are stored in open databases, and many organisations use them to provide online services like cvedetails.com[2]. There are plenty of them offering almost all the same features, but they don't always fit your specific needs. An alternative is to run your own copy of the database with cve-search[3], for which a Docker image is also available[4].

Based on cve-search, I can provide details about new CVEs to my customers or any other organisations just by querying the database. Indeed, reading the daily flow of CVEs is difficult and useless for many people. They have to focus on what affects them. To help them, I'm using a quick and dirty script driven by a simple configuration file with the following format:

email_contact | days_to_check | output_format | product_definition [ | product_definition ] ...

The script will parse this config file and search for new CVEs for each product definition. The results will be sent via email to the specified address.
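
As a rough illustration of the idea (a sketch, not the actual script from the repository[5]): parse a config line in the format above and ask a cve-search instance for recent CVEs per product. The API endpoint and JSON field names are assumptions based on the public cve.circl.lu instance and may need adjusting for your own deployment; output formatting and mail delivery are left out.

# Sketch only: query a cve-search API for recent CVEs per configured product.
from datetime import datetime, timedelta
import requests

API = "https://cve.circl.lu/api/search"   # assumption: public cve-search instance

def check_line(line: str):
    email, days, fmt, *products = [f.strip() for f in line.split("|")]
    cutoff = datetime.utcnow() - timedelta(days=int(days))
    for product in products:
        vendor, name = product.split(":", 1)            # assumed vendor:product syntax
        cves = requests.get(f"{API}/{vendor}/{name}", timeout=30).json()
        if isinstance(cves, dict):                      # some versions wrap the list
            cves = cves.get("results", [])
        for cve in cves:
            published = str(cve.get("Published", ""))[:10]
            if published and datetime.strptime(published, "%Y-%m-%d") >= cutoff:
                print(f"[{email}] {cve.get('id')}: {cve.get('summary', '')[:80]}")

check_line("soc@example.com | 7 | text | microsoft:windows_7")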

Of course, the main requirement is to know what you are using in your infrastructure. The information used in the config file to describe the products is based on the CPE standard[6], which categorises applications, operating systems and hardware devices. This information can be collected with Nmap. An alternative is to use the following tool on your own network (only!): cve-scan[7]. It scans hosts and searches for vulnerabilities in the cve-search database.
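
For example, after a service-detection scan such as "nmap -sV -oX scan.xml <targets>", the CPE identifiers can be pulled out of the XML with a few lines (a minimal sketch; the file name is illustrative):

# Minimal sketch: extract CPE identifiers from an Nmap XML report.
import xml.etree.ElementTree as ET

tree = ET.parse("scan.xml")
for host in tree.findall("host"):
    addr = host.find("address").get("addr")
    for cpe in host.iter("cpe"):
        print(addr, cpe.text)   # e.g. cpe:/a:openbsd:openssh:7.2p2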

My script is available on my GitHub repository[5].

[1]https://cve.mitre.org
[2]http://www.cvedetails.com/
[3]https://github.com/cve-search/cve-search
[4]https://hub.docker.com/r/rootshell/cvesearch/
[5]https://github.com/xme/toolbox
[6]http://cpe.mitre.org/
[7]https://github.com/NorthernSec/cve-scan

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Wed, 17 May 2017 21:56:25 -0600

Slashdot recently published a post covering a draft NIST standard that is in review [1]. This handler thought it would cause a disturbance in the force, but so far no one is discussing it. One of the big stand-out changes is no more periodic password changes [2]. There are several others as well, and CSO Online has a fantastic summary review [3].

There are some clear differences that stand out right away in the introduction. As with most things, standards evolve as we learn.

Electronic authentication (e-authentication) is the process of establishing confidence in user identities electronically presented to an information system. E-authentication presents a technical challenge when this process involves the remote authentication of individual people over a network. This recommendation provides technical guidelines to agencies to allow an individual person to remotely authenticate his/her identity to a Federal Information Technology (IT) system. [4]

Digital identity is the unique representation of a subject engaged in an online transaction. A digital identity is always unique in the context of a digital service, but does not necessarily need to uniquely identify the subject. In other words, accessing a digital service may not mean that the physical representation of the underlying subject is known. [2]

The new draft goes on from there to outline digital identity, attempts to clearly define access, and uses more risk-based language.

One clear change that will shock users is the removal of periodic password changes. The handlers agree that a strong review of this draft is in order for security professionals, as we can hear the users now:

Wait, you have been forcing me to change my passwords ALL THIS time, and now you're saying it is not needed?

Another section that should be reviewed and discussed deeply is 5.2.7, Verifier Compromise Resistance. According to that section, there should be some mechanism to verify compromise resistance. One could interpret this as running passwords against breached-credential databases; however, this is not specifically called out [2][3].
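
One possible reading of that requirement is to screen candidate passwords against a breached-credential corpus. The minimal sketch below uses the Pwned Passwords k-anonymity API, which only ever sees the first five hex characters of the password's SHA-1 hash; the service and response format are as documented by haveibeenpwned.com at the time of writing.

# Minimal sketch: how often does this password appear in known breaches?
import hashlib
import requests

def breach_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(breach_count("P@ssw0rd"))  # a password like this returns a large count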

In conclusion, this standard is a strong deviation from previous recommendations and should be reviewed for its impact on your security practice. (There is that disturbance in the force we were looking for.)

[1] https://it.slashdot.org/story/17/05/09/1542236/nists-draft-to-remove-periodic-password-change-requirements-gets-vendors-approval

[2] https://pages.nist.gov/800-63-3/sp800-63b.html

[3] http://www.csoonline.com/article/3195181/data-protection/vendors-approve-of-nist-password-draft.html

[4] http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-63-2.pdf

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
