Stopped by AT&T ThreatTraq and talked about, you know. Infosec. Good times! Here’s video and a transcript.
You set the rules and you get to CHEAT (with Dan Kaminsky)
AT&T ThreatTraq #143
Brian Rexroad: Hello. Welcome to AT&T ThreatTraq for May 12th, 2015. This program provides network security highlights, discussion, and countermeasures for cyber threats. Today, we’re joined by Dan Kaminsky. Dan, welcome. You know, you’re one that practically needs no introduction, but I understand you’re Chief Scientist at White Ops.
Dan Kaminsky: Mm-hmm.
Brian: And can you tell us a little more about White Ops, and what you do?
Dan: We make sure that people on the Internet are actually people, because sometimes they’re not. Sometimes they’re just machines that have been programmed to run around. We always wondered why all these machines were getting broken into, like how interesting can Grandma’s email be? Well, it turns out you hack a million grandmas, you click a billion ads, you make a million dollars. So we’re in the business of cleaning up the advertising ecosystem, and dealing with other situations where these automated machines, known as bots, run around and do bad things.
Brian: Right. You know, we’ve talked about click fraud a number of times on this program. And I guess, so that’s really kind of the underpinnings of the work that you’re doing.
Dan: When you rob a bank, the man gets pretty angry. When you rob advertisers, they’re like oh, the numbers are up.
Brian: Right. So it’s a little bit strange the way that – you know, I remember, and I don’t even know what the advertisement was about. But this guy, he’s out on the market. He’s buying clicks. Can I get some clicks? I just need to get through the next quarter.
Dan: I know, right? There’s this great thing by Adobe, just need a few more, just need a few more.
Brian: Right, so that whole notion, it’s true. There’s like perverse motivation that’s built in there, if there isn’t some sort of enforcement mechanism. And that’s what you’re out to do.
Dan: Yeah, we’ve really been changing the market. We ran the largest ad fraud study that has ever been done. It was called the Bot Baseline. It’s at whiteops.com/botfraud. And we really found there’s going to be about five or six billion dollars’ worth of this fraud this year. I mean, this is real money. A lot of money is going, not to people who make actual content that people like, but instead just going to outright fraudsters, who just steal. And we’re fixing that.
Brian: So, welcome. We’re glad to have you here, and we’ll be able to talk about some other discussions here today, too.
Dan: All right.
Brian: So, let’s go on. Matt, Matt Keyser’s here. Welcome, Matt.
Matt Keyser: How’s it going?
Brian: And we have online, Jim Clausing. Welcome, Jim.
Jim Clausing: Hey Brian, hey guys.
Brian: I hear it’s been a little hot in Ohio. Did you say a little hot?
Jim: Yeah, it was in the nineties a couple of days. And now, it’s not going to be quite so hot for a couple of days, yeah.
Brian: Right, okay, well good. I’m Brian Rexroad and welcome. And so, what we’ll do first here, Dan, is talk a little bit about what you think some of the security trends are that are coming up.
Dan: All right. The big trend that I think is going to start up is, God, you can just lose all your data really quickly. We keep trying to make it so that no one ever gets anything, but if they get in, it’s the end of the world. And the first big trend that I think is going to start is that’s going to slow down a bit. We are going to figure out architectures that lose data or money at a “not all of it at once” rate.
And I think we’re going to see that become a real trend in information security.
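The “not all of it at once” architecture can be sketched as a simple egress rate limiter. This is an illustrative Python sketch, not a reference to any specific product: reads over budget get denied, so a stolen credential drains data at a capped, alarm-friendly rate instead of all at once.

```python
import time
from collections import deque

class EgressLimiter:
    """Cap how many records can leave a system per time window.

    A compromised credential can then drain data only at the capped
    rate, buying time for detection and response.
    """
    def __init__(self, max_records, window_seconds):
        self.max_records = max_records
        self.window = window_seconds
        self.events = deque()  # timestamps of recent reads

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.max_records:
            return False  # over budget: deny (and ideally alert)
        self.events.append(now)
        return True

limiter = EgressLimiter(max_records=3, window_seconds=60)
results = [limiter.allow(now=t) for t in [0, 1, 2, 3]]
# First three reads pass; the fourth in the same window is denied.
```

The numbers are arbitrary; the point is that the denial itself is a high-quality security signal.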
Brian: So, more and more, I think you’re absolutely right. And you mentioned sort of like ATMs. You’re limited in what you can take from an ATM. But that next step is, well, what if you can go to a hundred ATMs at the same time, and before those actual transactions go through? So, I think it’s going to the next step.
Dan: Which of course was the thing that happened.
Dan: You know, we had a major ATM thing, where I think a couple hundred ATMs were hit within seconds of each other. And the word that they shouldn’t all be dispensing ended up not getting distributed fast enough. So it’s like a $17 million loss? And attackers are quite clever. You have to be able to adapt.
Brian: Yep, be able to adapt. Agility and security is one of the main themes to that capability.
Dan: Yeah, this is not a thing where you’re one and done. No, you’ve got a cat and mouse thing. When we’re out dealing with these ad fraud guys, it is constantly cat and mouse.
Brian: Right. I don’t think it was related to the same subject, but we were chatting a little bit earlier. And you said something about it took hours to find the problem, and then six months to solve it.
Dan: To the point where at the end of it, I’m like man, I didn’t do much at all, as the attacker. That was like a distant memory.
Brian: Yeah, so that’s one of the challenges that we deal with, is that the attackers really kind of tend to have the advantage, and it takes a lot more effort to try to solve it without having detrimental impact in the long run.
Dan: It still has to be performant. It still has to be reliable. It still has to be usable. It still has to be maintainable. It still has to be debuggable. All of these other engineering constraints don’t exist on the offense side, and they take a tremendous amount of work on the defense side.
Brian: Right, right.
Dan: Just a hard problem, but that’s what we signed up for here, so let’s play.
Brian: Yep, good. So what other kinds of trends do you expect?
Dan: Well, we’ve never really been taking all that test code, all those test processes, and merging it in production. Well, in a world of continuous deployment, in a world of repeatedly updating and modifying, and fixing and developing software, test and monitoring is going live as well. And that information stream is turning out to be tremendously useful for security work. There are things that only happen when you’re under attack. There are code paths that are only exposed when there are vulnerabilities.
Not that can be found during test or in isolation, but when actual real world production data starts flowing through. It’s like when you run water through a pipe, you see where the water starts leaking. So, companies like Prevoty and Signal Sciences, and these guys are actually really starting to see, hey, there’s a lot of data to be extracted from our architecture. Let’s go ahead and use it to basically build more secure operational systems.
Brian: So, are you referring to threat analytics? Or is this like really kind of a nuance of that?
Dan: Just, I think that the actual systems that we run. The code that we deploy to make our companies go, is going to have a lot more of its test and monitoring internals exposed to the Ops guys. And that exposure is going to have a real security tint to it, slant to it.
Brian: So it’s really beyond security. It’s really just a broader set of instrumentation on the systems, so that we have some visibility into what’s going on with them.
Dan: Security is part of operations.
Brian: It is.
Dan: And one of the major customer needs now is it needs to not leak data.
Dan: So, but what I see is we will start getting better signals that we’re going to leak data, that we are leaking data, and especially that we did leak data. Getting monitoring and shrinking that time between compromise and loss is what’s going to happen.
Brian: Why not kind of tie those things together? Now, do you have any thoughts on sort of the tradeoff? One of the cardinal sins that I’ve heard of in the past with software is you leave debugging mode on. And so, there are all kinds of indicators that are in there, and perhaps back doors that are built in. You mentioned the number of lines of code. Adding more lines of code potentially adds more vulnerability. Any thoughts on that?
Dan: Nothing comes for free. It is absolutely the case that as we build out our debugging infrastructure, bugs in the debugging infrastructure can go ahead and hit us. It’s part of the tradeoff. But you know, every copy of Windows in the world sends data back to Microsoft. And you could make the argument, oh my God, look at all this data that Microsoft is taking. What if that data has exploits? You know what? That’s the tradeoff. There might be issues in the data feeds that come back. But in return, they get to know what bugs are in the world and fix them. And it’s really part of how you create an ecosystem that is, if not self-healing, repairable.
Brian: Right, right.
Jim: I mean, we get buried in data as it is sometimes. And if we want to instrument our code better, we’re going to be creating more data.
Dan: Let’s find some guns and see if there’s some smoke coming out of them. Better instrumentation can really go ahead and take right now what’s an ugly problem, and just give you the clean answers. Zane Lackey, who now runs Signal Sciences, and used to run security over at Etsy has this great set of slides. It’s called Attack-Driven Defense, one of my favorite decks of all time. And he’s basically showing, look, here are errors that only happen in the time period after an attacker has found a vulnerability, but before they’ve successfully exploited it.
Dan: These bugs, they only happen when the SQL engine is breaking. If this bug happens, file the bug. They got it to the point where they had Splunk auto-filing critical bugs, and it was always accurate. That’s where the world is going towards.
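The pattern Dan describes — errors that only occur while someone is probing for a vulnerability, wired straight into bug filing — can be sketched like this. The signatures and log lines here are hypothetical; in the Etsy case he cites, Splunk did the matching and auto-filing.

```python
import re

# Hypothetical signatures: errors normal traffic never produces, but
# that show up when someone is probing for SQL injection.
ATTACK_SIGNATURES = [
    re.compile(r"SQL syntax.*near"),
    re.compile(r"unterminated quoted string"),
    re.compile(r"ORA-01756"),
]

def triage(log_lines):
    """Return the log lines that warrant auto-filing a critical bug."""
    hits = []
    for line in log_lines:
        if any(sig.search(line) for sig in ATTACK_SIGNATURES):
            hits.append(line)
    return hits

logs = [
    "GET /item?id=42 200",
    "ERROR: unterminated quoted string at or near \"'\"",
    "GET /item?id=43 200",
]
flagged = triage(logs)
# Only the SQL-error line is flagged; a real pipeline would open a
# ticket for it automatically.
```

The value is in the precision: these signatures fire between vulnerability discovery and successful exploitation, which is exactly the window defenders want.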
Brian: Right, right. You know, and see if I’m interpreting this correctly. I think one of the things that we’ve been finding is that the more and more you can direct your analysis toward anticipated actual attacks, and even understanding the motivation or the types of things that attackers are doing, you know, trending in the environment, will help you to understand what data is really valuable and what’s perhaps just a bit junk.
Dan: You need to realize you’re playing a game. This is player versus player programming. But guess what? It’s your network. You get to set the rules of the game, and you get to cheat. You get to say hey, you guys are playing on my battleground. You’re in my environment. You get to make those rules. So, make them.
Brian: Yeah, very good. Make the rules. So, what are your thoughts on threat data sharing?
Dan: I think as a trend, where we’re going to go is towards a lot more distribution and openness. The data that’s going to be out there about threats is just, we’re just going to have to accept it [being out there]. In some instances, the bad guys will know that we know, because it is worse if the right good guys don’t know. And that is really what we do when we’re talking about open disclosure of vulnerabilities.
You could have a world where we found the five or ten most interesting parties that had a particular vulnerability. And we’re like, you really need to patch your SSL stack. We could do that, and the odds that we would get enough of the SSL stacks are zero. We wouldn’t. If you want to actually fix certain bugs, sometimes you just got to talk openly about it. So I really think that we’re going to see a trend towards what in the past were going to be forms of threat data sharing that we shied away from.
Certainly in the ad space we’ve been talking about, look, there’s some domains. There’s just bots there, and we’re just going to tell you who they are. And at one point, we were like maybe we don’t ever want to share [any of] that. And now, now there’s a realization, we need to start talking about our problems. You can’t manage what you can’t measure, and interestingly, you can’t predict who needs to be able to do the measuring.
Brian: Yep, I think you’re right. You know, one of the reasons we do this program is partly because we feel the need to try to get a larger audience, in terms of understanding what the threats are, what activities are taking place. And you’re right. There’s a tradeoff between making it publicly open versus trying to keep it closed to a closed group.
But as I think you’re pointing out, your opportunity for distribution, if you’re trying to do it in a closed community is much more limited, and the attackers know. They know what attacks they’re performing, and they know what things block them. And so ultimately, they’ve got the insights. We need to try to get the good guys with more insights.
Dan: It doesn’t mean that everything needs to be a big loud hooah about whatever, because sometimes it’s great to just fix things quietly. Let me tell ya’, I’ve done the big thing. I like the little thing, too. But I really do think particularly with threat data, we need to really, really start evaluating when we have it, could there be more good done if we were open with it?
Brian: You know, you mentioned – go ahead, Matt.
Matt: Well, I was going to step in and say, what are your thoughts on – if the data is truly open, as you’re saying, and it’s more of a distributed, where everyone can potentially be a source of that threat data, my concern would be vetting of that threat data. I’ve seen good intel, and I’ve seen terrible intel.
Dan: Oh, it’s true.
Matt: And if everybody’s feeding from those, all of the pools at once, you’re going to have your SOC, you know, flipping their lid over the number of false positives you keep tripping.
Dan: So raw – I’m mostly referring to raw intelligence –
Dan: – being anything that’s more open. The absolute problem – you’re entirely correct – is when there are too many sources. There’s too many opportunities for people to inject bad data. There’s a really fun attack class, where what you do – people are saying, oh, you know, there’s lots of IPs on the Internet. So if a few of them are attacking us, let’s just block them outright and, you know, whatever, they can go away. And so, what someone does is pretend to be the Internet’s DNS root servers. The Internet’s DNS root servers are “attacking” you, so they get blocked. And in eighteen hours, the network goes down. So you’re absolutely right.
It’s just that the ability to develop vetted intelligence is predicated on there being the availability of raw intelligence. I can’t tell you how many – there are entire attack classes that are not public. And no one can even start addressing defenses for them, until they become at least somewhat public.
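The auto-block trap Dan describes can be sketched in a few lines. This is a minimal illustration, not anyone’s production logic: a reported “attacker” IP is blocked automatically, except for a do-not-block list of critical infrastructure, because UDP source addresses are trivially spoofed. The two root server addresses shown are the published ones; a real list would come from the root hints file.

```python
# UDP source addresses are trivially spoofed, so naive "block whoever
# attacks us" logic can be turned against you: spoof the DNS root
# servers, get them blacklisted, and DNS resolution eventually dies.
DO_NOT_BLOCK = {
    "198.41.0.4",   # a.root-servers.net
    "192.33.4.12",  # c.root-servers.net
}

blocked = set()

def report_attacker(ip):
    """Auto-block a reported attacker, unless it is critical infrastructure."""
    if ip in DO_NOT_BLOCK:
        return "escalate"  # likely spoofed: hand to a human, don't block
    blocked.add(ip)
    return "blocked"

status1 = report_attacker("203.0.113.7")  # ordinary source: blocked
status2 = report_attacker("198.41.0.4")   # root server: escalated, stays reachable
```

An allowlist is the crudest mitigation; the deeper fix is not trusting spoofable evidence in the first place.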
Brian: So that that piece of threat intelligence, you know, you may report, there’s a command and control server at this IP address. But it only worked for that one attack, and that one time, in that one place. And for everybody else, it’s something different. That information is really useless in the threat intelligence world. So those kinds of things, you’re absolutely right.
Dan: Or maybe that means they don’t do the attack in the first place. Maybe they’re afraid they’ll get caught, because they’re seeing themselves getting caught.
Brian: Creating a deterrence is a very positive thing, absolutely. So, bringing it out to the public I think is a very helpful aspect of this. So, what do you think is the solution? How do we get a more open sharing environment?
Brian: You have a good point, yeah. We haven’t tackled that healing challenge yet.
Dan: And it is the kind of thing where I don’t even know if market forces are going to be sufficient in order to fund this. But the value I think to the global economy of really funding, hey, let’s run a defense for six months, and run it on a similar population, only have it be absent. Let’s have a placebo – a control group and an experimental group. Come back in six months and see if there’s a difference in infection rates.
This kind of work is actually a good thing to do, and it is not the kind of thing we’re doing in information security. So I think the path towards any of this stuff working is actually investing in finding out what works and what doesn’t, and it’s going to be expensive. It’s going to be really expensive.
Brian: Yeah. You know, we’ll talk about this, I think in sort of a broader context in a little bit here. So, let’s take a little quick break here. We’ll move over to Jim. And Jim – actually Dan, you had mentioned some work, you were with Microsoft. And Jim’s going to tell us a little bit about some changes in the patching processes. So, tell us about it Jim.
Jim: And he talked about how Microsoft with Windows 10, which is due out later this year, is going to change their patching processes a little bit.
It’s not going to be one big Patch Tuesday every month. For home users, they’re going to start making the patches available as soon as they’re ready, not holding them until the second Tuesday of the month.
So businesses will be able to set their own date, when within the month they want to apply patches. And they can wait a little bit after they’ve been tested out on the guinea pig home users. It’s kind of an interesting change in the way they go about doing things. We’ll have to see how well it works. I mean, it seems to work okay for the Linux distros these days. They release their patches whenever they’ve got them ready.
And one – I think it was the Register article that was explaining this – said this new policy looks sort of like apt-get update and apt-get dist-upgrade, which is how the Debian-based distros do it. This is a similar kind of thing. You can automate this and do it fairly quickly. So it’s going to be interesting to see how this works out.
As I said, it’s kind of appropriate that we’re mentioning this on Patch Tuesday. Because as I said, there were a whole bunch of new patches again this month, and three of them that Microsoft called critical. And a few more that I probably would have called critical, because they’re remote code execution. But we’ve discussed that in previous months, so.
Brian: Okay. What are your thoughts, Dan, about the changes in their patching practices? Do you think this is a good thing?
Dan: Sometimes, patches break a whole bunch of stuff.
Brian: Sometimes, they do. That’s certain.
Dan: If we can at least get to the point where there’s like the bleeding edge, the normal business, and the factory floor that absolutely never needs to change, that’s at least three. And that’s less than there might otherwise be.
Well, I’m going to be honest. It’s a hard problem to patch software, because there’s just so many moving parts. Google got into a ton of trouble when they made a dependency in Chrome on a new feature in the Linux kernel. But the default Ubuntu kernel didn’t have that feature. So Chrome just stopped working on Ubuntu. And as far as I know, that state continues. So this is the difficulty of software. We are constantly putting things together, and hoping, dreaming, assuming it’s going to work afterwards.
Chrome and Firefox have actually done a very good job of showing that yeah, you can actually really keep updating things. But remember, the dependency in the browser world is you got to work with the latest browsers. There’s an entire team at every major website that makes sure stuff still works. And let me tell you, when stuff is broken, yeah the browser guys try not to, but the web guys go ahead and are there to fix it.
What happens when it’s a business that has, like the IT guy, who’s maintaining some old binary code? There is no source around. That’s the kind of guy who’s like hey, I don’t want any moving parts in my operating system that are surprising me.
Brian: Yeah. So that’s an interesting dynamic, because when you start getting into the business aspects of it, it seems like it’s more going toward the needs of the many kind of thing – I’m using the Star Trek thing here – where the needs of the many get served, but the needs of the few start to get belittled. And so it’s that case where, are there really that many Ubuntu users of Chrome that needed to be out there? Is that really a priority? Or are they really just satisfying the majority of the users?
Dan: And a lot of companies have tried to go without, and eventually – this stuff comes in waves.
QA does well, but it’s slow. People are like, well, let’s just get rid of it and find out in the field – move fast and break things. And then they move fast and break things, and things are broken, and it’s really bad. So we’ll see exactly what ends up happening here. There are processes and procedures where the code you end up putting out is more likely to work in the first place. Or you put it out in waves. Certainly, that’s one of the big ways Chrome does it – you may not notice it, but you are randomly running future builds of Chrome all the time.
Brian: Right, right.
Dan: And that’s how they find out before they do a production release, is this something that’s going to go break everything? They actually get telemetry back. It really all does come back to telemetry. This is, you know, security engineering problems are, in very serious ways, just more engineering problems.
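The wave-based rollout Dan attributes to Chrome can be sketched as deterministic cohort assignment. This is an illustrative sketch, not Chrome’s actual mechanism: hash each user id into a bucket, ship the future build to a small fraction, and let that cohort’s telemetry gate the wider release.

```python
import hashlib

def rollout_bucket(user_id, canary_fraction=0.05):
    """Deterministically assign a user to the 'canary' or 'stable' wave.

    Hash the user id to a number in [0, 1); users below the canary
    fraction get the future build. Telemetry from that cohort decides
    whether the release widens. (Real systems also ramp the fraction
    over time and gate on crash rates.)
    """
    h = hashlib.sha256(user_id.encode()).digest()
    score = int.from_bytes(h[:8], "big") / 2**64
    return "canary" if score < canary_fraction else "stable"

# Roughly 5% of a large population lands in the canary cohort, and a
# given user always lands in the same cohort for a given release.
cohorts = [rollout_bucket(f"user{i}") for i in range(10000)]
share = cohorts.count("canary") / len(cohorts)
```

Hashing rather than random sampling matters: the same user stays in the same wave across restarts, so crash telemetry is attributable to a build.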
Brian: And if they don’t get complaints or they don’t get that negative telemetry back, then they can do a broader thing. And they’re not waiting for monthly cycles to do that.
Dan: Microsoft is right.
Brian: Yep, absolutely.
Matt: And maybe it’ll work, and you’re going to come out on the other side okay, or maybe you won’t.
Brian: Well, and in some cases, they can have a volunteer group that does that. You can sign up for, would you like the beta releases of something to evaluate them? But my suspicion is in most circumstances – I can’t speak for Microsoft in this case. But my suspicion is in most cases, it’s like well, let’s try it. And well, you know, if – maybe I need to reboot or something and then –
Dan: Well, it’s a specific style of engineering. Where if there’s a failure, you actually have like local rollback. It’s like hey, this was tried. It didn’t work. Don’t do any damage. And you got to be really careful when you do it, and it makes your patching and it makes your testing more expensive. But the reality is, is someone’s going to be the hamster.
Dan: You need to have the ability to update through problems.
You can have an infrastructure that can survive your 1 out of n, where n is unknown, but not impossible. One out of n times, a patch is going to break things. Figure out how to survive it. That was one of the big reasons why Windows update changed the world. It took Windows from a thing where attackers could assume that a bug today was always going to have a large population.
Dan: To one where it was like bugs had a timeline. And once they were going to go, that was like, they’re going to go. And it made things better. It made things a lot better.
Brian: I’m glad you brought that up. I’m not sure I’ve ever mentioned it on this show. But I absolutely agree with you. I think Microsoft really changed things when they did the automatic update. They weren’t the inventors. They were following, I think. But the –
Dan: I think they did it – they were the first ones to do it right at scale. And by that, I mean it wasn’t – updating systems is hard. And forget all the stability issues, although they’re pretty significant. Just secure –
Brian: A large diversity of different systems, yeah.
Dan: And sometimes they’ll be secure, unless there’s a bad guy, unless someone blocks the secure side. Then it goes, well, I need a patch, so let me get this random code. Oh, look at this, you know.
Brian: That must be better, because it’s not what I’m running now, right?
Dan: I wish you were joking, but that’s totally the design assumption.
Brian: I’m pretending to joke here.
Dan: Of course.
Brian: [So, Paul Vixie.] You’ve worked with him quite a bit, huh?
Dan: Yeah, he jokes, he spent six months in a well with me.
Brian: With a positive outcome.
Dan: We fixed DNS. We fixed a big part of it, to the degree it could be.
Brian: Yeah, that’s good. So, he made a recent proposal. And Matt, maybe you can tell us a little about it, and we’ll talk about it some more.
Matt: And it’s either going to be a thirty – could be anywhere from thirty minutes to an hour, to a week. The time period is something that I think people would still be debating for a long time. But the idea is if anybody can give reasons why this domain should not be able to be used, it would be denied. But it also has – that cool down period means that no one can use it in that time period. So anybody who’s registering large numbers of domains, and immediately using them and throwing them away, will no longer have this advantage.
But, I feel like there’s always edge cases in a system like this, where if you throw a monkey wrench into the flow of things, it will have a bigger impact that maybe you haven’t quite thought about yet. I’m not saying he hasn’t thought about it. But I’m saying I don’t know what it is yet, personally.
Brian: Well, I’m going to ask Jim’s opinion on this. Because I think it was just a week or two ago, Jim, that you talked a little bit about a domain name generator algorithm that was – what was it using? The exchange rates as one of the feeders into –
Jim: Right. He was using European Central Bank euro exchange rates in their algorithm, yeah.
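For illustration, here is what a domain generation algorithm looks like in general. This is a deliberately simplified sketch, not the actual malware Jim described: malware and operator share the algorithm and a seed, derive the same daily candidate list, and the operator only has to register one entry. Seeding on hard-to-predict public data, like that day’s exchange rates, stops defenders from precomputing and pre-registering the domains.

```python
import hashlib
from datetime import date

def dga_domains(seed, day, count=5):
    """Toy domain generation algorithm (NOT the malware's real one).

    Both the malware and its operator run this with the same seed and
    date, so both independently arrive at the same candidate domains.
    """
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.md5(material).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

# The seed stands in for a daily published value, e.g. a EUR exchange rate.
today = dga_domains("1.0921", date(2015, 5, 12))
```

This is exactly the behavior a registration cool-down period would squeeze: the operator could no longer register today’s domain and use it today.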
Brian: So, it sounds to me like this is really a proposal to try to put a deterrence against that sort of thing. That is, if there is a domain name generator, you would have to have some type of a way to predict what it’s going to be, so that you could get past that wait period before the domain name could be actually activated. In which case, hopefully somebody else has some knowledge of it, and you’d be able to sway its potential use in that malicious activity. So, I don’t know. Do you have any thoughts on this, Dan?
Dan: When Paul says, there’s really not a legitimate use for a lot of these domains that have only been around for thirty seconds, he’s probably right.
Now, third-level domains, people are generating random third-level domains all the time, because there’s all these interesting reasons why you want to have randomized data contained inside of a DNS label. That has a bunch of legitimate uses. But second-level stuff, he’s right. You know, when you have something where 99.99999 percent of uses are illegitimate, you got to kind of look askance and say hey, you know, maybe this is where we put some pressure.
Brian: Yeah, this is a terrible analogy but I can’t help but think. When I read it, I was thinking this is what has to happen when you go to buy a weapon. You know, you got to buy a weapon and they say well, we want to make sure you’re not mad, and buying a weapon. Or having some malicious intent planning to rob a bank or something, and buying a weapon. So, but DNS is not a weapon, obviously. But it’s a case that can be – it’s a tool that can be used in nefarious ways.
Dan: Like, think about how much money Amazon and Apple and Microsoft have paid to have their domain names, versus how much money those names made for them. Like, nano-pennies on the dollar.
And that wasn’t going to be the way for AOL or Minitel or all the pre-Internets. But on this Internet, it’s very inexpensive to go on. You don’t pay a gatekeeper tax, and that’s really part of the heart of the success of this Internet. Where things get to be a bit of a headache is low friction for good honest providers is also low friction for fraudsters. And so, a real observation is that the fraudsters are trying to leverage the availability of the DNS, because it is the most available thing available.
They’re leveraging that availability. They’re using stolen funds to buy all these domain names. And maybe there’s an argument that they don’t necessarily have to work quite that quickly. That those who wish to defend themselves should be able to use the age of the domain. And in fact, the domain should take a little while to age into legitimacy.
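The “age into legitimacy” idea can be sketched as a reputation check against first-seen data. Everything here is hypothetical: the first-seen table stands in for a passive DNS or registration feed, and the seven-day threshold is arbitrary.

```python
from datetime import datetime, timedelta

# Hypothetical first-seen data; a real feed would be passive DNS or
# registrar zone files.
FIRST_SEEN = {
    "example.com": datetime(1995, 8, 14),
    "a9f3k2q.com": datetime(2015, 5, 12, 9, 30),
}

def reputation(domain, now, min_age=timedelta(days=7)):
    """Score a domain by how long it has been observed to exist."""
    seen = FIRST_SEEN.get(domain)
    if seen is None:
        return "unknown"   # never observed: the most suspicious case
    if now - seen < min_age:
        return "too-new"   # still inside the cool-down window
    return "aged"

now = datetime(2015, 5, 12, 10, 0)
r1 = reputation("example.com", now)       # long-established
r2 = reputation("a9f3k2q.com", now)       # first seen this morning
r3 = reputation("zzz-not-seen.com", now)  # never observed
```

Defenders already do versions of this in mail and web filtering; Vixie’s proposal would effectively build the waiting period into registration itself.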
Brian: Right. Like any good liquor, right?
Dan: Yeah, right?
Dan: That’s right. Jack Daniel’s security.
Brian: Well hopefully, you don’t have to wait twenty years for it, right? I think they age it three years or four years or something. So anyway, we’ve talked a lot about DNS. We’ll take it a step further here. You’ve been a big proponent of DNSSEC, but it’s not quite there yet. I don’t know if you know this, but we’ve got a little back and forth. I have sort of some reservations about DNSSEC. But by the same token, let’s talk about it a little bit. Where do we need to go?
Dan: Because we have a law called HIPAA that says if you can’t communicate securely, you can’t communicate at all.
Brian: Right, right. So we haven’t tackled – we really haven’t managed the key management activities.
Dan: It lets you get key material as easy as you get basic connectivity.
Brian: So it sounds like you’re going beyond DNS per se. It’s not necessarily just looking for domain names, but perhaps to use it for key material distribution.
Dan: The whole point of DNSSEC – the real point – is to get security as functional as connectivity. Like, it’s not a coincidence that we have no DNS in security and we don’t have security that scales. It’s a consequence. That’s why it’s not scaling. You look at what the world would look like for IP connectivity if you didn’t have DNS, and it looks exactly like the nightmare of key management.
And the nightmare of key management is very specifically that it is very difficult to automate. We have to get significant automation in security if we want any hope of solving a lot of our problems with the resources we have available. And where that’s going to ultimately go is we’re going to use the DNS as our cross organizational key store. This is what’s going to happen.
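The “DNS as cross-organizational key store” idea, very roughly: publish key material under a well-known name and fetch it as easily as an A record, with DNSSEC providing the integrity. This sketch fakes the resolver with a dict and invents the `_key.` naming convention purely for illustration; real designs along these lines include DANE/TLSA records.

```python
import base64
import hashlib

# Stub "zone": in reality this would be a DNSSEC-signed TXT or TLSA
# record set, and the client would do a validated DNS query.
ZONE = {
    "_key.mail.example.com": base64.b64encode(b"fake-public-key-bytes").decode(),
}

def fetch_key(name):
    """Fetch published key material for a name, as easily as an A record."""
    record = ZONE.get("_key." + name)
    if record is None:
        raise LookupError(f"no key published for {name}")
    return base64.b64decode(record)

key = fetch_key("mail.example.com")
fingerprint = hashlib.sha256(key).hexdigest()[:16]
# Key discovery now scales exactly like name resolution does: delegated,
# cached, and automatable across organizations.
```

The automation point is the whole argument: once keys are discoverable by name, key management stops being a manual, per-relationship ceremony.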
Brian: So, I agree with you thoroughly. We need to do something to improve the security on DNS. And I guess, where my reservation comes in is completely separate from that point. I think it has more to do with the way we went about implementing security for DNS. That is some of the fundamental issues that we deal with on DNS – we were talking a little bit earlier about reflective attacks and the opportunity for using UDP-based protocols in a nefarious way. It’s really just any UDP protocol that has this problem.
Dan: DNS gets blamed because there’s this record or that record. Who cares what the records are? The point is, the underlying IP layer and our underlying ability to trace DDoS floods have a problem. That’s what we need to fix.
Brian: Is there a time when we should, if we suspect something, switch to TCP?
Dan: The weird thing is what to do about the fact that with UDP-based protocols, there’s no evidence that the other side actually wants to talk to you.
Dan: We used to have a thing in IP called source quench, where the thing on a generic way could say hey, stop talking to me.
Brian: And it might listen.
Dan: And the answer is to actually start investing in mechanisms for pushback, where we get automation throughout the traceback flow, throughout the shutdown flow, throughout firewalling. And it’s doable, but we got to do it. But protocol design is a mess right now. The real world is like: Hi. The first thing you get to do is route everything over HTTP and probably HTTPS, because there’s something in the middle that might mess with you. Protocol design is sausage engineering in 2015. You really don’t want to know.
Brian: Yeah, to your point. One of my slogans has been that on the Internet, there are no rules. There are generally guidelines.
Brian: And I think that’s fundamentally what we have to sort of overcome. As we really need to kind of lay some groundwork on what are really good practices, and have some means to enforce that. And I think that was one of the topics that you kind of had here is that, you know, how are we going to fix this? Is there a way to really improve our situation from a security standpoint?
Dan: One of the quotes [large financial institutions] gave me is: we don’t compete on security. Because if any of us gets hit, we’re all getting hit.
Dan: There’s significant tooling that everybody needs to exist. And in some ways, as professionals in information security, we’re the only people directly exposed to the problem. We’re the people in the muck, dealing with all this stuff. The tools we build to start dealing with it – that stuff needs to be shared a lot more than I think it already is.
Brian: Yeah, sharing a lot more. You know, and in one respect, I think it may be just fundamental information overload for the, you know, your practical human being. That is us, as practitioners in security, we’re paying attention to the security aspects. But for the folks that are not practitioners in security, it’s an overwhelming amount of information that needs to be comprehended, in order to do a completely separate activity.
Dan: What do we do for them? You know, there’s one thing about like building hard things that are hard. But there’s another thing – there’s like building hard things, so that the next guy, it’s easy.
Brian: That’s exactly – build the modularity, so it’s fool-proof.
Dan: You know what? There are a lot of people out there who can see a crash, but have no idea what to do with the crash. I’ve got 100 crashes. Which are the ones that I need to go ahead and prioritize in the bug database? Because it’s a problem. And Microsoft said, fine, here’s a tool. Type this, it will tell you. And that is the path to follow. How do we find our problems and figure out what makes them easy to solve? That can be open source that’s out there. That can even be just releasing reviews and experiences of commercial products. The stuff that makes things better has to be widely known to make things better.
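The Microsoft tooling Dan is describing is their crash-triage work (the "!exploitable" debugger extension, which buckets crashes and rates exploitability). A hypothetical sketch of that style of triage – the bucket-by-stack-hash scheme and severity labels below are illustrative, not Microsoft’s actual algorithm:

```python
from collections import defaultdict

# Illustrative severity buckets, loosely echoing !exploitable's ratings.
SEVERITY = {"EXPLOITABLE": 0, "PROBABLY_EXPLOITABLE": 1, "UNKNOWN": 2, "NOT_LIKELY": 3}

def triage(crashes):
    """Deduplicate crashes by a hash of their top stack frames, keep the
    most severe crash in each bucket as its representative, and return
    representatives sorted worst-first. Each crash is a dict with
    'stack' (tuple of frame names) and 'rating' (a SEVERITY key)."""
    buckets = defaultdict(list)
    for c in crashes:
        buckets[hash(tuple(c["stack"][:3]))].append(c)  # bucket on top 3 frames
    reps = [min(b, key=lambda c: SEVERITY[c["rating"]]) for b in buckets.values()]
    return sorted(reps, key=lambda c: SEVERITY[c["rating"]])
```

With 100 raw crashes, a tool like this answers the question directly: here are the distinct bugs, and here is the one to file first.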
Brian: Yeah, you know, an analogy that just came to mind is I wanted to build a shed to store my junk on my property. And I am terrible with a hammer. My solution? Bought a nail gun.
Dan: There you go.
Brian: It’s a tool that made the job easy, and it was only a couple hundred dollars. I love it. Just don’t try to nail on your own.
Matt: But if you had known that you can buy sheds down at Home Depot for around $70, you might have gone that route. But you didn’t have the information yet.
Brian: I bought a kit. So Matt, tell us a little bit about a new kind of rootkit? Is it new?
Matt: Well, it depends on what machine you’ve got. But the thing about a GPU is, it’s a standalone processor that you slide into your machine. It handles graphics functions, but it can also be used for other functions. It is a full-on processor. People often use them for bitcoin mining, or hash cracking, or other computationally intensive stuff. So someone has written code that runs entirely in there, stores itself to the memory on this card, and is effectively invisible to most antivirus.
So this is a rootkit, so it has the ability to hide other code. You might use it to hide your malware, which is still running on the CPU. And it can access system memory using DMA, Direct Memory Access. So it’s interesting. Like I was saying, this exists in other forms as well. People have written code that runs entirely on the controller of a hard disk. So again, if it’s running on a separate machine – and it is a fully separate machine – your antivirus is not going to be looking for it. Or at least, today’s antivirus is not going to be looking for this.
Brian: It’s almost like an IoT thing, an Internet of Things thing, but it’s just not a network interface. It’s like a PCI interface, for example.
Matt: Standalone GPUs don’t exist in all hardware – in all PCs. And I think that if you want to have malware that’s truly successful, that spreads widely and runs on most platforms, you wouldn’t limit yourself to hardware where you’re on the fence as to whether most of your targets will have it.
Brian: Would this be kind of specialized to particular GPUs as well?
Matt: I’m not actually sure about that. And I guess it depends on the architecture of the GPU, and I’m not an expert on them. I would defer to somebody else.
Dan: I’m used to [this question].
Dan: Granted it’s the big one, but all those other ones mutually trust each other. See, the way it works when you’re doing computer engineering is, it’s like man, you know, making the CPU’s spend all this time dealing with this fiddly problem is really inefficient. Let’s take that problem and put it on a dedicated device. And then it’ll just like access memory, and send events saying, I did the job. So you compromise the external device, and you get all the access, and you don’t have to deal with all that pesky inspection.
So, it’s of course not limited to GPUs. Most likely, it’ll have to be customized. There are two things you’re trying to do when you operate off the main CPU. One, you’re trying to evade detection during that particular boot. Potentially, there’s dedicated memory where no one can see what you’re running, what you’re pulling, what you’re doing.
You’re also trying to achieve persistence. There’s a reason why there are facilities where, if a machine is compromised, you throw it out. It’s a very expensive solution to the problem, but it’s also the only way to be sure.
Brian: You know, rsync is basically a tool to synchronize files between two systems. It’s oftentimes used as a backup tool, or to maintain redundant systems. We’re seeing probing activity for it taking place across the Internet. It’s actually a single source in China that’s doing most of this probing activity. They’re also probing a variety of other ports. I didn’t try to enumerate those here, but a number of other ports.
It would be indicative of trying to perform penetration activities against systems. So, keep an eye out. If you are using rsync over the Internet, you’ll want to pay attention to that. And even if it’s not intended to be on the Internet, you might want to make sure that it’s not exposed to the Internet.
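One practical way to act on that advice is to check, from a vantage point outside your perimeter, whether rsync’s registered port (873/tcp) answers at all. A minimal sketch using only the standard library; host names you pass in are your own:

```python
import socket

def port_open(host: str, port: int = 873, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    the timeout. 873 is rsync's registered port; a True result when
    run from outside your network means the daemon is Internet-exposed."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it from outside your network boundary; a reachable rsync daemon that wasn’t meant to be public is exactly the exposure Brian is warning about.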
Jim: The timing of that is interesting, because as I recall, back in September or so, there was a vulnerability in rsync on some load balancers. I don’t know exactly what the timeframe was when that scanning started, but it looked like it might have been back in that vicinity.
Brian: We’re showing 120 days of an activity, and it was actually in the beginning of March that we saw sort of an uptick in this activity.
Well, it turns out that Jordan Wright had done a couple of blogs on this particular topic. One, actually just from yesterday, May 11th, where he had been tracking hackers attacking Elasticsearch over 60 days. In fact, he had found a vulnerability that perhaps is associated with this particular activity. Dan, you had taken a little bit of a look at this. Any comments?
Brian: Yeah, really.
Dan: No, no. There are remote code execution vulnerabilities, and then there’s this, where there’s just a field that’s like, please place the code that you would like us to execute to run across this search. And at some point fairly recently, they were like, oh, maybe we should put that into the Java sandbox, which is basically a discredited sandbox. It’s very clear this is a thing that’s easy to break out of.
Brian: Yeah, I had a lot of problems.
Brian: Right. You know, to have a feature like this in a closed environment. You know, we talk oftentimes about having layers of defense. And if your only layer of defense is a sandbox, it’s probably not a good defense.
Dan: So it’s a very good point that this feature’s totally fine if you are running the code as kind of a local thing. But by default, it wasn’t installed as a local thing. It listened on all interfaces on port 9200. And then, shockingly, now people are scanning everything on the Internet on 9200. So hopefully, this is getting managed.
Brian: And there is the potential, you know, like we saw with the Bash vulnerability. Where there is the potential that perhaps there is a frontend interface to an Elasticsearch system. Perhaps they’re not scanning port 9200 here, but it could be a web interface that would potentially expose this vulnerability as well, I presume.
Dan: I always like to talk about the million most important lines of code – the lines that are exposed to attackers and would cause problems across the global Internet. It should be the same for your organization. It’s okay to use open source. Everyone does, and it’s really good stuff. But when there are problems like this, especially as you say, what you’re doing is what John Lambert at Microsoft calls thinking not in terms of lists, but in terms of graphs. It’s not just what’s exposed on 9200. It’s what’s exposed on 80 that forwards stuff to 9200. Because that is how you find really good attacks.
Brian: Good, from an attacker’s point of view.
Dan: Well, yes.
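The graphs-versus-lists idea can be made concrete: model which services forward to which, then walk the graph from the Internet-facing entry points. The topology below is entirely hypothetical – the point is that Elasticsearch never appears in a list of externally listening ports, yet it is still reachable:

```python
from collections import deque

def reachable(forwards, entry_points):
    """Breadth-first walk of a service-forwarding graph. `forwards`
    maps a service name to the services it forwards requests to;
    returns every service transitively reachable from the
    Internet-facing entry points."""
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        svc = queue.popleft()
        for nxt in forwards.get(svc, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical topology: nothing listens to the Internet on 9200, but
# the web tier forwards search queries there anyway.
topo = {"web:80": ["search-api:8080"], "search-api:8080": ["elasticsearch:9200"]}
print(reachable(topo, ["web:80"]))  # includes elasticsearch:9200
```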
Brian: Okay. This next – go ahead.
Jim: Yeah, well, and the guy, Jordan Wright, who did the blog post that you were looking at a minute ago, also released a honeypot, Elastichoney, that I’m going to throw up on one of our honeypots, and see what we can get out of that. It sits on 9200 and pretends to be Elasticsearch.
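In the spirit of Elastichoney (this sketch is not Jordan Wright’s actual code), a fake Elasticsearch node is just an HTTP listener that returns a plausible banner and records who asked. The version strings below are illustrative:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative banner; real Elasticsearch returns similar JSON on "/".
BANNER = {
    "status": 200,
    "name": "honeypot-node",
    "version": {"number": "1.4.2", "lucene_version": "4.10.2"},
    "tagline": "You Know, for Search",
}

class FakeES(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log the probe, then answer with the fake banner.
        print("probe from %s: GET %s" % (self.client_address[0], self.path))
        body = json.dumps(BANNER).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence default stderr logging
        pass

# To deploy: HTTPServer(("0.0.0.0", 9200), FakeES).serve_forever()
```

Anything that connects to this on 9200 is, by definition, scanning – exactly the data Jim wants to collect.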
Brian: Yeah, very cool. We’ll look forward to some results from that. Next item here is we have flows, packets, and bytes that were off the charts, relatively speaking, on port 53/udp. That’s DNS. We talked a lot about DNS today. We’re showing 30 days of activity. And really what this amounts to, this was actually a reflection attack. And it turned out that this was a reflection using NTP, so the source port is 123. The destination port happened to be 53. So they were targeting –
Dan: Can’t they just pick one?
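For readers following along with flow data: reflection attacks like this one are often spotted by their tell-tale source port, because the traffic arrives from the reflector’s service port (here, NTP’s 123). A minimal sketch of that classification; the port list is illustrative, not exhaustive:

```python
# Common UDP services abused as reflectors, keyed by the source port
# the reflected traffic arrives from.
AMPLIFIER_PORTS = {19: "chargen", 53: "dns", 123: "ntp", 161: "snmp", 1900: "ssdp"}

def classify_reflection(flow):
    """Given a flow record dict with 'proto' and 'src_port', name the
    likely reflector protocol, or return None if the flow doesn't
    match the reflected-traffic pattern."""
    if flow["proto"] != "udp":
        return None
    return AMPLIFIER_PORTS.get(flow["src_port"])

# The attack in the chart: NTP responses (source port 123) aimed at port 53.
print(classify_reflection({"proto": "udp", "src_port": 123, "dst_port": 53}))  # ntp
```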
Brian: Looking at the top ten most probed ports. At the top of the list here, we have port 80. Look at that, port 80 is through the roof. That’s rather unusual here.
And probing, by the way – this is looking for sources that are making connections to lots of different addresses on a common port or a handful of ports. We track that activity as probing or scanning activity on the Internet, and it helps us to identify this sort of activity. Port 80 is normally probed quite a bit. It usually shows in the top ten, but not at this proportion. So this is a little bit of an anomaly – a big anomaly that we’ll take a closer look at. Followed by ports 22/tcp and 23/tcp, no surprises there. Port 445, can you believe it? Still Conficker on the Internet.
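The probing definition Brian gives – one source touching many distinct addresses on a common port – translates directly into a flow-record heuristic. A minimal sketch, with an arbitrary threshold:

```python
from collections import defaultdict

def find_probers(flows, threshold=100):
    """Flag (source IP, destination port) pairs where one source
    touches many distinct destinations on the same port -- the probing
    pattern described above. `flows` is an iterable of
    (src_ip, dst_ip, dst_port) tuples; the threshold is illustrative."""
    targets = defaultdict(set)
    for src, dst, port in flows:
        targets[(src, port)].add(dst)
    return {key for key, dsts in targets.items() if len(dsts) >= threshold}
```

Ranking the flagged pairs by port is essentially how a "top ten most probed ports" chart like this one gets built.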
Brian: It appears to be actually sort of a SYN flood against a block of addresses – about a /23 address block.
And so you see lots of flows from each of these source addresses being thrown at those – on the order of tens of millions of flows, of course. So it appears to be a SYN flood against a block of addresses that are located in China. They appear to be associated with video game hosting. So it appears that perhaps somebody has a bit of a beef against them. We don’t have intimate details of that, however. Interesting – we’ll call it a false positive in the class of probing activity.
Next one here is probes on port 23/tcp. That’s Telnet, and we do have an increase in that. We’re showing 90 days of activity here. And over the last week or so, you can see that there’s been an uptick in that activity. We’re going to take a look at that, in terms of the number of sources doing that probing, in a couple of minutes here. And then looking at the – in fact, in a couple of seconds – most sources doing the probing, port 23 is at the top of the list. It’s clearly far above the others, and moved up a couple of places relative to last week. Followed by port 445/tcp.
And then we also have some other ports. We’re going to take a look at port 23 and port 17788 a little bit more closely.
You know, we had identified this as being very indicative of BitTorrent activity, and it appears to be associated with some pirated video content basically being distributed toward China. The reason I brought this up again – we’ve reported on this a couple of times – is that whatever the activity here is, they had a little bit of a disruption in service. And that seems to be pretty typical – even the folks that are doing bad things have reliability issues they have to deal with.
So in any case, that’s our show for today. We’d like to thank you for joining us. And if you’d like to get in touch with us, you can email us at firstname.lastname@example.org. And you can find ThreatTraq on the AT&T Tech channel. It’s att.com/threattraq. It’s on YouTube and on iTunes. You can follow us on Twitter. Our handle is @ATTSecurity. And Dan, your Twitter handle.
Dan: @dakami, D-A-K-A-M-I.
Brian: All right, so we really appreciate your feedback. If you’d like to share your thoughts or questions, we look forward to hearing that. I’d like to thank you, Dan, for joining us today. Very much a pleasure. I really enjoyed speaking with you here today.
Dan: This was a lot of fun.
Brian: Thank you, Matt. Thanks Jim. I’m Brian Rexroad. We’ll be back next week with a new episode. And until then, keep your network safe.