Sandia LabNews

Looking for nefarious intent in the cyberworld


Jeremy Wendt is sharpening the tools needed to foil nefarious spearphishers. (Photo by Randy Montoya)

The weakest link in many computer networks is a gullible human.

With that in mind, Sandia researcher Jeremy Wendt (5632) is trying to figure out how to recognize potential targets of nefarious emails and put them on their guard.

He’s working to reduce the number of visitors that cyberanalysts have to check as possible bad guys among the tens of thousands who visit Sandia websites each day.

His ultimate goal is to spot spearphishing. Phishing is sending an email to thousands of addresses in hopes a few will follow a link and, for example, fall for a scam offering millions of dollars to help a Nigerian prince wire money out of his country.

Spearphishing, on the other hand, targets specific email addresses that have something the sender wants. “Spearphishing is scary because as long as you have people using computers, they might be fooled into opening something they shouldn’t,” Jeremy says. Even if an outsider gets into a Sandia machine that doesn’t have much information, that access makes it easier to get into another machine that may have something, he says.

Jeremy has been working on algorithms that separate web crawlers from people using browsers, and he has been able to split those groups. He believes the work to date will help security because it allows analysts to look at groups separately.

Identifying malicious intent

Cybersecurity’s Roger Suppona (9317) says the ability to identify the possible intent to send malicious content might enable security experts to raise a potential target’s awareness. “More importantly, we might be able to provide sufficient specifics that would be far more helpful in elevating awareness than would a generic admonition to be suspicious of incoming email or other messages,” he says.

Jeremy, in the final stretch of a two-year Early Career Laboratory Directed Research and Development grant, presented his work last year at a Sandia poster session.

He has been looking into behaviors of web crawlers vs. people browsing to see if that matches how computers identify themselves when asking for a webpage. A browser’s computer generally says it can interpret a particular version of HTML — HyperText Markup Language, the main language for displaying webpages — and often gives browser and operating system information. Crawlers identify themselves by program name and version number. A small number Jeremy calls “nulls” offer no identification, perhaps because the programmer omitted that information, perhaps because someone wants to hide.
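The article doesn’t publish the algorithms themselves, but purely as an illustration of the kind of sorting described above, a few lines of Python can bucket raw user agent strings into browsers, declared crawlers, and nulls. The keyword patterns here are assumptions chosen for the sketch, not anything from the research.

```python
import re

# Illustrative keyword patterns; assumptions for this sketch, not Sandia's rules.
CRAWLER_HINTS = re.compile(r"bot|crawl|spider|slurp|fetch", re.I)
BROWSER_HINTS = re.compile(r"mozilla|chrome|safari|firefox|msie|trident|opera", re.I)

def classify_user_agent(ua):
    """Bucket a raw user agent string as 'browser', 'crawler', or 'null'."""
    if not ua or not ua.strip():
        return "null"      # no identification offered at all
    if CRAWLER_HINTS.search(ua):
        return "crawler"   # declares a program name, e.g. "Googlebot/2.1"
    if BROWSER_HINTS.search(ua):
        return "browser"   # declares browser and operating system details
    return "null"          # unrecognized self-description; treat as suspect
```

The crawler check runs first because many crawlers also include browser-style tokens such as “Mozilla” in their strings.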

What Jeremy is looking for is a computer that doesn’t identify itself, or that says it’s one thing but behaves like another, and that trolls websites in which the average visitor shows little interest.

Going to an Internet site creates a log of the visit. Sandia traffic is about evenly divided between web crawlers and people browsing. Crawlers tend to go all over the site; browsers concentrate on one area, such as the jobs pages.

Crawlers, also known as bots or robots, are automated programs that follow links the way search engines such as Google or Bing do. “When we get crawled by a Google bot, we aren’t being crawled by one visitor, we’re being crawled by several hundreds or thousands of different IP addresses,” Jeremy says. An IP, or Internet Protocol, address is a numerical label assigned to each device on a computer network, identifying the machine and its location.

Distinguishing bots from browsers

Jeremy wants to distinguish bots from browsers without having to trust they are who they say they are. He expects some are lying, so he looked for ways to measure behavior.

The first measurement deals with the fact that bots try to index a website. When you type in search words, the web crawler looks for pages associated with those words, disregarding how they’re arranged on a page. Since that text lives in HTML files, a bot pulls down HTML far more often than images, scripts, or layout files.

Jeremy first looked at the share of each visitor’s downloads that were HTML. Bots should show a high percentage; browsers, which also fetch a page’s images, code, and layout files, pull down much smaller ones.

More than 90 percent of the nulls pulled down nothing but HTML — typical bot behavior.
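As a rough sketch of how such a percentage could be computed from ordinary web server logs (the record fields here are assumptions made for illustration), one might tally the share of each visitor’s requests that come back as HTML:

```python
from collections import defaultdict

def html_fraction(log_records):
    """Return, per visitor, the fraction of requests that were HTML pages.

    Assumes each record is a dict with 'client' (who made the request) and
    'content_type' fields; real server logs would need some parsing first.
    """
    totals = defaultdict(int)
    html = defaultdict(int)
    for rec in log_records:
        totals[rec["client"]] += 1
        if "text/html" in rec.get("content_type", ""):
            html[rec["client"]] += 1
    return {client: html[client] / totals[client] for client in totals}

# Declared bots and nulls would be expected to score near 1.0; browsers,
# which also fetch images, scripts, and style sheets, would score far lower.
```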

A single measurement wasn’t enough, so Jeremy devised a second based on another marker of bot behavior: politeness.

Bots could suck down webpages from a server so fast that the server would become unusable for everyone else, Jeremy says. That might prompt the site administrator to block them.

So bots take turns. “They say, ‘Hey, give me a page,’ then they may crawl a thousand other sites taking one page from each,” Jeremy says. “Or they might just sit there spinning their wheels for a second, waiting, and then they’ll say, ‘Hey, give me another page.’”

Browsers go after only one page but want all of its images, code, and layout files instantly. “I call that a burst,” Jeremy says. “A browser is bursty; a crawler is not bursty.” A burst is defined as a certain number of requests within a certain number of seconds.
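The article doesn’t say how many requests in how many seconds count as a burst, so the sketch below uses placeholder thresholds just to show the shape of the measurement over one visitor’s request times:

```python
def count_bursts(timestamps, min_requests=5, window_seconds=2.0):
    """Count bursts: stretches where at least `min_requests` requests
    arrive within `window_seconds` of one another.

    `timestamps` is a sorted list of request times, in seconds, for a
    single visitor. The thresholds are placeholders for illustration,
    not the values used in the research.
    """
    bursts = 0
    in_burst = False
    start = 0
    for end in range(len(timestamps)):
        # Slide the window's start forward until it spans at most window_seconds.
        while timestamps[end] - timestamps[start] > window_seconds:
            start += 1
        if end - start + 1 >= min_requests:
            if not in_burst:
                bursts += 1      # count each burst once, when it first forms
                in_burst = True
        else:
            in_burst = False
    return bursts

# A browser loading one page plus its images, code, and layout files should
# register at least one burst; a polite crawler should register none.
```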

What ‘bursty’ behavior indicates

Ninety percent of declared bots had no bursts and none had a high burst ratio. Sixty percent of nulls also had no bursts, lending credence to Jeremy’s belief they’re bots.

But 40 percent showed some bursty behavior, making them hard to separate from browsers. Normal browsers, however, fall within a predictable range on both measurements. When Jeremy combined the two metrics, most nulls fell outside that range.

That left browsers who behaved like bots. “Now, are all these people lying to me? No. There could be reasons somebody would fall into this category and still be a browser,” Jeremy says. “But it distinctly increases suspicions.”
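Tying the two sketches above together, a combined check might look something like this, again with made-up thresholds rather than values from the research:

```python
def looks_like_a_browser(html_frac, bursts,
                         max_html_fraction=0.5, min_bursts=1):
    """Crude two-metric check: typical browsers mix HTML with many other
    file types and show at least one burst. The thresholds are
    placeholders chosen for illustration."""
    return html_frac <= max_html_fraction and bursts >= min_bursts

# A visitor whose user agent string claims to be a browser but who fails
# this check (mostly HTML, no bursts) is the kind of mismatch that raises
# suspicion without proving anything on its own.
```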

So he also looked at IP addresses. Unlike physical addresses, IP addresses can change. Say you plug your laptop into the Internet at a coffee shop, which assigns you an IP address. After you leave, someone else shows up and gets the same IP address. So an IP address alone doesn’t necessarily distinguish users.

There’s another identifier: the user agent string, which reports a particular browser running on a particular operating system. There are thousands of distinct strings.

IP addresses and user agent strings can each collide, but Jeremy says the odds are dramatically lower that two people will collide on both the same IP address and the same user agent string within a short period such as a day. Visits that don’t share both are therefore probably different people.
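As a sketch of that idea (the grouping key and the one-day granularity are assumptions made here for illustration), log records could be bundled into approximate visitors by keying on the IP address, the user agent string, and the day:

```python
from collections import defaultdict

def group_visitors(log_records):
    """Group requests into approximate visitors.

    Two requests are treated as the same visitor only if they share an IP
    address and a user agent string on the same calendar day, a combination
    far less likely to collide by chance than the IP address alone.
    Assumes each record is a dict with 'ip', 'user_agent', and 'timestamp'
    (a datetime) fields.
    """
    visitors = defaultdict(list)
    for rec in log_records:
        key = (rec["ip"], rec["user_agent"], rec["timestamp"].date())
        visitors[key].append(rec)
    return visitors
```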

Now he needs to bridge the gap between splitting groups and identifying targets of ill-intentioned emails. He has submitted proposals to further his research after the current funding ends this spring.

“The problem is significant,” he says. “Humans are one of the best avenues for entering a secure network.”