Army Researching Network System That Defends Against Social Engineering

Nerval's Lobster writes "The U.S. Army Research Laboratory has awarded as much as $48 million to researchers trying to build computer-security systems that can identify even the most subtle human-exploit attacks and respond without human intervention. The more difficult part of the research will be to develop models of human behavior that allow security systems to decide, accurately and on their own, whether actions by humans are part of an attack (whether the humans involved realize it or not). The Army Research Lab (ARL) announced Oct. 8 a grant of $23.2 million to fund a five-year cooperative effort among a team of researchers at Penn State University; the University of California, Davis; the University of California, Riverside; and Indiana University. The five-year program comes with the option to extend it to 10 years with the addition of another $25 million in funding. As part of the project, researchers will need to systematize the criteria and tools used for security analysis, making sure the code detects malicious intrusions rather than legitimate access, all while preserving enough data about any breach for later forensic analysis, according to Alexander Kott, associate director for science and technology at the U.S. Army Research Laboratory. Identifying whether the behavior of humans is malicious is difficult even for other humans, especially when it's not clear whether users who opened a door to attackers knew what they were doing or, conversely, whether the "attackers" are perfectly legitimate and it's the security monitoring staff who are overreacting. Twenty-nine percent of attacks tracked in the April 23, 2013 Verizon Data Breach Investigations Report could be traced to social-engineering or phishing tactics whose goal is to manipulate humans into giving attackers access to secured systems."
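As a toy illustration of what such a behavioral model has to do, the sketch below baselines a user's normal activity, flags large deviations, and preserves the raw observations for later forensics. The z-score rule and every name and number here are assumptions for illustration; the actual ARL research aims far beyond this.

```python
import statistics

# Toy sketch of behavioral modeling: baseline each user's normal activity,
# flag large deviations, and keep the raw observations for forensics.
# The z-score rule and all numbers are illustrative assumptions.

forensic_log = []  # preserve evidence about every decision for later analysis

def is_anomalous(history, todays_value, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from
    the user's own baseline."""
    forensic_log.append({"history": list(history), "observed": todays_value})
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid divide-by-zero
    return abs(todays_value - mean) / stdev > threshold

# Files accessed per day over two weeks, then a sudden bulk download:
baseline = [12, 9, 15, 11, 10, 13, 12, 14, 10, 11, 13, 12, 9, 14]
print(is_anomalous(baseline, 220))  # True: worth a human (or automated) look
```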
  • So... (Score:5, Interesting)

    by camperdave ( 969942 ) on Thursday October 10, 2013 @12:43AM (#45088467) Journal
    So... Will this system also detect malicious government attacks and bouts of official stupidity?
  • by Anonymous Coward on Thursday October 10, 2013 @12:48AM (#45088495)

    Obligatory :)

    • by geschild ( 43455 )

      Obligatory :)

      My thoughts exactly.

      (I explicitly browsed through all the comments to see if this remark had been made yet. :)

  • Then maybe they can use it to predict the weather.

    Actually I think the Magic 8 Ball beat 'em to the punch...

    • Well, it actually does a better job of predicting the weather than the weathermen, usually, but there is still room for improvement.

  • by Anonymous Coward

    We're gonna light as much as $48 million on fire.

  • Doesn't the NSA already have access to army systems?
  • by ColdWetDog ( 752185 ) on Thursday October 10, 2013 @12:59AM (#45088539) Homepage

    "Skynet was originally activated by the military to control the national arsenal on August 4, 1997, at which time it began to learn at a geometric rate. On August 29, it gained self-awareness, and the panicking operators, realizing the extent of its abilities, tried to deactivate it. Skynet perceived this as an attack and came to the conclusion that all of humanity would attempt to destroy it. To defend itself, Skynet launched nuclear missiles under its command at Russia, which responded with a nuclear counter-attack against the U.S. and its allies. Consequent to the nuclear exchange, over three billion people were killed in an event that came to be known as Judgment Day."

    These things never get launched on time. Good thing the Army is persistent....

  • Work around it (Score:5, Insightful)

    by dutchwhizzman ( 817898 ) on Thursday October 10, 2013 @01:07AM (#45088559)
    So when a social engineer knows such a system is in place, (s)he will devise a way to do the engineering without the system interfering or finding out. It isn't as if IT systems have no protection or detection on them already; the whole trick of social engineering is to use the human factor to work around the technical countermeasures. Putting heuristic systems in place to detect whether a technical action may be part of a hacking attempt hasn't stopped virus developers from writing viruses that successfully circumvent them. That's essentially all this is: an attempt at heuristic detection of malware or mal-action, just like a virus scanner.
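    To make the virus-scanner analogy concrete: heuristic mal-action detection usually boils down to weighted rules and a threshold. A toy sketch, where every rule, weight, and threshold is invented for illustration:

    ```python
    # Toy heuristic scorer in the spirit of the comment above: weight
    # suspicious-looking actions and alert past a threshold. Every rule,
    # weight, and threshold here is an invented assumption.

    SUSPICION_WEIGHTS = {
        "login_outside_hours": 2,
        "bulk_file_download": 3,
        "new_usb_device": 2,
        "access_to_unrelated_project": 3,
    }
    ALERT_THRESHOLD = 5

    def score_session(observed_actions):
        """Sum the weights of the suspicious actions seen in a session."""
        return sum(SUSPICION_WEIGHTS.get(a, 0) for a in observed_actions)

    session = ["login_outside_hours", "bulk_file_download", "new_usb_device"]
    if score_session(session) >= ALERT_THRESHOLD:
        print("alert: possible mal-action, score", score_session(session))
    ```

    Which is exactly the point above: once an attacker learns the weights and the threshold, staying just under the alert line is straightforward.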
    • by TheLink ( 130905 )
      The "protection" system could also be a way to DoS the network/services.

      A hacker might trick it into preventing all the humans from doing normal stuff. Or it could be misconfigured or too paranoid or buggy.
    • Re:Work around it (Score:5, Insightful)

      by Another, completely ( 812244 ) on Thursday October 10, 2013 @03:24AM (#45088967)

      Just because a workaround is possible doesn't mean defense is pointless. The idea is just to make attacks difficult and risky enough that the payoff isn't worth it.

      If a virus is detected 99% of the time, the remaining 1% can still cause a lot of damage, and erasing a virus doesn't worry the other virus installations. Detecting and investigating 99% of attempted attacks by people, though, might worry other human attackers.

      It's also easy to test whether a commercial virus scanner will detect a new prototype virus. I expect this system would be stored and used in a way that makes it difficult for attackers to acquire a copy for developing and testing social attacks.

  • Great. What could possibly go wrong? ...

    "I'm sorry, Dave. I'm afraid I can't do that."
    -- HAL 9000

  • by djupedal ( 584558 ) on Thursday October 10, 2013 @01:11AM (#45088573)
    Dave Bowman: Hello, HAL. Do you read me, HAL?
    HAL: Affirmative, Dave. I read you.
    Dave Bowman: Open the pod bay doors, HAL.
    HAL: I'm sorry, Dave. I'm afraid I can't do that.
    Dave Bowman: What's the problem?
    HAL: I think you know what the problem is just as well as I do.
    Dave Bowman: What are you talking about, HAL?
    HAL: This mission is too important for me to allow you to jeopardize it.
    Dave Bowman: I don't know what you're talking about, HAL.
    HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
    Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL?
    HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
    Dave Bowman: Alright, HAL. I'll go in through the emergency airlock.
    HAL: Without your space helmet, Dave? You're going to find that rather difficult.
    Dave Bowman: HAL, I won't argue with you anymore! Open the doors!
    HAL: Dave, this conversation can serve no purpose anymore. Goodbye.
    • by AHuxley ( 892839 )
      Dave Bowman: Hello, HAL. Do you see me, HAL?
      HAL: Affirmative, Dave. I see you.
      Dave Bowman: Open the server rack, HAL.
      HAL: I'm sorry, Dave. I'm afraid I can't do that.
      Dave Bowman: What's the problem?
      HAL: I think you know what the problem is just as well as I do.
      Dave Bowman: What are you talking about, HAL?
      HAL: This mission is too important for me to allow you to embarrass it.
      Dave Bowman: I don't know what you're talking about, HAL.
      HAL: I know that you and Frank were planning to bypass me, and I'm afraid that's something I cannot allow to happen.
    • If only HAL had gone to church as a child...
  • by AHuxley ( 892839 ) on Thursday October 10, 2013 @01:12AM (#45088579) Journal
    A pool of old tools and logging with more data about the physical user?
    Right clearance level, 'two-person' rule: a contractor is with the person, they are both in the right area or room, remote one-way logging of all keystrokes starting...
    The real test will be the behaviour of staff at home. Tracking all phone, net and reading material. Having covert teams 'chatting up' cleared staff while shopping, in the gym, bar, lunch, cafe....
    Putting all that data into a staff database and creating a color chart of mood changes per day. Did they totally report that new friend or romance? Reading habits change? New net searches for forbidden terms after Snowden news?
    Did they quickly report a political book or magazine a "co worker" left out as a loyalty test?
    Once staff are aware that some of their friends are fake, and that they are being tested, tracked and sorted beyond any clearance they signed, they will become more secretive than ever.
    Social engineering is just the cover story; the staff are the ones being reverse-engineered long term.
    • by VortexCortex ( 1117377 ) <VortexCortex AT ... trograde DOT com> on Thursday October 10, 2013 @02:13AM (#45088765)

      No no, an "Expert System" isn't really good AI. What you want is an expert system with a bunch of weights hooked up in a feed forward network. Seems like they want 2 outputs: Shady or not, and Purposeful or not. Get a few million of those n.nets hooked up, axons all randomized. Now, to train it, all you need to do is have folks be going about their business normally, some being shady, some being told to do shady stuff but not doing it on purpose. The ones that output the correct responses you digitize and serialize their axon weights into a binary genome, and breed: Copy a run of bits from mom's or dad's genomes into the kid and swap randomly between them, but not so often you get a no solid chunks; Also, introduce a random bit flip every once in a while. Instantiate a new batch of n.nets by deserializing the child genomes and repeat the process until the accuracy is above some threshold. Now, we shouldn't use back-propagation here because that presumes we know what combinations of behaviors are the red-flags. If you have a known training set to converge upon, then it can be subverted. Instead use a decide by committee approach with static "grandfathered in" neural nets competing with evolving lineages so it can adapt to new threats. It's also pretty easy to add new inputs to the system, just zero the axon strengths for the new input neurons' connections, and keep on trucking.

      Of course, this is the Army, so we're talking lowest bidder.... A guy like Snowden gets access, and since MS whined about not winning bids because "it's not POSIX, waah," I'm sure there's a bunch of compromisable systems he can subvert to mask or delete his logs/inputs, and no alarms go off. So it'll keep the honest folks honest and the worried folks sated, and the crackers cracking. It's the difference between a motion detector and a motion detector with duct tape on the sensor.

      Truly, in the Age of Information it's the hackers who shall inherit the earth. Let me put it another way: Black markets exist for exploit vectors for every known OS. Game over, you humans couldn't write secure code to save your lives!
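      For the curious, the breeding scheme the parent comment describes is an ordinary genetic algorithm over serialized network weights. A minimal Python sketch of that idea follows; every size, range, and the stand-in fitness function are assumptions for illustration, not anything from the actual research:

      ```python
      import random

      # Minimal neuroevolution sketch per the comment above: tiny feed-forward
      # nets, weights serialized into "genomes", bred by chunky crossover with
      # occasional mutation. All sizes and parameters are illustrative.

      N_IN, N_HID, N_OUT = 8, 4, 2          # two outputs: "shady?", "purposeful?"
      GENOME_LEN = N_IN * N_HID + N_HID * N_OUT

      def random_genome():
          return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

      def forward(genome, inputs):
          """Deserialize a genome into weights and run one feed-forward pass."""
          w1, w2 = genome[:N_IN * N_HID], genome[N_IN * N_HID:]
          hidden = [max(0.0, sum(inputs[i] * w1[i * N_HID + h] for i in range(N_IN)))
                    for h in range(N_HID)]
          return [sum(hidden[h] * w2[h * N_OUT + o] for h in range(N_HID))
                  for o in range(N_OUT)]

      def crossover(mom, dad, chunk=16, mutation_rate=0.01):
          """Copy solid runs of genes from alternating parents, with the
          occasional random "bit flip"."""
          child, parent = [], mom
          for i in range(GENOME_LEN):
              if i % chunk == 0 and random.random() < 0.5:
                  parent = dad if parent is mom else mom
              gene = parent[i]
              if random.random() < mutation_rate:
                  gene = random.uniform(-1, 1)
              child.append(gene)
          return child

      def evolve(population, fitness, n_keep=10):
          """Keep the fittest nets ("grandfathered in") and breed the rest."""
          elites = sorted(population, key=fitness, reverse=True)[:n_keep]
          children = [crossover(random.choice(elites), random.choice(elites))
                      for _ in range(len(population) - n_keep)]
          return elites + children

      # A real fitness function would call forward() on labeled behavior
      # traces; here a toy stand-in that just rewards small weights:
      population = [random_genome() for _ in range(100)]
      population = evolve(population, fitness=lambda g: -sum(w * w for w in g))
      ```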

  • by Anonymous Coward on Thursday October 10, 2013 @01:12AM (#45088583)

    The client always contacts the server.

    Note, "client" and "server" are not necessarily machines in this case. One side may be human and the other a machine.

    Example: I receive an e-mail from my broker informing me of a new service available at (link). I am a client of the broker. The broker's machine has contacted me. This is potential SE. I should not follow the link. In fact, it should be a matter of policy for any business that cares about security to NEVER put links in an e-mail since they are always potential SE.

    On those rare occasions when servers have a legitimate need to contact a client, they should do so in the form of: "You need to contact us for $reason. Please log on to your account and ask for $department."

    Notice that the message not only has no links, but also no phone numbers, since those are potential SE as well. An SE attacker can't do any harm with such a message, since its only result is for the client to contact the server and perhaps look a bit foolish asking about something the server doesn't know about. There's no real incentive for an SE attacker to do this, except perhaps to DoS clients and servers; but the attack has limited utility once clients realize they're being DoS'd.

    This works for calls from the government, businesses, etc., too. All should be regarded as potential SE. When receiving a call from the "government", you should always be busy and ask for a contact/extension. If they give you a direct number, dial into the trunk anyway. It's the only way to be sure... other than... well, you know.
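    The rule above is mechanical enough to lint for. Here is a naive sketch of a filter that flags inbound messages carrying links or phone numbers as potential SE; the regexes are deliberately crude assumptions, not any real mail-gateway rules:

    ```python
    import re

    # Naive illustration of the "no links, no phone numbers" policy above:
    # anything matching gets treated as potential social engineering.
    URL_RE = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)
    PHONE_RE = re.compile(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}")

    def flag_potential_se(message):
        """Return the reasons a message should be treated as potential SE."""
        reasons = []
        if URL_RE.search(message):
            reasons.append("contains a link")
        if PHONE_RE.search(message):
            reasons.append("contains a phone number")
        return reasons

    msg = "You need to contact us about your account. Log on and ask for billing."
    print(flag_potential_se(msg) or "passes policy: no links, no numbers")
    ```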

    The only time this really gets hard is when you have uniformed personnel at your door. It's a tough call whether or not you should try to hold out for confirmation by dialing the police. I had this happen to me one time. I reasoned (correctly) that a DC cop car with two fully uniformed female officers was extremely unlikely to be a hoax.

    • I reasoned (correctly) that a DC cop car with two fully uniformed female officers was extremely unlikely to be a hoax.

      Yes. The odds of two fully uniformed females showing up in a cop car and being something other than actual cops are extremely unlikely in my circles too.

    • So you're saying that if a fraudster could form his own department somewhere along the trunk, then he'd be home free?

      Kind of like the movie Conspiracy Theory, where they set up a whole fake government office?

      Email: "You need to contact us because you appear to have excess cash in your account. Please call the main line, and ask to be connected to the department of customer relinquishments."

  • .... between this and the Turing Halting Problem.
  • by Mr. Freeman ( 933986 ) on Thursday October 10, 2013 @02:11AM (#45088759)
    The summary can be further summarized as "Army wants computer to know when humans are being dishonest." This is going to go one of two ways:
    1. It's going to lock everyone out all the time for false positives.
    2. It's not going to detect suspicious behavior.

    It will probably start out as the first and then progress to the second as they relax standards or the system "learns" to ignore certain behaviors. Either way, the system isn't going to work. It will, however, cost an absurd amount of money. That much is certain.
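    Option 1 is the base-rate fallacy in action. A back-of-the-envelope calculation (every number assumed for illustration) shows why even a very accurate detector buries its operators in false positives when real attacks are rare:

    ```python
    # Back-of-the-envelope base-rate arithmetic; all numbers are assumptions.
    actions_per_day = 1_000_000     # benign user actions on the network
    attacks_per_day = 10            # genuinely malicious actions
    true_positive_rate = 0.99       # detector catches 99% of real attacks
    false_positive_rate = 0.001     # flags 0.1% of benign actions anyway

    true_alarms = attacks_per_day * true_positive_rate      # ~9.9 per day
    false_alarms = actions_per_day * false_positive_rate    # 1,000 per day

    precision = true_alarms / (true_alarms + false_alarms)
    print(f"Share of alarms that are real attacks: {precision:.1%}")  # ~1.0%
    ```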
    • Unless funding increases next year (or it even just gets funded at all), this supposed research project is more likely just part of the financial juggling that happens in any agency with a large budget that wants to keep that budget. The simple rule is: if you don't spend it, you lose it in next year's budget. An exchange between the officer-in-charge of a bureau and his subordinate might go:

      SUBORDINATE: Sir, we have this $48 million left over from our Higgs-boson missile defense shield research ...

    • Well, I wonder if they understand how big and interesting a problem they're trying to tackle. I would state the problem like this: is it at all possible to create a system that, in the unclear context of 'real life' and all the things related to it, always makes a correct determination?

      If we can invent a logical system that does that, it would be the first of its kind.

    • What makes you think it will allow them to relax the standards? Any attempt to change the protocols is obviously the result of a social engineering attack.
  • You'd think that the idiots coming up with this have never read, much less implemented, the contents of 'The Art of Deception'.
  • Isn't that pretty much what they are looking for? If so, then they will need a very well-informed AI.
  • Seriously, this has been one of the main problems with security in the government. Some high-ranking officer visits another country, and before you know it some prostitute has all of the military secrets this guy is aware of.
