IDS platform uses expert-led reinforcement of learned behaviour and decision-making
Lateral thinking may still be a valuable weapon in the cyber-security battle, judging from new developments at Binghamton University in the US to build a secure method of pushing data to mobile devices, and an alternative approach patented by IBM for pushing secure notifications to mobile computing devices.
The IBM Labs idea centres on a cloud-based data facility that securely pushes data to mobiles across otherwise in-the-clear data channels. The cloud resource then auto-encrypts data that is suitably marked, before pushing it to the mobile device.
Additional layers of security are provided by a cloud-to-mobile authorisation process.
Using applications that can encrypt data notifications, the cloud-linked app assigns each notification a unique message identifier in the cloud, which is securely transmitted to the mobile device via a third-party service provider. Once the end-user's device authorises the message, the recipient can pull down and access the encrypted message content from the cloud.
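That flow can be sketched in a few lines of Python. The names below (CloudStore, push, pull) are hypothetical stand-ins rather than IBM's patented design, and the key handling is simplified to a single server-held symmetric key per message.

```python
# Minimal sketch of the push flow described above. CloudStore, push and pull
# are hypothetical names, not IBM's API; key handling is deliberately simplified.
import uuid
from cryptography.fernet import Fernet  # pip install cryptography


class CloudStore:
    """Stands in for the cloud-based data facility."""

    def __init__(self):
        self._ciphertexts = {}   # message identifier -> encrypted payload
        self._keys = {}          # message identifier -> symmetric key

    def push(self, payload: bytes) -> str:
        """Encrypt a marked-sensitive payload and file it under a unique message identifier."""
        key = Fernet.generate_key()
        message_id = str(uuid.uuid4())
        self._ciphertexts[message_id] = Fernet(key).encrypt(payload)
        self._keys[message_id] = key
        return message_id  # only this opaque identifier travels over the in-the-clear push channel

    def pull(self, message_id: str, device_authorised: bool) -> bytes:
        """Release and decrypt the stored content only once the device has authorised the message."""
        if not device_authorised:
            raise PermissionError("device has not authorised this message")
        return Fernet(self._keys[message_id]).decrypt(self._ciphertexts[message_id])


cloud = CloudStore()
msg_id = cloud.push(b"confidential account update")
# ... msg_id is delivered to the handset via a third-party push service ...
print(cloud.pull(msg_id, device_authorised=True))
```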
"This patented invention will enable developers and service providers to design and build applications that ensure sensitive or personal information is not inadvertently exposed across mobile networks," said Benjamin Fletcher, the technology's inventor and a software engineering researcher with IBM Labs.
"Regardless of the nature of data being pushed to or from a mobile device, it should never be exposed to third-parties since they cannot always guarantee security and confidentiality to customers," he explained.
Over at Binghamton University, New York, a team of students led by Patricia Moat and Zachary Birnbaum has secured US Air Force Office of Scientific Research funding to develop a network intrusion detection system (IDS) that uses object access graphs, a type of heuristic analysis, to spot unusual behaviour on the IT resource.
In the analysis process, system calls gathered during normal network operation are converted into graph components and used to build the IDS normalcy profile.
By analysing the profiles, and comparing them against the profiles seen in previously detected attacks, the research team claims to have developed a powerful real-time visualisation system that supports 'expert-led reinforcement of learned behaviour and decision making'.
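As a rough, hypothetical illustration of the underlying idea, rather than the Binghamton team's actual code, the sketch below treats consecutive system calls observed during normal operation as edges of a profile graph and scores a new trace by the proportion of transitions the profile has never seen.

```python
# Illustrative only: a normalcy profile built from system-call transitions,
# standing in for the object access graphs the Binghamton team describes.
from typing import Iterable, List, Set, Tuple

Edge = Tuple[str, str]


def build_profile(call_traces: Iterable[List[str]]) -> Set[Edge]:
    """Collect every consecutive system-call pair seen during normal operation."""
    edges: Set[Edge] = set()
    for trace in call_traces:
        edges.update(zip(trace, trace[1:]))
    return edges


def anomaly_score(trace: List[str], profile: Set[Edge]) -> float:
    """Fraction of transitions in a new trace that the normalcy profile has never seen."""
    transitions = list(zip(trace, trace[1:]))
    if not transitions:
        return 0.0
    unseen = sum(1 for edge in transitions if edge not in profile)
    return unseen / len(transitions)


# Train on benign traces, then score a suspicious one.
profile = build_profile([
    ["open", "read", "close"],
    ["open", "write", "close"],
])
print(anomaly_score(["open", "read", "exec", "connect"], profile))  # ~0.67
```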
The project already allows the IDS to adapt its profiles in real time. Whilst heuristic analysis is nothing new in the world of security, both developments, which are a long way from commercial deployment, suggest that taking a different approach to security can create methodologies and systems that would otherwise have been overlooked.
According to Professor John Walker, a Visiting Professor with Nottingham-Trent University's School of Science and Technology, the issue of lateral thinking on the security strategy front is something he has been tracking - and using - since the 1990s. "It's not simply a case of analysing Big Data, but more about viewing the security threats against a given IT resource in a different way," he said.
"If you look at the way `Neo,' in the Matrix film observes data in the movies, you'll see he views a representation of everything as an alphanumeric stream. The analogy here isn't just about analysing the big data, but more about big thinking," he added.
This approach, says Professor Walker, is about looking at the 'Big Security Picture' as a whole. The security data, he explains, is still the same, but this view allows the onlooker to ignore any components they feel are not relevant. "This is all about the three Bs in IT: big data, big thinking and the big picture," he explained.
Fellow visiting professor Peter Sommer, of De Montfort University, said that the Binghamton research project into intrusion detection may be rather less novel than is being claimed.
"In essence they are using a technique called behavioural heuristics in which you try to describe particular characteristics of pre-attack events. Some researchers have tried to improve on this by using artificial intelligence techniques to identify `normal' behaviour in relation to a computer and by exception regarding everything else as `abnormal' as thus requiring attention," he said.
"The academic intrusion detection literature was full of this over 10 years ago. The challenge is the number of false positives - when the system gives you an alert which turns out to be unjustified - as well as the associated false negatives - when your system fails to recognise an attack," he added.
Sommer, who is also a data forensics specialist, went on to say that Moat and her team should perhaps be looking at the research papers produced by the symposia on Research in Attacks, Intrusions and Defences.